Dataset schema (string columns list the character-length range; float64 columns list the value range):

Column       Type      Min   Max
Query Text   string    10    59.9k
Ranking 1    string    10    4.53k
Ranking 2    string    10    50.9k
Ranking 3    string    10    6.78k
Ranking 4    string    10    59.9k
Ranking 5    string    10    6.78k
Ranking 6    string    10    59.9k
Ranking 7    string    10    59.9k
Ranking 8    string    10    6.78k
Ranking 9    string    10    59.9k
Ranking 10   string    10    50.9k
Ranking 11   string    13    6.78k
Ranking 12   string    14    50.9k
Ranking 13   string    24    2.74k
score_0      float64   1     1.25
score_1      float64   0     0.25
score_2      float64   0     0.25
score_3      float64   0     0.24
score_4      float64   0     0.24
score_5      float64   0     0.24
score_6      float64   0     0.21
score_7      float64   0     0.07
score_8      float64   0     0.03
score_9      float64   0     0.01
score_10     float64   0     0
score_11     float64   0     0
score_12     float64   0     0
score_13     float64   0     0
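As a minimal sketch of how rows with this schema might be consumed, the snippet below loads the dump and pairs each Ranking column with a score column. The file name rankings.parquet is hypothetical, and the assumption that score_i accompanies the i-th ranked document (with score_0 describing the query row itself) is inferred from the value ranges above, not documented in the dump.

import pandas as pd

# Hypothetical path; the dump could equally be CSV (pd.read_csv) or JSON lines.
df = pd.read_parquet("rankings.parquet")

ranking_cols = [f"Ranking {i}" for i in range(1, 14)]   # Ranking 1 .. Ranking 13
score_cols = [f"score_{i}" for i in range(1, 14)]       # assumed to match the rankings

for _, row in df.head(3).iterrows():
    query = row["Query Text"]
    # Pair each ranked abstract with its (assumed) score, keeping the stored order.
    ranked = list(zip(row[ranking_cols], row[score_cols]))
    print(query[:60], "->", [round(float(s), 6) for _, s in ranked[:3]])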
Overview of the Scalable Video Coding Extension of the H.264/AVC Standard With the introduction of the H.264/AVC video coding standard, significant improvements have recently been demonstrated in video compression capability. The Joint Video Team of the ITU-T VCEG and the ISO/IEC MPEG has now also standardized a Scalable Video Coding (SVC) extension of the H.264/AVC standard. SVC enables the transmission and decoding of partial bit streams to provide video services with lower temporal or spatial resolutions or reduced fidelity while retaining a reconstruction quality that is high relative to the rate of the partial bit streams. Hence, SVC provides functionalities such as graceful degradation in lossy transmission environments as well as bit rate, format, and power adaptation. These functionalities provide enhancements to transmission and storage applications. SVC has achieved significant improvements in coding efficiency with an increased degree of supported scalability relative to the scalable profiles of prior video coding standards. This paper provides an overview of the basic concepts for extending H.264/AVC towards SVC. Moreover, the basic tools for providing temporal, spatial, and quality scalability are described in detail and experimentally analyzed regarding their efficiency and complexity.
The Effects of Priority Levels and Buffering on the Statistical Multiplexing of Single-Layer H.264/AVC and SVC Encoded Video Streams H.264/Advanced Video Coding (AVC) employs classical bi-directional encoded (B) frames that depend only on intra-coded (I) and predictive encoded (P) frames. In contrast, H.264 Scalable Video Coding (SVC) employs hierarchical B frames that depend on other B frames. A fundamental question is how many priority levels single-layer H.264 video encodings require when the encoded frames are statistically multiplexed in transport networks. We conduct extensive simulation experiments with a modular statistical multiplexing structure to uncover the impact of priority levels for a wide range of multiplexing policies. For the bufferless statistical multiplexing of both H.264/AVC and SVC, we find that prioritizing the frames according to the number of dependent frames can increase the number of supported streams by up to approximately 8%. In contrast, for buffered statistical multiplexing with a relatively small buffer size, frame prioritization generally does not increase the number of supported streams.
On quality of experience of scalable video adaptation. In this paper, we study the quality of experience (QoE) issues in scalable video coding (SVC) for its adaptation in video communications. A QoE assessment database is developed according to SVC scalabilities. Based on the subjective evaluation results, we derive the optimal scalability adaptation track for the individual video and further summarize common scalability adaptation tracks for videos according to their spatial information (SI) and temporal information (TI). Based on the summarized adaptation tracks, we conclude some general guidelines for the effective SVC video adaptation. A rate-QoE model for SVC adaptation is derived accordingly. Experimental results show that the proposed QoE-aware scalability adaptation scheme significantly outperforms the conventional adaptation schemes in terms of QoE. Moreover, the proposed QoE model reflects the rate and QoE relationship in SVC adaptation and thus, provides a useful methodology to estimate video QoE which is important for QoE-aware scalable video streaming. (C) 2013 Elsevier Inc. All rights reserved.
A playback length changeable 3D data segmentation algorithm for scalable 3D video P2P streaming system Scalable 3D video P2P streaming systems can supply diverse 3D experiences for heterogeneous clients with high efficiency. The data characteristics of scalable 3D video make the P2P streaming efficiency depend more heavily on the data segmentation algorithm. However, traditional data segmentation algorithms are not well suited to scalable 3D video P2P streaming systems. In this paper, we propose a Playback Length Changeable 3D video Segmentation (PLC3DS) algorithm. It considers the particular source-data characteristics of scalable 3D video, and provides different error resilience strengths to video and depth as well as to layers with different importance levels in the transmission. The simulation results show that the proposed PLC3DS algorithm can increase the successful delivery rate of chunks in more important layers, and further improve the 3D experience of the client. Moreover, it improves the network utilization ratio remarkably.
QoE-Based SVC Layer Dropping in LTE Networks Using Content-Aware Layer Priorities The increasing popularity of mobile video streaming applications has led to a high volume of video traffic in mobile networks. As the base station, for instance, the eNB in LTE networks, has limited physical resources, it can be overloaded by this traffic. This problem can be addressed by using Scalable Video Coding (SVC), which allows the eNB to drop layers of the video streams to dynamically adapt the bitrate. The impact of bitrate adaptation on the Quality of Experience (QoE) for the users depends on the content characteristics of videos. As the current mobile network architectures do not support the eNB in obtaining video content information, QoE optimization schemes with explicit signaling of content information have been proposed. These schemes, however, require the eNB or a specific optimization module to process the video content on the fly in order to extract the required information. This increases the computation and signaling overhead significantly, raising the OPEX for mobile operators. To address this issue, in this article, a content-aware (CA) priority marking and layer dropping scheme is proposed. The CA priority indicates a transmission order for the layers of all transmitted videos across all users, resulting from a comparison of their utility versus rate characteristics. The CA priority values can be determined at the P-GW on the fly, allowing mobile operators to control the priority marking process. Alternatively, they can be determined offline at the video servers, avoiding real-time computation in the core network. The eNB can perform content-aware SVC layer dropping using only the priority values. No additional content processing is required. The proposed scheme is lightweight both in terms of architecture and computation. The improvement in QoE is substantial and very close to the performance obtained with the computation and signaling-intensive QoE optimization schemes.
Bandwidth-aware multiple multicast tree formation for P2P scalable video streaming using hierarchical clusters Peer-to-peer (P2P) video streaming is a promising method for multimedia distribution over the Internet, yet many problems remain to be solved, such as providing the best quality of service to each peer in proportion to its available resources, low delay, and fault tolerance. In this paper, we propose a new bandwidth-aware multiple multicast tree formation procedure built on top of a hierarchical cluster based P2P overlay architecture for scalable video (SVC) streaming. The tree formation procedure considers the number of sources, the SVC layer rates available at each source, as well as the delay and available bandwidth over links, in an attempt to maximize the quality of received video at each peer. Simulations are performed on NS2 with 500 nodes to demonstrate that the overall performance of the system, in terms of average received video quality of all peers, is significantly better if peers with higher available bandwidth are placed higher up in the trees and peers with lower bandwidth are near the leaves.
Peer-to-Peer Live Multicast: A Video Perspective Peer-to-peer multicast is promising for large-scale streaming video distribution over the Internet. Viewers contribute their resources to a peer-to-peer overlay network to act as relays for the media streams, and no dedicated infrastructure is required. As packets are transmitted over long, unreliable multipeer transmission paths, it is particularly challenging to achieve consistently high video q...
Quality assessment of asymmetric stereo video coding It is well known that the human visual system can perceive high frequencies in 3D, even if that information is present in only one of the views. Therefore, the best 3D stereo quality may be achieved by asymmetric coding where the reference (right) and auxiliary (left) views are coded at unequal PSNR. However, the questions of what should be the level of this asymmetry and whether asymmetry should be achieved by spatial resolution reduction or SNR (quality) reduction are open issues. Extensive subjective tests indicate that when the reference view is encoded at sufficiently high quality, the auxiliary view can be encoded above a low-quality threshold without a noticeable degradation on the perceived stereo video quality. This low-quality threshold may depend on the 3D display; e.g., it is about 31 dB for a parallax barrier display and 33 dB for a polarized projection display. Subjective tests show that, above this PSNR threshold value, users prefer SNR reduction over spatial resolution reduction on both parallax barrier and polarized projection displays. It is also observed that, if the auxiliary view is encoded below this threshold value, symmetric coding starts to perform better than asymmetric coding in terms of perceived 3D video quality.
3D display dependent quality evaluation and rate allocation using scalable video coding It is well known that the human visual system can perceive high frequency content in 3D, even if that information is present in only one of the views. Then, the best 3D perception quality may be achieved by allocating the rates of the reference (right) and auxiliary (left) views asymmetrically. However, the question of whether the rate reduction for the auxiliary view should be achieved by spatial resolution reduction (coding a downsampled version of the video followed by upsampling after decoding) or quality (QP) reduction is an open issue. This paper shows that which approach should be preferred depends on the 3D display technology used at the receiver. Subjective tests indicate that users prefer lower quality (larger QP) coding of the auxiliary view over lower resolution coding if a "full spatial resolution" 3D display technology (such as polarized projection) is employed. On the other hand, users prefer lower resolution coding of the auxiliary view over lower quality coding if a "reduced spatial resolution" 3D display technology (such as parallax barrier - autostereoscopic) is used. Therefore, we conclude that for 3D IPTV services, while receiving full quality/resolution reference view, users should subscribe to differently scaled versions of the auxiliary view depending on their 3D display technology. We also propose an objective 3D video quality measure that takes the 3D display technology into account.
Toward total quality of experience: A QoE model in a communication ecosystem. In recent years, the quality of experience notion has become a major research theme within the telecommunications community. QoE is an assessment of the human experience when interacting with technology and business entities in a particular context. A communication ecosystem encompasses various domains such as technical aspects, business models, human behavior, and context. For each aspect of a co...
Variations, margins, and statistics Design margining is used to account for design uncertainties in the measurement of performance, and thereby ensures that actual manufactured parts will operate within predicted bounds. As process and environmental variations become increasingly severe and complex in nanometer process technology, design margining overheads have increased correspondingly. This paper describes the types of process and environmental variations, their impact on performance, and the traditional design margining process used to account for these uncertainties. We consider statistical static timing analysis (SSTA) in the context of its ability to reduce timing margins through more accurate modeling of variations, and quantify potential benefits of SSTA for setup and hold time margin reduction. Combining SSTA with complementary techniques for systematic variation-aware and voltage-variation-aware timing provides meaningful design margin reduction. We introduce the concept of activity-based operating condition as a supporting construct for variation-aware STA flows.
Perturbation of fuzzy reasoning We propose the concepts of maximum and average perturbations of fuzzy sets and estimate maximum and average perturbation parameters for various methods of fuzzy reasoning
Nonparametric sparsity and regularization In this work we are interested in the problems of supervised learning and variable selection when the input-output dependence is described by a nonlinear function depending on a few variables. Our goal is to consider a sparse nonparametric model, hence avoiding linear or additive models. The key idea is to measure the importance of each variable in the model by making use of partial derivatives. Based on this intuition we propose a new notion of nonparametric sparsity and a corresponding least squares regularization scheme. Using concepts and results from the theory of reproducing kernel Hilbert spaces and proximal methods, we show that the proposed learning algorithm corresponds to a minimization problem which can be provably solved by an iterative procedure. The consistency properties of the obtained estimator are studied both in terms of prediction and selection performance. An extensive empirical analysis shows that the proposed method performs favorably with respect to the state-of-the-art methods.
Tracking discontinuities in hyperbolic conservation laws with spectral accuracy It is well known that the spectral solutions of conservation laws have the attractive distinguishing property of infinite-order convergence (also called spectral accuracy) when they are smooth (e.g., [C. Canuto, M.Y. Hussaini, A. Quarteroni, T.A. Zang, Spectral Methods for Fluid Dynamics, Springer-Verlag, Heidelberg, 1988; J.P. Boyd, Chebyshev and Fourier Spectral Methods, second ed., Dover, New York, 2001; C. Canuto, M.Y. Hussaini, A. Quarteroni, T.A. Zang, Spectral Methods: Fundamentals in Single Domains, Springer-Verlag, Berlin Heidelberg, 2006]). If a discontinuity or a shock is present in the solution, this advantage is lost. There have been attempts to recover exponential convergence in such cases with rather limited success. The aim of this paper is to propose a discontinuous spectral element method coupled with a level set procedure, which tracks discontinuities in the solution of nonlinear hyperbolic conservation laws with spectral convergence in space. Spectral convergence is demonstrated in the case of the inviscid Burgers equation and the one-dimensional Euler equations.
score_0 to score_13: 1.004191, 0.007536, 0.005797, 0.005797, 0.003478, 0.003274, 0.002544, 0.001257, 0.000549, 0.000026, 0, 0, 0, 0
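Purely as an illustration of how such a score row could be used, the sketch below computes a normalized discounted cumulative gain from the thirteen document scores above; treating score_1 to score_13 as graded relevance values for the ranked abstracts is an assumption about this dump, not something it documents.

import math

def dcg(scores):
    # Discounted cumulative gain with the usual log2(rank + 1) discount,
    # where rank is 1-based.
    return sum(s / math.log2(rank + 1) for rank, s in enumerate(scores, start=1))

# score_1 .. score_13 of the record above (score_0 is omitted because it
# appears to describe the query text itself).
row_scores = [0.007536, 0.005797, 0.005797, 0.003478, 0.003274, 0.002544,
              0.001257, 0.000549, 0.000026, 0.0, 0.0, 0.0, 0.0]

ideal = sorted(row_scores, reverse=True)
ndcg = dcg(row_scores) / dcg(ideal) if dcg(ideal) > 0 else 0.0
print(round(ndcg, 4))  # 1.0 here: the stored order is already score-descending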
Karhunen-Loève expansion revisited for vector-valued random fields: Scaling, errors and optimal basis. Due to scaling effects, when dealing with vector-valued random fields, the classical Karhunen-Loève expansion, which is optimal with respect to the total mean square error, tends to favor the components of the random field that have the highest signal energy. When these random fields are to be used in mechanical systems, this phenomenon can introduce undesired biases into the results. This paper therefore presents an adaptation of the Karhunen-Loève expansion that allows us to control these biases and to minimize them. This original decomposition is first analyzed from a theoretical point of view, and is then illustrated on a numerical example.
A multilevel finite element method for Fredholm integral eigenvalue problems In this work, we propose a multigrid finite element (MFE) method for solving Fredholm integral eigenvalue problems. The main motivation for such studies is to compute the Karhunen-Loève expansions of random fields, which play an important role in uncertainty quantification applications. In our MFE framework, solving the eigenvalue problem is converted into a series of integral iterations together with an eigenvalue solve on the coarsest mesh. Then, any existing efficient integration scheme can be used for the associated integration process. The error estimates are provided, and the computational complexity is analyzed. It is noticed that the total computational work of our method is comparable with a single integration step on the finest mesh. Several numerical experiments are presented to validate the efficiency of the proposed numerical method.
Polynomial Chaos Expansion of a Multimodal Random Vector A methodology and algorithms are proposed for constructing the polynomial chaos expansion (PCE) of multimodal random vectors. An algorithm is developed for generating independent realizations of any multimodal multivariate probability measure that is constructed from a set of independent realizations using the Gaussian kernel-density estimation method. The PCE is then performed with respect to this multimodal probability measure, for which the realizations of the polynomial chaos are computed with an adapted algorithm. Finally, a numerical application is presented for analyzing the convergence properties.
Basis adaptation in homogeneous chaos spaces We present a new method for the characterization of subspaces associated with low-dimensional quantities of interest (QoI). The probability density function of these QoI is found to be concentrated around one-dimensional subspaces for which we develop projection operators. Our approach builds on the properties of Gaussian Hilbert spaces and associated tensor product spaces.
Application of polynomial chaos in stability and control The polynomial chaos of Wiener provides a framework for the statistical analysis of dynamical systems, with computational cost far superior to Monte Carlo simulations. It is a useful tool for control systems analysis because it allows probabilistic description of the effects of uncertainty, especially in systems having nonlinearities and where other techniques, such as Lyapunov's method, may fail. We show that stability of a system can be inferred from the evolution of modal amplitudes, covering nearly the full support of the uncertain parameters with a finite series. By casting uncertain parameters as unknown gains, we show that the separation of stochastic from deterministic elements in the response points to fast iterative design methods for nonlinear control.
Identification of Bayesian posteriors for coefficients of chaos expansions This article is concerned with the identification of probabilistic characterizations of random variables and fields from experimental data. The data used for the identification consist of measurements of several realizations of the uncertain quantities that must be characterized. The random variables and fields are approximated by a polynomial chaos expansion, and the coefficients of this expansion are viewed as unknown parameters to be identified. It is shown how the Bayesian paradigm can be applied to formulate and solve the inverse problem. The estimated polynomial chaos coefficients are hereby themselves characterized as random variables whose probability density function is the Bayesian posterior. This makes it possible to quantify the impact of missing experimental information on the accuracy of the identified coefficients, as well as on subsequent predictions. An illustration in stochastic aeroelastic stability analysis is provided to demonstrate the proposed methodology.
Spectral Polynomial Chaos Solutions of the Stochastic Advection Equation We present a new algorithm based on Wiener–Hermite functionals combined with Fourier collocation to solve the advection equation with stochastic transport velocity. We develop different strategies for representing the stochastic input, and demonstrate that this approach is orders of magnitude more efficient than Monte Carlo simulations for comparable accuracy.
First-order incremental block-based statistical timing analysis Variability in digital integrated circuits makes timing verification an extremely challenging task. In this paper, a canonical first order delay model is proposed that takes into account both correlated and independent randomness. A novel linear-time block-based statistical timing algorithm is employed to propagate timing quantities like arrival times and required arrival times through the timing graph in this canonical form. At the end of the statistical timing, the sensitivities of all timing quantities to each of the sources of variation are available. Excessive sensitivities can then be targeted by manual or automatic optimization methods to improve the robustness of the design. This paper also reports the first incremental statistical timer in the literature which is suitable for use in the inner loop of physical synthesis or other optimization programs. The third novel contribution of this paper is the computation of local and global criticality probabilities. For a very small cost in CPU time, the probability of each edge or node of the timing graph being critical is computed. Numerical results are presented on industrial ASIC chips with over two million logic gates.
The optimality conditions for optimization problems with convex constraints and multiple fuzzy-valued objective functions The optimality conditions for multiobjective programming problems with fuzzy-valued objective functions are derived in this paper. The solution concepts for these kinds of problems will follow the concept of nondominated solution adopted in the multiobjective programming problems. In order to consider the differentiation of fuzzy-valued functions, we invoke the Hausdorff metric to define the distance between two fuzzy numbers and the Hukuhara difference to define the difference of two fuzzy numbers. Under these settings, the optimality conditions for obtaining the (strongly, weakly) Pareto optimal solutions are elicited naturally by introducing the Lagrange multipliers.
Randomized rounding: a technique for provably good algorithms and algorithmic proofs We study the relation between a class of 0–1 integer linear programs and their rational relaxations. We give a randomized algorithm for transforming an optimal solution of a relaxed problem into a provably good solution for the 0–1 problem. Our technique can be extended to provide bounds on the disparity between the rational and 0–1 optima for a given problem instance.
Compound Linguistic Scale.
• Compound Linguistic Scale (CLS) comprises the Compound Linguistic Variable (CLV), Fuzzy Normal Distribution (FND) and Deductive Rating Strategy (DRS).
• CLV can produce two-dimensional options, i.e., compound linguistic terms, to better reflect the raters' preferences.
• DRS is a double-step rating approach for a rater to choose a compound linguistic term among two-dimensional options.
• FND can efficiently produce a population of fuzzy numbers for a linguistic term set using only a few parameters.
• CLS, as a rating interface, can contribute to various application domains in engineering and the social sciences.
An approach based on Takagi-Sugeno Fuzzy Inference System applied to the operation planning of hydrothermal systems The operation planning in hydrothermal systems with great hydraulic participation, as it is the case of Brazilian system, seeks to determine an operation policy to specify how hydroelectric plants should be operated, in order to use the hydroelectric resources economically and reliably. This paper presents an application of Takagi-Sugeno Fuzzy Inference Systems to obtain an operation policy (PBFIS Policy Based on Fuzzy Inference Systems) that follows the principles of the optimized operation of reservoirs for electric power generation. PBFIS is obtained through the application of an optimization algorithm for the operation of hydroelectric plants. From this optimization the relationships between the stored energy of the system and the volume of the reservoir of each plant are extracted. These relationships are represented in the consequent parameters of the fuzzy linguistic rules. Thus, PBFIS is used to estimate the operative volume of each hydroelectric plant, based on the value of the energy stored in the system. In order to verify the effectiveness of PBFIS, a computer simulation model of the operation of hydroelectric plants was used so as to compare it with the operation policy in parallel; with the operation policy based on functional approximations; and also with the result obtained through the application of the optimization of individualized plants' operation. With the proposed methodology, we try to demonstrate the viability of PBFIS' obtainment and application, and with the obtained results, we intend to illustrate the effectiveness and the gains which came from it.
Properties of Atanassov's intuitionistic fuzzy relations and Atanassov's operators The goal of this paper is to consider properties of Atanassov's intuitionistic fuzzy relations which were introduced by Atanassov in 1986. Fuzzy set theory turned out to be a useful tool to describe situations in which the data are imprecise or vague. Atanassov's intuitionistic fuzzy set theory is a generalization of fuzzy set theory which was introduced by Zadeh in 1965. This paper is a continuation of examinations by Pękala [22] on the interval-valued fuzzy relations. We study standard properties of Atanassov's intuitionistic fuzzy relations in the context of Atanassov's operators.
Performance and Quality Evaluation of a Personalized Route Planning System Advanced personalization of database applications is a big challenge, in particular for distributed mobile environments. We present several new results from a prototype of a route planning system. We demonstrate how to combine qualitative and quantitative preferences gained from situational aspects and from personal user preferences. For performance studies we analyze the runtime efficiency of the SR-Combine algorithm used to evaluate top-k queries. By determining the cost-ratio of random to sorted accesses SR-Combine can automatically tune its performance within the given system architecture. Top-k queries are generated by mapping linguistic variables to numerical weightings. Moreover, we analyze the quality of the query results by several test series, systematically varying the mappings of the linguistic variables. We report interesting insights into this rather under-researched important topic. More investigations, incorporating also cognitive issues, need to be conducted in the future.
score_0 to score_13: 1.071111, 0.066667, 0.04, 0.014444, 0.003175, 0.000857, 0.00001, 0, 0, 0, 0, 0, 0, 0
Depth Reconstruction Filter and Down/Up Sampling for Depth Coding in 3-D Video A depth image represents three-dimensional (3-D) scene information and is commonly used for depth image-based rendering (DIBR) to support 3-D video and free-viewpoint video applications. The virtual view is generally rendered by the DIBR technique and its quality depends highly on the quality of depth image. Thus, efficient depth coding is crucial to realize the 3-D video system. In this letter, w...
Shape-adaptive wavelet encoding of depth maps We present a novel depth-map codec aimed at free-viewpoint 3DTV. The proposed codec relies on a shape-adaptive wavelet transform and an explicit representation of the locations of major depth edges. Unlike classical wavelet transforms, the shape-adaptive transform generates small wavelet coefficients along depth edges, which greatly reduces the data entropy. The wavelet transform is implemented by shape-adaptive lifting, which enables fast computations and perfect reconstruction. We also develop a novel rate-constrained edge detection algorithm, which integrates the idea of significance bitplanes into the Canny edge detector. Along with a simple chain code, it provides an efficient way to extract and encode edges. Experimental results on synthetic and real data confirm the effectiveness of the proposed algorithm, with PSNR gains of 5 dB and more over the Middlebury dataset.
View Synthesis for Advanced 3D Video Systems Interest in 3D video applications and systems is growing rapidly and the technology is maturing. It is expected that multiview autostereoscopic displays will play an important role in home user environments, since they support multiuser 3D sensation and motion parallax impression. The tremendous data rate cannot be handled efficiently by representation and coding formats such as MVC or MPEG-C Part 3. Multiview video plus depth (MVD) is a new format that efficiently supports such advanced 3DV systems, but this requires high-quality intermediate view synthesis. For this, a new approach is presented that separates unreliable image regions along depth discontinuities from reliable image regions, which are treated separately and fused to the final interpolated view. In contrast to previous layered approaches, our algorithm uses two boundary layers and one reliable layer, performs image-based 3D warping only, and was generically implemented, that is, does not necessarily rely on 3D graphics support. Furthermore, different hole-filling and filtering methods are added to provide high-quality intermediate views. As a result, high-quality intermediate views for an existing 9-view auto-stereoscopic display as well as other stereo- and multiscopic displays are presented, which prove the suitability of our approach for advanced 3DV systems.
H.264-Based depth map sequence coding using motion information of corresponding texture video Three-dimensional television systems using depth-image-based rendering techniques are attractive in recent years. In those systems, a monoscopic two-dimensional texture video and its associated depth map sequence are transmitted. In order to utilize transmission bandwidth and storage space efficiently, the depth map sequence should be compressed as well as the texture video. Among previous works for depth map sequence coding, H.264 has shown the best performance; however, it has some disadvantages of requiring long encoding time and high encoder cost. In this paper, we propose a new coding structure for depth map coding with H.264 so as to reduce encoding time significantly while maintaining high compression efficiency. Instead of estimating motion vectors directly in the depth map, we generate candidate motion modes by exploiting motion information of the corresponding texture video. Experimental results show that the proposed algorithm reduces the complexity to 60% of the previous scheme that encodes two sequences separately and coding performance is also improved up to 1dB at low bit rates.
View synthesis prediction for multiview video coding We propose a rate-distortion-optimized framework that incorporates view synthesis for improved prediction in multiview video coding. In the proposed scheme, auxiliary information, including depth data, is encoded and used at the decoder to generate the view synthesis prediction data. The proposed method employs optimal mode decision including view synthesis prediction, and sub-pixel reference matching to improve prediction accuracy of the view synthesis prediction. Novel variants of the skip and direct modes are also presented, which infer the depth and correction vector information from neighboring blocks in a synthesized reference picture to reduce the bits needed for the view synthesis prediction mode. We demonstrate two multiview video coding scenarios in which view synthesis prediction is employed. In the first scenario, the goal is to improve the coding efficiency of multiview video where block-based depths and correction vectors are encoded by CABAC in a lossless manner on a macroblock basis. A variable block-size depth/motion search algorithm is described. Experimental results demonstrate that view synthesis prediction does provide some coding gains when combined with disparity-compensated prediction. In the second scenario, the goal is to use view synthesis prediction for reducing rate overhead incurred by transmitting depth maps for improved support of 3DTV and free-viewpoint video applications. It is assumed that the complete depth map for each view is encoded separately from the multiview video and used at the receiver to generate intermediate views. We utilize this information for view synthesis prediction to improve overall coding efficiency. Experimental results show that the rate overhead incurred by coding depth maps of varying quality could be offset by utilizing the proposed view synthesis prediction techniques to reduce the bitrate required for coding multiview video.
3-D Video Representation Using Depth Maps Current 3-D video (3DV) technology is based on stereo systems. These systems use stereo video coding for pictures delivered by two input cameras. Typically, such stereo systems only reproduce these two camera views at the receiver and stereoscopic displays for multiple viewers require wearing special 3-D glasses. On the other hand, emerging autostereoscopic multiview displays emit a large number of views to enable 3-D viewing for multiple users without requiring 3-D glasses. For representing a large number of views, a multiview extension of stereo video coding is used, typically requiring a bit rate that is proportional to the number of views. However, since the quality improvement of multiview displays will be governed by an increase of emitted views, a format is needed that allows the generation of arbitrary numbers of views with the transmission bit rate being constant. Such a format is the combination of video signals and associated depth maps. The depth maps provide disparities associated with every sample of the video signal that can be used to render arbitrary numbers of additional views via view synthesis. This paper describes efficient coding methods for video and depth data. For the generation of views, synthesis methods are presented, which mitigate errors from depth estimation and coding.
A multi-stream adaptation framework for bandwidth management in 3D tele-immersion Tele-immersive environments will improve the state of collaboration among distributed participants. However, along with the promise a new set of challenges have emerged including the real-time acquisition, streaming and rendering of 3D scenes to convey a realistic sense of immersive spaces. Unlike 2D video conferencing, a 3D tele-immersive environment employs multiple 3D cameras to cover a much wider field of view, thus generating a very large volume of data that need to be carefully coordinated, organized, and synchronized for Internet transmission, rendering and display. This is a challenging task and a dynamic bandwidth management must be in place. To achieve this goal, we propose a multi-stream adaptation framework for bandwidth management in 3D tele-immersion. The adaptation framework relies on the hierarchy of mechanisms and services that exploits the semantic link of multiple 3D video streams in the tele-immersive environment. We implement a prototype of the framework that integrates semantic stream selection, content adaptation, and 3D data compression services with user preference. The experimental results have demonstrated that the framework shows a good quality of the resulting composite 3D rendered video in case of sufficient bandwidth, while it adapts individual 3D video streams in a coordinated and user-friendly fashion, and yields graceful quality degradation in case of low bandwidth availability.
Generic segment-wise DC for 3D-HEVC depth intra coding In 3D extension of HEVC (High Efficiency Video Coding), namely, 3D-HEVC, segment-wise DC coding (SDC) was adopted to more efficiently represent the depth residual for Intra coded depth blocks. Instead of coding pixel-wise residual as in HEVC, SDC codes one DC residual value for each segment of a Prediction Unit (PU) and skips transform and quantization. SDC was originally proposed for only a couple of modes, including the DC mode, Planar mode and depth modeling mode (DMM), which has an arbitrary straight line separation of a PU. This paper proposes a generic SDC method that applies to the conventional angular Intra modes. For each depth prediction unit coded with Intra prediction mode, the encoder can adaptively choose to code pixel-wise residual or segment-wise residual to achieve better compression efficiency. Experimental results show that the proposed method can reduce the total bit rate by about 1%, even though the depth views altogether consume a relatively low percentage of the total bit rate.
Network emulation in the VINT/NS simulator Employing an emulation capability in network simulation provides the ability for real-world traffic to interact with a simulation. The benefits of emulation include the ability to expose experimental algorithms and protocols to live traffic loads, and to test real-world protocol implementations against repeatable interference generated in simulation. This paper describes the design and implementation of the emulation facility in the NS simulator, a commonly used, publicly available network research simulator.
Probe Design for Compressive Sensing DNA Microarrays Compressive sensing microarrays (CSM) are DNA-based sensors that operate using the principle of compressive sensing (CS). In contrast to conventional DNA microarrays, in which each genetic sensor is designed to respond to a single target, in a CSM each sensor responds to a group of targets. We study the problem of designing CS probes that simultaneously account for both the constraints from group testing theory and the biochemistry of probe-target DNA hybridization. Our results show that, in order to achieve accurate hybridization profiling, consensus probe sequences are required to have sequence homology of at least 80% with all targets to be detected. Furthermore, experiments show that out-of-equilibrium datasets are usually as accurate as those obtained from equilibrium conditions. Consequently, one can use CSMs in applications for which only short hybridization times are allowed.
Fuzzy linguistic logic programming and its applications The paper introduces fuzzy linguistic logic programming, which is a combination of fuzzy logic programming, introduced by P. Vojtáš, and hedge algebras in order to facilitate the representation and reasoning on human knowledge expressed in natural languages. In fuzzy linguistic logic programming, truth values are linguistic ones, e.g., VeryTrue, VeryProbablyTrue and LittleFalse, taken from a hedge algebra of a linguistic truth variable, and linguistic hedges (modifiers) can be used as unary connectives in formulae. This is motivated by the fact that humans reason mostly in terms of linguistic terms rather than in terms of numbers, and linguistic hedges are often used in natural languages to express different levels of emphasis. The paper presents: (a) the language of fuzzy linguistic logic programming; (b) a declarative semantics in terms of Herbrand interpretations and models; (c) a procedural semantics which directly manipulates linguistic terms to compute a lower bound to the truth value of a query, and proves its soundness; (d) a fixpoint semantics of logic programs, and based on it, proves the completeness of the procedural semantics; (e) several applications of fuzzy linguistic logic programming; and (f) an idea of implementing a system to execute fuzzy linguistic logic programs.
Cross-Layer Design For Mobile Ad Hoc Networks Using Interval Type-2 Fuzzy Logic Systems In this paper, we introduce a new method for packet transmission delay analysis and prediction in mobile ad hoc networks. We apply a fuzzy logic system (FLS) to coordinate the physical layer and data link layer. We demonstrate that the type-2 fuzzy membership function (MF), i.e., the Gaussian MF with uncertain variance, is most appropriate to model BER and MAC layer service time. Two FLSs, a singleton type-1 FLS and an interval type-2 FLS, are designed to predict the packet transmission delay based on the BER and MAC layer service time. Simulation results show that the interval type-2 FLS performs much better than the type-1 FLS in transmission delay prediction. We use the forecasted transmission delay to adjust the transmission power, and it shows that the interval type-2 FLS performs much better than a type-1 FLS in terms of energy consumption, average delay and throughput. Besides, we obtain the performance bound based on the actual transmission delay.
A decision making model for the Taiwanese shipping logistics company in China to select the container distribution center location The purpose of this paper is to propose a decision making model for the Taiwanese shipping logistics company in China to select the best container distribution center location. The representation of the multiplication operation on fuzzy numbers is useful for decision makers to solve fuzzy multiple criteria decision making problems of container distribution center location selection. In the past, few papers discussed the representation of the multiplication operation on multiple fuzzy numbers. Thus, this paper first computes the representation of the multiplication operation on multiple fuzzy numbers. Based on this representation, the decision maker can quickly rank the alternative locations and then easily select the best one. Finally, the representation of the multiplication operation on multiple fuzzy numbers is applied to solve the fuzzy multiple criteria decision making problem of container distribution center location selection in China.
A hierarchy of subgraphs underlying a timing graph and its use in capturing topological correlation in SSTA This paper shows that a timing graph has a hierarchy of specially defined subgraphs, based on which we present a technique that captures topological correlation in arbitrary block-based statistical static timing analysis (SSTA). We interpret a timing graph as an algebraic expression made up of addition and maximum operators. We define the division operation on the expression and propose algorithms that modify factors in the expression without expansion. As a result, they produce an expression to derive the latest arrival time with better accuracy in SSTA. Existing techniques handling reconvergent fanouts usually use dependency lists, requiring quadratic space complexity. Instead, the proposed technique has linear space complexity by using a new directed acyclic graph search algorithm. Our results show that it outperforms an existing technique in speed and memory usage with comparable accuracy.
score_0 to score_13: 1.051733, 0.050474, 0.030317, 0.025313, 0.016872, 0.004688, 0.000289, 0.00008, 0.000008, 0, 0, 0, 0, 0
A fast hierarchical algorithm for 3-D capacitance extraction We present a new algorithm for computing the capacitance of three-dimensional perfect electrical conductors of complex structures. The new algorithm is significantly faster and uses much less memory than previous best algorithms, and is kernel independent. The new algorithm is based on a hierarchical algorithm for the n-body problem, and is an acceleration of the boundary-element method for solving the integral equation associated with the capacitance extraction problem. The algorithm first adaptively subdivides the conductor surfaces into panels according to an estimation of the potential coefficients and a user-supplied error band. The algorithm stores the potential coefficient matrix in a hierarchical data structure of size O(n), although the matrix is of size n² if expanded explicitly, where n is the number of panels. The hierarchical data structure allows us to multiply the coefficient matrix with any vector in O(n) time. Finally, we use a generalized minimal residual algorithm to solve m linear systems each of size n × n in O(mn) time, where m is the number of conductors. The new algorithm is implemented and the performance is compared with previous best algorithms. For the k × k bus example, our algorithm is 100 to 40 times faster than FastCap, and uses 1/100 to 1/60 of the memory used by FastCap. The results computed by the new algorithm are within 2.7% of those computed by FastCap.
Impedance extraction for 3-D structures with multiple dielectrics using preconditioned boundary element method In this paper, we present the first BEM impedance extraction algorithm for multiple dielectrics. The effect of multiple dielectrics is significant and efficient modeling is challenging. However, previous BEM algorithms, including FastImp and FastPep, assume a uniform dielectric, thus causing considerable errors. The new algorithm introduces a circuit formulation which makes it possible to utilize either the multilayer Green's function or the equivalent charge method to extract impedance in multiple dielectrics. The novelty of the formulation is the reduction of the number of unknowns and the application of the hierarchical data structure. The hierarchical data structure permits efficient sparsification transformation and preconditioners to accelerate the linear equation solver. Experimental results demonstrate that the new algorithm is accurate and efficient. For uniform dielectric problems, the new algorithm is one order of magnitude faster than FastImp, while its results differ from those of FastImp within 2%. For multiple dielectrics problems, its relative error with respect to HFSS is below 3%.
Fast Analysis of a Large-Scale Inductive Interconnect by Block-Structure-Preserved Macromodeling To efficiently analyze large-scale interconnect-dominant circuits with inductive couplings (mutual inductances), this paper introduces a new state matrix, called VNA, to stamp inverse-inductance elements by replacing inductive-branch current with flux. The state matrix under VNA is diagonal-dominant, sparse, and passive. To further explore the sparsity and hierarchy at the block level, a new matrix-stretching method is introduced to reorder coupled fluxes into a decoupled state matrix with a bordered block diagonal (BBD) structure. A corresponding block-structure-preserved model-order reduction, called BVOR, is developed to preserve the sparsity and hierarchy of the BBD matrix at the block level. This enables us to efficiently build and simulate the macromodel within a SPICE-like circuit simulator. Experiments show that our method achieves up to 7× faster model building time, up to 33× faster simulation time, and as much as 67× smaller waveform error compared to SAPOR [a second-order reduction based on nodal analysis (NA)] and PACT (a first-order 2×2 structured reduction based on modified NA).
A parallel hashed oct-tree N-body algorithm The authors report on an efficient adaptive N-body method which we have recently designed and implemented. The algorithm computes the forces on an arbitrary distribution of bodies in a time which scales as N log N with the particle number. The accuracy of the force calculations is analytically bounded, and can be adjusted via a user defined parameter between a few percent relative accuracy, down to machine arithmetic accuracy. Instead of using pointers to indicate the topology of the tree, the authors identify each possible cell with a key. The mapping of keys into memory locations is achieved via a hash table. This allows the program to access data in an efficient manner across multiple processors. Performance of the parallel program is measured on the 512 processor Intel Touchstone Delta system. Comments on a number of wide-ranging applications which can benefit from application of this type of algorithm are included.
Recent computational developments in Krylov subspace methods for linear systems Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters. Copyright (c) 2006 John Wiley & Sons, Ltd.
A stochastic integral equation method for modeling the rough surface effect on interconnect capacitance In this work we describe a stochastic integral equation method for computing the mean value and the variance of capacitance of interconnects with random surface roughness. An ensemble average Green's function is combined with a matrix Neumann expansion to compute the nominal capacitance and its variance. This method avoids the time-consuming Monte Carlo simulations and the discretization of rough surfaces. Numerical experiments show that the results of the new method agree very well with Monte Carlo simulation results.
Asymptotic probability extraction for non-normal distributions of circuit performance While process variations are becoming more significant with each new IC technology generation, they are often modeled via linear regression models so that the resulting performance variations can be captured via normal distributions. Nonlinear (e.g. quadratic) response surface models can be utilized to capture larger scale process variations; however, such models result in non-normal distributions for circuit performance which are difficult to capture since the distribution model is unknown. In this paper we propose an asymptotic probability extraction method, APEX, for estimating the unknown random distribution when using nonlinear response surface modeling. APEX first uses a binomial moment evaluation to efficiently compute the high order moments of the unknown distribution, and then applies moment matching to approximate the characteristic function of the random circuit performance by an efficient rational function. A simple statistical timing example and an analog circuit example demonstrate that APEX can provide better accuracy than Monte Carlo simulation with 10 samples and achieve orders of magnitude more efficiency. We also show the error incurred by the popular normal modeling assumption using standard IC technologies.
Guaranteed passive balancing transformations for model order reduction The major concerns in state-of-the-art model reduction algorithms are: achieving accurate models of sufficiently small size, numerically stable and efficient generation of the models, and preservation of system properties such as passivity. Algorithms, such as PRIMA, generate guaranteed-passive models for systems with special internal structure, using numerically stable and efficient Krylov-subspace iterations. Truncated balanced realization (TBR) algorithms, as used to date in the design automation community, can achieve smaller models with better error control, but do not necessarily preserve passivity. In this paper, we show how to construct TBR-like methods that generate guaranteed passive reduced models and in addition are applicable to state-space systems with arbitrary internal structure.
Stochastic Sparse-grid Collocation Algorithm (SSCA) for Periodic Steady-State Analysis of Nonlinear System with Process Variations In this paper, a stochastic collocation algorithm combined with the sparse grid technique (SSCA) is proposed to deal with the periodic steady-state analysis for nonlinear systems with process variations. Compared to the existing approaches, SSCA has several considerable merits. Firstly, compared with the moment-matching parameterized model order reduction (PMOR), which equally treats the circuit response on process variables and the frequency parameter by Taylor approximation, SSCA employs homogeneous chaos to capture the impact of process variations with an exponential convergence rate and adopts Fourier series or wavelet bases to model the steady-state behavior in the time domain. Secondly, contrary to the stochastic Galerkin algorithm (SGA), which is efficient for stochastic linear system analysis, the complexity of SSCA is much smaller than that of SGA for the nonlinear case. Thirdly, different from the efficient collocation method, a heuristic approach which may result in the "rank deficient problem" and the "Runge phenomenon", the sparse grid technique is developed to select the collocation points in SSCA in order to reduce the complexity while guaranteeing the approximation accuracy. Furthermore, though SSCA is proposed for stochastic nonlinear steady-state analysis, it can be applied to any other kind of nonlinear system simulation with process variations, such as transient analysis.
Macromodel Generation for BioMEMS Components Using a Stabilized Balanced Truncation Plus Trajectory Piecewise-Linear Approach In this paper, we present a technique for automatically extracting nonlinear macromodels of biomedical microelectromechanical systems devices from physical simulation. The technique is a modification of the recently developed trajectory piecewise-linear approach, but uses ideas from balanced truncation to produce much lower order and more accurate models. The key result is a perturbation analysis of an instability problem with the reduction algorithm, and a simple modification that makes the algorithm more robust. Results are presented from examples to demonstrate dramatic improvements in reduced model accuracy and show the limitations of the method.
Statistical crosstalk aggressor alignment aware interconnect delay calculation Crosstalk aggressor alignment induces significant interconnect delay variation and needs to be taken into account in a statistical timer. In this paper, we approximate crosstalk aggressor alignment induced interconnect delay variation in a piecewise-quadratic function, and present closed form formulas for statistical interconnect delay calculation with crosstalk aggressor alignment variation. Our proposed method can be easily integrated in a statistical timer, where traditional corner-based timing windows are replaced by probabilistic distributions of crosstalk aggressor alignment, which can be refined by similar delay calculation iterations. Runtime is O(N) for initial delay calculation of N sampling crosstalk aggressor alignments, while pdf propagation and delay updating requires constant time. We compare with SPICE Monte Carlo simulations on Berkeley predictive model 70nm global interconnect structures and 130nm industry design instances. Our experimental results show that crosstalk aggressor alignment oblivious statistical delay calculation could lead to up to 114.65% (71.26%) mismatch of interconnect delay means (standard deviations), while our method gives output signal arrival time means (standard deviations) within 2.09% (3.38%) of SPICE Monte Carlo simulation results.
The effects of multiview depth video compression on multiview rendering This article investigates the interaction between different techniques for depth compression and view synthesis rendering with multiview video plus scene depth data. Two different approaches for depth coding are compared, namely H.264/MVC, using temporal and inter-view reference images for efficient prediction, and the novel platelet-based coding algorithm, characterized by being adapted to the special characteristics of depth-images. Since depth-images are a 2D representation of the 3D scene geometry, depth-image errors lead to geometry distortions. Therefore, the influence of geometry distortions resulting from coding artifacts is evaluated for both coding approaches in two different ways. First, the variation of 3D surface meshes is analyzed using the Hausdorff distance and second, the distortion is evaluated for 2D view synthesis rendering, where color and depth information are used together to render virtual intermediate camera views of the scene. The results show that, although its rate-distortion (R-D) performance is worse, platelet-based depth coding outperforms H.264, due to improved sharp edge preservation. Therefore, depth coding needs to be evaluated with respect to geometry distortions.
An approximation to the computational theory of perceptions using ontologies New technologies allow users to access huge amounts of data about phenomena in their environment. Nevertheless, linguistic description of these available data requires that human experts interpret them, highlighting the relevant details and hiding the irrelevant ones. Experts draw on their personal experience with the described phenomenon and on the flexibility of natural language to create their reports. In the research line of Computing with Words and Perceptions, this paper addresses the challenge of using ontologies to create a computational representation of the expert's knowledge, including his/her experience with both the context of the analyzed phenomenon and his/her personal use of language in that specific context. The proposed representation takes as its basis the Granular Linguistic Model of a Phenomenon previously proposed by two of the authors. Our approach is explained and demonstrated using a series of practical prototypes with increasing degrees of complexity.
Analyzing parliamentary elections based on voting advice application data The main goal of this paper is to model the values of Finnish citizens and of the members of parliament. To achieve this goal, two databases are combined: voting advice application data and the results of the parliamentary elections in 2011. First, the data are converted to a high-dimensional space. Then, they are projected onto two principal components. The projection allows us to visualize the main differences between the parties. The value grids are produced with a kernel density estimation method without explicitly using the questions of the voting advice application. However, we find meaningful interpretations for the axes in the visualizations with the analyzed data. Subsequently, all candidate value grids are weighted by the results of the parliamentary elections. The result can be interpreted as a distribution grid for Finnish voters' values.
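A sketch of the projection-and-density pipeline the abstract describes, on a hypothetical answer matrix (the actual VAA data and the weighting by election results are not reproduced); PCA is done with a plain SVD and the value grid with a Gaussian kernel density estimate.

```python
# Project high-dimensional answer data to 2 principal components and build a
# kernel-density "value grid" (generic sketch, not the paper's processing).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(200, 30)).astype(float)    # hypothetical VAA answers (1-5 scale)

Xc = X - X.mean(axis=0)                                  # PCA via SVD of centered data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
coords = Xc @ Vt[:2].T                                   # projection onto 2 principal components

kde = gaussian_kde(coords.T)                             # density over the projected candidates
xs = np.linspace(coords[:, 0].min(), coords[:, 0].max(), 50)
ys = np.linspace(coords[:, 1].min(), coords[:, 1].max(), 50)
xx, yy = np.meshgrid(xs, ys)
grid = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)
print(grid.shape)   # a 50x50 value grid; weighting candidates by votes would give the voters' grid
```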
1.024584
0.019189
0.019189
0.013249
0.007889
0.003452
0.001281
0.000222
0.000052
0.000009
0
0
0
0
Compressed Remote Sensing of Sparse Objects The linear inverse source and scattering problems are studied from the perspective of compressed sensing. By introducing the sensor as well as target ensembles, the maximum number of recoverable targets is proved to be at least proportional to the number of measurement data modulo a log-square factor with overwhelming probability. Important contributions include the discoveries of the threshold aperture, consistent with the classical Rayleigh criterion, and the incoherence effect induced by random antenna locations. The predictions of theorems are confirmed by numerical simulations.
Sensitivity to basis mismatch in compressed sensing Compressed sensing theory suggests that successful inversion of an image of the physical world from its modal parameters can be achieved at measurement dimensions far lower than the image dimension, provided that the image is sparse in an a priori known basis. The assumed basis for sparsity typically corresponds to a gridding of the parameter space, e.g., a DFT grid in spectrum analysis. However, in reality no physical field is sparse in the DFT basis or in an a priori known basis. No matter how finely we grid the parameter space, the sources may not lie in the center of the grid cells and there is always mismatch between the assumed and the actual bases for sparsity. In this paper, we study the sensitivity of compressed sensing (basis pursuit to be exact) to mismatch between the assumed and the actual sparsity bases. Our mathematical analysis and numerical examples show that the performance of basis pursuit degrades considerably in the presence of basis mismatch.
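The mismatch effect is easy to reproduce numerically. The short sketch below (an illustration of the phenomenon, not the paper's analysis) compares an on-grid tone, which is 1-sparse in the DFT basis, with a tone whose frequency falls between two DFT bins and therefore leaks across many coefficients.

```python
# Demonstrate DFT basis mismatch: an off-grid tone is not sparse in the DFT basis.
import numpy as np

n = 128
t = np.arange(n)
on_grid  = np.exp(2j * np.pi * 10.0 * t / n)    # frequency exactly on DFT bin 10
off_grid = np.exp(2j * np.pi * 10.5 * t / n)    # frequency halfway between bins 10 and 11

for name, x in (("on-grid", on_grid), ("off-grid", off_grid)):
    c = np.abs(np.fft.fft(x))
    significant = int(np.sum(c > 0.01 * c.max()))
    print(f"{name} tone: {significant} DFT coefficients exceed 1% of the peak")
```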
Multiarray signal processing: Tensor decomposition meets compressed sensing We discuss how recently discovered techniques and tools from compressed sensing can be used in tensor decompositions, with a view towards modeling signals from multiple arrays of multiple sensors. We show that with appropriate bounds on a measure of separation between radiating sources called coherence, one could always guarantee the existence and uniqueness of a best rank-r approximation of the tensor representing the signal. We also deduce a computationally feasible variant of Kruskal's uniqueness condition, where the coherence appears as a proxy for k-rank. Problems of sparsest recovery with an infinite continuous dictionary, lowest-rank tensor representation, and blind source separation are treated in a uniform fashion. The decomposition of the measurement tensor leads to simultaneous localization and extraction of radiating sources, in an entirely deterministic manner.
Construction of a Large Class of Deterministic Sensing Matrices That Satisfy a Statistical Isometry Property Compressed Sensing aims to capture attributes of k-sparse signals using very few measurements. In the standard compressed sensing paradigm, the N × C measurement matrix Φ is required to act as a near isometry on the set of all k-sparse signals (restricted isometry property or RIP). Although it is known that certain probabilistic processes generate N × C matrices that satisfy RIP with high probability, there is no practical algorithm for verifying whether a given sensing matrix Φ has this property, crucial for the feasibility of the standard recovery algorithms. In contrast, this paper provides simple criteria that guarantee that a deterministic sensing matrix satisfying these criteria acts as a near isometry on an overwhelming majority of k-sparse signals; in particular, most such signals have a unique representation in the measurement domain. Probability still plays a critical role, but it enters the signal model rather than the construction of the sensing matrix. An essential element in our construction is that we require the columns of the sensing matrix to form a group under pointwise multiplication. The construction allows recovery methods for which the expected performance is sub-linear in C, and only quadratic in N, as compared to the super-linear complexity in C of the Basis Pursuit or Matching Pursuit algorithms; the focus on expected performance is more typical of mainstream signal processing than the worst case analysis that prevails in standard compressed sensing. Our framework encompasses many families of deterministic sensing matrices, including those formed from discrete chirps, Delsarte-Goethals codes, and extended BCH codes.
Signal Recovery From Incomplete and Inaccurate Measurements Via Regularized Orthogonal Matching Pursuit We demonstrate a simple greedy algorithm that can reliably recover a vector $v \in \mathbb{R}^d$ from incomplete and inaccurate measurements $x = \Phi v + e$. Here, $\Phi$ is an $N \times d$ measurement matrix with $N \ll d$, and $e$ is an error vector. Our algorithm, Regularized Orthogonal Matching Pursuit (ROMP), seeks to provide the benefits of the two major approaches to sparse recovery. It combines the speed and ease of implementat...
From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images A full-rank matrix ${\bf A}\in \mathbb{R}^{n\times m}$ with $n<m$ generates an underdetermined system of linear equations having infinitely many solutions; recent theory characterizes when the sparsest of these solutions is unique and can be found by efficient algorithms. Such theoretical results inspire a bold perspective on some important practical problems in signal and image processing. Several well-known signal and image processing problems can be cast as demanding solutions of underdetermined systems of equations. Such problems have previously seemed, to many, intractable, but there is considerable evidence that these problems often have sparse solutions. Hence, advances in finding sparse solutions to underdetermined systems have energized research on such signal and image processing problems, to striking effect. In this paper we review the theoretical results on sparse solutions of linear systems, empirical results on sparse modeling of signals and images, and recent applications in inverse problems and compression in image processing. This work lies at the intersection of signal processing and applied mathematics, and arose initially from the wavelets and harmonic analysis research communities. The aim of this paper is to introduce a few key notions and applications connected to sparsity, targeting newcomers interested in either the mathematical aspects of this area or its applications.
Just relax: convex programming methods for identifying sparse signals in noise This paper studies a difficult and fundamental problem that arises throughout electrical engineering, applied mathematics, and statistics. Suppose that one forms a short linear combination of elementary signals drawn from a large, fixed collection. Given an observation of the linear combination that has been contaminated with additive noise, the goal is to identify which elementary signals participated and to approximate their coefficients. Although many algorithms have been proposed, there is little theory which guarantees that these algorithms can accurately and efficiently solve the problem. This paper studies a method called convex relaxation, which attempts to recover the ideal sparse signal by solving a convex program. This approach is powerful because the optimization can be completed in polynomial time with standard scientific software. The paper provides general conditions which ensure that convex relaxation succeeds. As evidence of the broad impact of these results, the paper describes how convex relaxation can be used for several concrete signal recovery problems. It also describes applications to channel coding, linear regression, and numerical analysis
Error correction via linear programming Suppose we wish to transmit a vector $f \in \mathbb{R}^n$ reliably. A frequently discussed approach consists in encoding $f$ with an $m$ by $n$ coding matrix $A$. Assume now that a fraction of the entries of $Af$ are corrupted in a completely arbitrary fashion. We do not know which entries are affected nor do we know how they are affected. Is it possible to recover $f$ exactly from the corrupted $m$-dimensional vector $y$? This paper proves that under suitable conditions on the coding matrix $A$, the input $f$ is the unique solution to the $\ell_1$-minimization problem $\min_g \|y - Ag\|_{\ell_1}$ (where $\|x\|_{\ell_1} := \sum_i |x_i|$).
Compressed sensing of analog signals in shift-invariant spaces A traditional assumption underlying most data converters is that the signal should be sampled at a rate exceeding twice the highest frequency. This statement is based on a worst-case scenario in which the signal occupies the entire available bandwidth. In practice, many signals are sparse so that only part of the bandwidth is used. In this paper, we develop methods for low-rate sampling of continuous-time sparse signals in shift-invariant (SI) spaces, generated by m kernels with period T. We model sparsity by treating the case in which only k out of the m generators are active; however, we do not know which k are chosen. We show how to sample such signals at a rate much lower than m/T, which is the minimal sampling rate without exploiting sparsity. Our approach combines ideas from analog sampling in a subspace with a recently developed block diagram that converts an infinite set of sparse equations to a finite counterpart. Using these two components we formulate our problem within the framework of finite compressed sensing (CS) and then rely on algorithms developed in that context. The distinguishing feature of our results is that in contrast to standard CS, which treats finite-length vectors, we consider sampling of analog signals for which no underlying finite-dimensional model exists. The proposed framework allows us to extend much of the recent literature on CS to the analog domain.
Convergence Rates of Best N-term Galerkin Approximations for a Class of Elliptic sPDEs Deterministic Galerkin approximations of a class of second order elliptic PDEs with random coefficients on a bounded domain $D\subset\mathbb{R}^d$ are introduced and their convergence rates are estimated. The approximations are based on expansions of the random diffusion coefficients in $L^2(D)$-orthogonal bases, and on viewing the coefficients of these expansions as random parameters $y=y(\omega)=(y_i(\omega))$. This yields an equivalent parametric deterministic PDE whose solution $u(x,y)$ is a function of both the space variable $x\in D$ and the, in general, countably many parameters $y$. We establish new regularity theorems describing the smoothness properties of the solution $u$ as a map from $y\in U=(-1,1)^{\infty}$ to $V=H^{1}_{0}(D)$. These results lead to analytic estimates on the $V$ norms of the coefficients (which are functions of $x$) in a so-called "generalized polynomial chaos" (gpc) expansion of $u$. Convergence estimates of approximations of $u$ by best $N$-term truncated $V$-valued polynomials in the variable $y\in U$ are established. These estimates are of the form $N^{-r}$, where the rate of convergence $r$ depends only on the decay of the random input expansion. It is shown that $r$ exceeds the benchmark rate $1/2$ afforded by Monte Carlo simulations with $N$ "samples" (i.e., deterministic solves) under mild smoothness conditions on the random diffusion coefficients. A class of fully discrete approximations is obtained by Galerkin approximation from a hierarchic family $\{V_{l}\}_{l=0}^{\infty}\subset V$ of finite element spaces in $D$ of the coefficients in the $N$-term truncated gpc expansions of $u(x,y)$. In contrast to previous works, the level $l$ of spatial resolution is adapted to the gpc coefficient. New regularity theorems describing the smoothness properties of the solution $u$ as a map from $y\in U=(-1,1)^{\infty}$ to a smoothness space $W\subset V$ are established, leading to analytic estimates on the $W$ norms of the gpc coefficients and on their space discretization error. The space $W$ coincides with $H^{2}(D)\cap H^{1}_{0}(D)$ in the case where $D$ is a smooth or convex domain. Our analysis shows that in realistic settings a convergence rate $N_{\mathrm{dof}}^{-s}$ in terms of the total number of degrees of freedom $N_{\mathrm{dof}}$ can be obtained. Here the rate $s$ is determined by both the best $N$-term approximation rate $r$ and the approximation order of the space discretization in $D$.
Toward extended fuzzy logic—A first step Fuzzy logic adds to bivalent logic an important capability—a capability to reason precisely with imperfect information. Imperfect information is information which in one or more respects is imprecise, uncertain, incomplete, unreliable, vague or partially true. In fuzzy logic, results of reasoning are expected to be provably valid, or p-valid for short. Extended fuzzy logic adds an equally important capability—a capability to reason imprecisely with imperfect information. This capability comes into play when precise reasoning is infeasible, excessively costly or unneeded. In extended fuzzy logic, p-validity of results is desirable but not required. What is admissible is a mode of reasoning which is fuzzily valid, or f-valid for short. Actually, much of everyday human reasoning is f-valid reasoning.
Group decision making process for supplier selection with VIKOR under fuzzy environment During recent years, how to determine suitable suppliers in the supply chain has become a key strategic consideration. However, the nature of supplier selection is a complex multi-criteria problem including both quantitative and qualitative factors which may be in conflict and may also be uncertain. The VIKOR method was developed to solve multiple criteria decision making (MCDM) problems with conflicting and non-commensurable (different units) criteria, assuming that compromising is acceptable for conflict resolution, the decision maker wants a solution that is the closest to the ideal, and the alternatives are evaluated according to all established criteria. In this paper, linguistic values are used to assess the ratings and weights for these factors. These linguistic ratings can be expressed in trapezoidal or triangular fuzzy numbers. Then, a hierarchy MCDM model based on fuzzy sets theory and VIKOR method is proposed to deal with the supplier selection problems in the supply chain system. A numerical example is proposed to illustrate an application of the proposed model.
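For reference, a minimal crisp VIKOR core on a small hypothetical decision matrix; the paper's fuzzy/linguistic ratings and trapezoidal numbers are omitted, so this only shows how the group utility S, individual regret R, and compromise index Q are formed.

```python
# Crisp VIKOR core (S, R, Q ranking); the fuzzy/linguistic layer is omitted.
import numpy as np

F = np.array([[7.0, 5.0, 8.0],     # hypothetical supplier scores (rows) on 3 benefit criteria
              [6.0, 9.0, 6.0],
              [8.0, 7.0, 5.0]])
w = np.array([0.4, 0.35, 0.25])    # criteria weights
v = 0.5                            # weight of the "majority of criteria" strategy

f_best, f_worst = F.max(axis=0), F.min(axis=0)
D = w * (f_best - F) / (f_best - f_worst)    # normalized weighted regret per criterion
S = D.sum(axis=1)                            # group utility
R = D.max(axis=1)                            # individual regret
Q = (v * (S - S.min()) / (S.max() - S.min())
     + (1 - v) * (R - R.min()) / (R.max() - R.min()))
print("ranking (best first):", np.argsort(Q))
```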
Methods for finding frequent items in data streams The frequent items problem is to process a stream of items and find all items occurring more than a given fraction of the time. It is one of the most heavily studied problems in data stream mining, dating back to the 1980s. Many applications rely directly or indirectly on finding the frequent items, and implementations are in use in large scale industrial systems. However, there has not been much comparison of the different methods under uniform experimental conditions. It is common to find papers touching on this topic in which important related work is mischaracterized, overlooked, or reinvented. In this paper, we aim to present the most important algorithms for this problem in a common framework. We have created baseline implementations of the algorithms and used these to perform a thorough experimental study of their properties. We give empirical evidence that there is considerable variation in the performance of frequent items algorithms. The best methods can be implemented to find frequent items with high accuracy using only tens of kilobytes of memory, at rates of millions of items per second on cheap modern hardware.
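One of the classic counter-based algorithms covered by such comparisons is the Misra-Gries summary; the compact sketch below keeps k-1 counters, which guarantees that every item occurring more than a 1/k fraction of the time survives (a second pass is needed to confirm exact counts).

```python
# Misra-Gries frequent-items summary (illustrative sketch).
def misra_gries(stream, k):
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            for key in list(counters):     # decrement-all step
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

stream = list("ababcabdabeabfab")
print(misra_gries(stream, k=3))   # 'a' and 'b' dominate the stream and survive
```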
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.080263
0.055263
0.05
0.009211
0.006542
0.000883
0.000081
0.000001
0
0
0
0
0
0
Multidimensional Adaptive Relevance Vector Machines for Uncertainty Quantification. We develop a Bayesian uncertainty quantification framework using a local binary tree surrogate model that is able to make use of arbitrary Bayesian regression methods. The tree is adaptively constructed using information about the sensitivity of the response and is biased by the underlying input probability distribution. The local Bayesian regressions are based on a reformulation of the relevance vector machine model that accounts for the multiple output dimensions. A fast algorithm for training the local models is provided. The methodology is demonstrated with examples in the solution of stochastic differential equations.
Exploiting active subspaces to quantify uncertainty in the numerical simulation of the HyShot II scramjet. We present a computational analysis of the reactive flow in a hypersonic scramjet engine with focus on effects of uncertainties in the operating conditions. We employ a novel methodology based on active subspaces to characterize the effects of the input uncertainty on the scramjet performance. The active subspace identifies one-dimensional structure in the map from simulation inputs to quantity of interest that allows us to reparameterize the operating conditions; instead of seven physical parameters, we can use a single derived active variable. This dimension reduction enables otherwise infeasible uncertainty quantification, considering the simulation cost of roughly 9500 CPU-hours per run. For two values of the fuel injection rate, we use a total of 68 simulations to (i) identify the parameters that contribute the most to the variation in the output quantity of interest, (ii) estimate upper and lower bounds on the quantity of interest, (iii) classify sets of operating conditions as safe or unsafe corresponding to a threshold on the output quantity of interest, and (iv) estimate a cumulative distribution function for the quantity of interest.
Bayesian Deep Convolutional Encoder-Decoder Networks for Surrogate Modeling and Uncertainty Quantification. We are interested in the development of surrogate models for uncertainty quantification and propagation in problems governed by stochastic PDEs using a deep convolutional encoder–decoder network in a similar fashion to approaches considered in deep learning for image-to-image regression tasks. Since normal neural networks are data-intensive and cannot provide predictive uncertainty, we propose a Bayesian approach to convolutional neural nets. A recently introduced variational gradient descent algorithm based on Stein's method is scaled to deep convolutional networks to perform approximate Bayesian inference on millions of uncertain network parameters. This approach achieves state of the art performance in terms of predictive accuracy and uncertainty quantification in comparison to other approaches in Bayesian neural networks as well as techniques that include Gaussian processes and ensemble methods even when the training data size is relatively small. To evaluate the performance of this approach, we consider standard uncertainty quantification tasks for flow in heterogeneous media using limited training data consisting of permeability realizations and the corresponding velocity and pressure fields. The performance of the surrogate model developed is very good even though there is no underlying structure shared between the input (permeability) and output (flow/pressure) fields as is often the case in the image-to-image regression models used in computer vision problems. Studies are performed with an underlying stochastic input dimensionality up to 4225 where most other uncertainty quantification methods fail. Uncertainty propagation tasks are considered and the predictive output Bayesian statistics are compared to those obtained with Monte Carlo estimates.
Multi-output separable Gaussian process: Towards an efficient, fully Bayesian paradigm for uncertainty quantification Computer codes simulating physical systems usually have responses that consist of a set of distinct outputs (e.g., velocity and pressure) that evolve also in space and time and depend on many unknown input parameters (e.g., physical constants, initial/boundary conditions, etc.). Furthermore, essential engineering procedures such as uncertainty quantification, inverse problems or design are notoriously difficult to carry out mostly due to the limited simulations available. The aim of this work is to introduce a fully Bayesian approach for treating these problems which accounts for the uncertainty induced by the finite number of observations. Our model is built on a multi-dimensional Gaussian process that explicitly treats correlations between distinct output variables as well as space and/or time. The proper use of a separable covariance function enables us to describe the huge covariance matrix as a Kronecker product of smaller matrices leading to efficient algorithms for carrying out inference and predictions. The novelty of this work, is the recognition that the Gaussian process model defines a posterior probability measure on the function space of possible surrogates for the computer code and the derivation of an algorithmic procedure that allows us to sample it efficiently. We demonstrate how the scheme can be used in uncertainty quantification tasks in order to obtain error bars for the statistics of interest that account for the finite number of observations.
Uncertainty quantification via random domain decomposition and probabilistic collocation on sparse grids Quantitative predictions of the behavior of many deterministic systems are uncertain due to ubiquitous heterogeneity and insufficient characterization by data. We present a computational approach to quantify predictive uncertainty in complex phenomena, which is modeled by (partial) differential equations with uncertain parameters exhibiting multi-scale variability. The approach is motivated by flow in random composites whose internal architecture (spatial arrangement of constitutive materials) and spatial variability of properties of each material are both uncertain. The proposed two-scale framework combines a random domain decomposition (RDD) and a probabilistic collocation method (PCM) on sparse grids to quantify these two sources of uncertainty, respectively. The use of sparse grid points significantly reduces the overall computational cost, especially for random processes with small correlation lengths. A series of one-, two-, and three-dimensional computational examples demonstrate that the combined RDD-PCM approach yields efficient, robust and non-intrusive approximations for the statistics of diffusion in random composites.
An Anisotropic Sparse Grid Stochastic Collocation Method for Partial Differential Equations with Random Input Data This work proposes and analyzes an anisotropic sparse grid stochastic collocation method for solving partial differential equations with random coefficients and forcing terms (input data of the model). The method consists of a Galerkin approximation in the space variables and a collocation, in probability space, on sparse tensor product grids utilizing either Clenshaw-Curtis or Gaussian knots. Even in the presence of nonlinearities, the collocation approach leads to the solution of uncoupled deterministic problems, just as in the Monte Carlo method. This work includes a priori and a posteriori procedures to adapt the anisotropy of the sparse grids to each given problem. These procedures seem to be very effective for the problems under study. The proposed method combines the advantages of isotropic sparse collocation with those of anisotropic full tensor product collocation: the first approach is effective for problems depending on random variables which weigh approximately equally in the solution, while the benefits of the latter approach become apparent when solving highly anisotropic problems depending on a relatively small number of random variables, as in the case where input random variables are Karhunen-Loève truncations of “smooth” random fields. This work also provides a rigorous convergence analysis of the fully discrete problem and demonstrates (sub)exponential convergence in the asymptotic regime and algebraic convergence in the preasymptotic regime, with respect to the total number of collocation points. It also shows that the anisotropic approximation breaks the curse of dimensionality for a wide set of problems. Numerical examples illustrate the theoretical results and are used to compare this approach with several others, including the standard Monte Carlo. In particular, for moderately large-dimensional problems, the sparse grid approach with a properly chosen anisotropy seems to be very efficient and superior to all examined methods.
Physical Systems with Random Uncertainties: Chaos Representations with Arbitrary Probability Measure The basic random variables on which random uncertainties can in a given model depend can be viewed as defining a measure space with respect to which the solution to the mathematical problem can be defined. This measure space is defined on a product measure associated with the collection of basic random variables. This paper clarifies the mathematical structure of this space and its relationship to the underlying spaces associated with each of the random variables. Cases of both dependent and independent basic random variables are addressed. Bases on the product space are developed that can be viewed as generalizations of the standard polynomial chaos approximation. Moreover, two numerical constructions of approximations in this space are presented along with the associated convergence analysis.
Learning and classification of monotonic ordinal concepts
Proactive secret sharing or: How to cope with perpetual leakage Secret sharing schemes protect secrets by distributing them over different locations (share holders). In particular, in k out of n threshold schemes, security is assured if throughout the entire life-time of the secret the adversary is restricted to compromise less than k of the n locations. For long-lived and sensitive secrets this protection may be insufficient. We propose an efficient proactive secret sharing scheme, where shares are periodically renewed (without changing the secret) in such a way that information gained by the adversary in one time period is useless for attacking the secret after the shares are renewed. Hence, an adversary willing to learn the secret needs to break into all k locations during the same time period (e.g., one day, a week, etc.). Furthermore, in order to guarantee the availability and integrity of the secret, we provide mechanisms to detect maliciously (or accidentally) corrupted shares, as well as mechanisms to secretly recover the correct shares when modification is detected.
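A toy sketch of the proactive renewal idea over a prime field: each period, the share holders add shares of a fresh random polynomial with zero constant term, so the secret is unchanged while shares from different periods become mutually useless. The field size, the threshold values and the omission of the paper's verification/recovery machinery are simplifications for illustration.

```python
# Toy proactive renewal of Shamir shares (no verifiability machinery).
import random

P = (1 << 127) - 1                 # a Mersenne prime used as the field modulus
K, N = 3, 5                        # threshold k, number of share holders

def rand_poly(constant, degree):
    return [constant] + [random.randrange(P) for _ in range(degree)]

def evaluate(poly, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(poly)) % P

def deal(secret):
    poly = rand_poly(secret, K - 1)
    return {x: evaluate(poly, x) for x in range(1, N + 1)}

def renew(shares):
    """Add shares of a random zero-constant polynomial: the secret is
    unchanged, but old and new shares cannot be combined."""
    zero_poly = rand_poly(0, K - 1)
    return {x: (s + evaluate(zero_poly, x)) % P for x, s in shares.items()}

def reconstruct(shares):           # Lagrange interpolation at x = 0
    xs = list(shares)
    total = 0
    for i in xs:
        num = den = 1
        for j in xs:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        total = (total + shares[i] * num * pow(den, P - 2, P)) % P
    return total

secret = 123456789
shares = renew(deal(secret))
subset = {x: shares[x] for x in (1, 3, 5)}   # any k shares from the same period suffice
assert reconstruct(subset) == secret
```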
Incremental criticality and yield gradients Criticality and yield gradients are two crucial diagnostic metrics obtained from Statistical Static Timing Analysis (SSTA). They provide valuable information to guide timing optimization and timing-driven physical synthesis. Existing work in the literature, however, computes both metrics in a non-incremental manner, i.e., after one or more changes are made in a previously-timed circuit, both metrics need to be recomputed from scratch, which is obviously undesirable for optimizing large circuits. The major contribution of this paper is to propose two novel techniques to compute both criticality and yield gradients efficiently and incrementally. In addition, while node and edge criticalities are addressed in the literature, this paper for the first time describes a technique to compute path criticalities. To further improve algorithmic efficiency, this paper also proposes a novel technique to update "chip slack" incrementally. Numerical results show our methods to be over two orders of magnitude faster than previous work.
Compressive speech enhancement This paper presents an alternative approach to speech enhancement by using compressed sensing (CS). CS is a new sampling theory, which states that sparse signals can be reconstructed from far fewer measurements than Nyquist sampling requires. As such, CS can be exploited to reconstruct only the sparse components (e.g., speech) from the mixture of sparse and non-sparse components (e.g., noise). This is possible because, in a time-frequency representation, the speech signal is sparse whilst most noise is non-sparse. A derivation shows that on average the signal-to-noise ratio (SNR) in the compressed domain is greater than or equal to that in the uncompressed domain. Experimental results concur with the derivation, and the proposed CS scheme achieves better or similar perceptual evaluation of speech quality (PESQ) scores and segmental SNR compared to other conventional methods over a wide range of input SNRs.
Hierarchical statistical characterization of mixed-signal circuits using behavioral modeling A methodology for hierarchical statistical circuit characterization which does not rely upon circuit-level Monte Carlo simulation is presented. The methodology uses principal component analysis, response surface methodology, and statistics to directly calculate the statistical distributions of higher-level parameters from the distributions of lower-level parameters. We have used the methodology to characterize a folded cascode operational amplifier and a phase-locked loop. This methodology permits the statistical characterization of large analog and mixed-signal systems, many of which are extremely time-consuming or impossible to characterize using existing methods.
Dominance-based fuzzy rough set analysis of uncertain and possibilistic data tables In this paper, we propose a dominance-based fuzzy rough set approach for the decision analysis of a preference-ordered uncertain or possibilistic data table, which is comprised of a finite set of objects described by a finite set of criteria. The domains of the criteria may have ordinal properties that express preference scales. In the proposed approach, we first compute the degree of dominance between any two objects based on their imprecise evaluations with respect to each criterion. This results in a valued dominance relation on the universe. Then, we define the degree of adherence to the dominance principle by every pair of objects and the degree of consistency of each object. The consistency degrees of all objects are aggregated to derive the quality of the classification, which we use to define the reducts of a data table. In addition, the upward and downward unions of decision classes are fuzzy subsets of the universe. Thus, the lower and upper approximations of the decision classes based on the valued dominance relation are fuzzy rough sets. By using the lower approximations of the decision classes, we can derive two types of decision rules that can be applied to new decision cases.
Performance and Quality Evaluation of a Personalized Route Planning System Advanced personalization of database applications is a big challenge, in particular for distributed mobile environments. We present several new results from a prototype of a route planning system. We demonstrate how to combine qualitative and quantitative preferences gained from situational aspects and from personal user preferences. For performance studies we analyze the runtime efficiency of the SR-Combine algorithm used to evaluate top-k queries. By determining the cost-ratio of random to sorted accesses SR-Combine can automatically tune its performance within the given system architecture. Top-k queries are generated by mapping linguistic variables to numerical weightings. Moreover, we analyze the quality of the query results by several test series, systematically varying the mappings of the linguistic variables. We report interesting insights into this rather under-researched important topic. More investigations, incorporating also cognitive issues, need to be conducted in the future.
1.24
0.24
0.12
0.0325
0.006667
0.000393
0.000002
0
0
0
0
0
0
0
Restricted Isometries for Partial Random Circulant Matrices In the theory of compressed sensing, restricted isometry analysis has become a standard tool for studying how efficiently a measurement matrix acquires information about sparse and compressible signals. Many recovery algorithms are known to succeed when the restricted isometry constants of the sampling matrix are small. Many potential applications of compressed sensing involve a data-acquisition process that proceeds by convolution with a random pulse followed by (nonrandom) subsampling. At present, the theoretical analysis of this measurement technique is lacking. This paper demonstrates that the sth-order restricted isometry constant is small when the number m of samples satisfies $m \gtrsim (s \log n)^{3/2}$, where n is the length of the pulse. This bound improves on previous estimates, which exhibit quadratic scaling.
Sparsity lower bounds for dimensionality reducing maps We give near-tight lower bounds for the sparsity required in several dimensionality reducing linear maps. First, consider the Johnson-Lindenstrauss (JL) lemma, which states that for any set of n vectors in $\mathbb{R}^d$ there is an $A \in \mathbb{R}^{m \times d}$ with $m = O(\varepsilon^{-2}\log n)$ such that mapping by A preserves the pairwise Euclidean distances up to a $1 \pm \varepsilon$ factor. We show there exists a set of n vectors such that any such A with at most s non-zero entries per column must have $s = \Omega(\varepsilon^{-1}\log n/\log(1/\varepsilon))$ if $m < O(\varepsilon^{-2}\log n)$. This improves the previous lower bound of $s = \Omega(\min\{\varepsilon^{-2}, \varepsilon^{-1}\sqrt{\log_m d}\})$ by [Dasgupta-Kumar-Sarlos, STOC 2010], which only held against the stronger property of distributional JL, and only against a certain restricted class of distributions. Meanwhile our lower bound is against the JL lemma itself, with no restrictions. Our lower bound matches the sparse JL upper bound of [Kane-Nelson, SODA 2012] up to an $O(\log(1/\varepsilon))$ factor. Next, we show that any $m \times n$ matrix with the k-restricted isometry property (RIP) with constant distortion must have $\Omega(k \log(n/k))$ non-zeroes per column if $m = O(k \log(n/k))$, the optimal number of rows for RIP, and $k < n/\mathrm{polylog}(n)$. This improves the previous lower bound of $\Omega(\min\{k, n/m\})$ by [Chandar, 2010] and shows that for most k it is impossible to have a sparse RIP matrix with an optimal number of rows. Both lower bounds above also offer a tradeoff between sparsity and the number of rows. Lastly, we show that any oblivious distribution over subspace embedding matrices with 1 non-zero per column and preserving distances in a d-dimensional subspace up to a constant factor must have at least $\Omega(d^2)$ rows. This matches an upper bound in [Nelson-Nguyên, arXiv abs/1211.1002] and shows the impossibility of obtaining the best of both constructions in that work, namely 1 non-zero per column and $d \cdot \mathrm{polylog}\,d$ rows.
New constructions of RIP matrices with fast multiplication and fewer rows In this paper, we present novel constructions of matrices with the restricted isometry property (RIP) that support fast matrix-vector multiplication. Our guarantees are the best known, and can also be used to obtain the best known guarantees for fast Johnson-Lindenstrauss transforms. In compressed sensing, the restricted isometry property is a sufficient condition for the efficient reconstruction of a nearly k-sparse vector $x \in \mathbb{C}^d$ from m linear measurements $\Phi x$. It is desirable for m to be small, and further it is desirable for $\Phi$ to support fast matrix-vector multiplication. Among other applications, fast multiplication improves the runtime of iterative recovery algorithms which repeatedly multiply by $\Phi$ or $\Phi^*$. The main contribution of this work is a novel randomized construction of RIP matrices $\Phi \in \mathbb{C}^{m \times d}$, preserving the $\ell_2$ norms of all k-sparse vectors with distortion $1 + \varepsilon$, where the matrix-vector multiply $\Phi x$ can be computed in nearly linear time. The number of rows m is on the order of $\varepsilon^{-2} k \log d \, \log^2(k \log d)$, an improvement on previous analyses by a logarithmic factor. Our construction, together with a connection between RIP matrices and the Johnson-Lindenstrauss lemma in [Krahmer-Ward, SIAM J. Math. Anal. 2011], also implies fast Johnson-Lindenstrauss embeddings with asymptotically fewer rows than previously known. Our construction is actually a recipe for improving any existing family of RIP matrices. Briefly, we apply an appropriate sparse hash matrix with sign flips to any suitable family of RIP matrices. We show that the embedding properties of the original family are maintained, while at the same time improving the number of rows. The main tool in our analysis is a recent bound for the supremum of certain types of Rademacher chaos processes in [Krahmer-Mendelson-Rauhut, Comm. Pure Appl. Math., to appear].
Performance Bounds for Expander-Based Compressed Sensing in Poisson Noise This paper provides performance bounds for compressed sensing in the presence of Poisson noise using expander graphs. The Poisson noise model is appropriate for a variety of applications, including low-light imaging and digital streaming, where the signal-independent and/or bounded noise models used in the compressed sensing literature are no longer applicable. In this paper, we develop a novel sensing paradigm based on expander graphs and propose a maximum a posteriori (MAP) algorithm for recovering sparse or compressible signals from Poisson observations. The geometry of the expander graphs and the positivity of the corresponding sensing matrices play a crucial role in establishing the bounds on the signal reconstruction error of the proposed algorithm. We support our results with experimental demonstrations of reconstructing average packet arrival rates and instantaneous packet counts at a router in a communication network, where the arrivals of packets in each flow follow a Poisson process.
New Bounds for Restricted Isometry Constants This paper discusses new bounds for restricted isometry constants in compressed sensing. Let $\Phi$ be an $n \times p$ real matrix and k be a positive integer with $k \le n$. One of the main results of this paper shows that if the restricted isometry constant $\delta_k$ of $\Phi$ satisfies $\delta_k < 0.307$ then k-sparse signals are guaranteed to be recovered exactly via $\ell_1$ minimization when no noise is present and k-sparse signals can be estimated stably in the noisy case. It is also shown that the bound cannot be substantially improved. An explicit example is constructed in which $\delta_k = \frac{k-1}{2k-1} < 0.5$, but it is impossible to recover certain k-sparse signals.
The Gelfand widths of $\ell_p$-balls for $0 < p \le 1$ We provide sharp lower and upper bounds for the Gelfand widths of $\ell_p$-balls in the N-dimensional $\ell_q^N$-space for $0 < p \le 1$ and $p < q \le 2$.
CoSaMP: iterative signal recovery from incomplete and inaccurate samples Compressive sampling (CoSa) is a new paradigm for developing data sampling technologies. It is based on the principle that many types of vector-space data are compressible, which is a term of art in mathematical signal processing. The key ideas are that randomized dimension reduction preserves the information in a compressible signal and that it is possible to develop hardware devices that implement this dimension reduction efficiently. The main computational challenge in CoSa is to reconstruct a compressible signal from the reduced representation acquired by the sampling device. This extended abstract describes a recent algorithm, called CoSaMP, that accomplishes the data recovery task. It was the first known method to offer near-optimal guarantees on resource usage.
Greed is good: algorithmic results for sparse approximation This article presents new results on using a greedy algorithm, orthogonal matching pursuit (OMP), to solve the sparse approximation problem over redundant dictionaries. It provides a sufficient condition under which both OMP and Donoho's basis pursuit (BP) paradigm can recover the optimal representation of an exactly sparse signal. It leverages this theory to show that both OMP and BP succeed for every sparse input signal from a wide class of dictionaries. These quasi-incoherent dictionaries offer a natural generalization of incoherent dictionaries, and the cumulative coherence function is introduced to quantify the level of incoherence. This analysis unifies all the recent results on BP and extends them to OMP. Furthermore, the paper develops a sufficient condition under which OMP can identify atoms from an optimal approximation of a nonsparse signal. From there, it argues that OMP is an approximation algorithm for the sparse problem over a quasi-incoherent dictionary. That is, for every input signal, OMP calculates a sparse approximant whose error is only a small factor worse than the minimal error that can be attained with the same number of terms.
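A minimal numpy rendering of the generic OMP iteration analyzed in the paper (select the most correlated atom, re-fit by least squares, update the residual); it is a sketch of the algorithm, not the authors' implementation, and the dictionary and sparse vector are synthetic.

```python
# Minimal orthogonal matching pursuit (OMP) sketch.
import numpy as np

def omp(A, y, k):
    """Greedily select k columns of A to approximate y."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))          # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef                 # orthogonalize against support
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((64, 256))
A /= np.linalg.norm(A, axis=0)                              # unit-norm dictionary
x_true = np.zeros(256)
x_true[[5, 90, 170]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, k=3)
print(sorted(np.nonzero(x_hat)[0]))                         # expected: [5, 90, 170]
```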
Fast Solution of l1-Norm Minimization Problems When the Solution May Be Sparse The minimum $\ell_1$-norm solution to an underdetermined system of linear equations y=Ax is often, remarkably, also the sparsest solution to that system. This sparsity-seeking property is of interest in signal processing and information transmission. However, general-purpose optimizers are much too slow for $\ell_1$ minimization in many large-scale applications. In this paper, the Homotopy method, origin...
Average case complexity of multivariate integration for smooth functions We study the average case complexity of multivariate integration for the class of smooth functions equipped with the folded Wiener sheet measure. The complexity is derived by reducing this problem to multivariate integration in the worst case setting but for a different space of functions. Fully constructive optimal information and an optimal algorithm are presented. Next, fully constructive almost optimal information and an almost optimal algorithm are also presented which have some advantages for practical implementation.
The Vienna Definition Language
Increasing energy efficiency in sensor networks: blue noise sampling and non-convex matrix completion The energy cost of a sensor network is dominated by the data acquisition and communication cost of individual sensors. At each sampling instant it is unnecessary to sample and communicate the data at all sensors since the data is highly redundant. We find that, if only (random) subset of the sensors acquires and transmits the sample values, it is possible to estimate the sample values at all the sensors under certain realistic assumptions. Since only a subset of all the sensors is active at each sampling instant, the energy cost of the network is reduced over time. When the sensor nodes are assumed to lie on a regular rectangular grid, the problem can be recast as a low-rank matrix completion problem. Current theoretical work on matrix completion relies on purely random sampling strategies and convex estimation algorithms. In this work, we will empirically show that better reconstruction results are obtained when more sophisticated sampling schemes are used followed by non-convex matrix completion algorithms. We find that the proposed approach gives surprisingly good results.
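As a baseline for the completion setting described above (the paper argues that blue-noise sampling patterns and non-convex solvers do better), here is a generic sketch that fills in missing sensor readings by iterative truncated-SVD imputation on a synthetic low-rank field; the rank, grid size and sampling rate are illustrative.

```python
# Baseline low-rank matrix completion by iterative truncated-SVD imputation.
import numpy as np

rng = np.random.default_rng(0)
n, r = 40, 3
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # rank-3 "sensor field"
mask = rng.random((n, n)) < 0.5                                  # entries actually sampled

X = np.where(mask, M, 0.0)
for _ in range(200):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_low = (U[:, :r] * s[:r]) @ Vt[:r]       # project onto rank-r matrices
    X = np.where(mask, M, X_low)              # keep the observed samples fixed

rel_err = np.linalg.norm(X_low - M) / np.linalg.norm(M)
print(f"relative completion error: {rel_err:.3e}")
```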
Reweighted minimization model for MR image reconstruction with split Bregman method. Magnetic resonance (MR) image reconstruction aims to obtain a usable gray-scale image from a few frequency-domain coefficients. In this paper, different reweighted minimization models for MR image reconstruction are studied, and a novel model named the reweighted wavelet+TV minimization model is proposed. By using the split Bregman method, an iterative minimization algorithm for solving this new model is obtained, and its convergence is established. Numerical simulations show that the proposed model and its algorithm are feasible and highly efficient.
Fuzzy OWA model for information security risk management One of the methods for information security risk assessment is the substantiated choice and implementation of countermeasures against threats. A situational fuzzy OWA model of a multicriteria decision-making problem concerning the choice of countermeasures for reducing information security risks is proposed. The proposed model makes it possible to modify the associated weights of criteria based on the information entropy with respect to the aggregation situation. The advantage of the model is the continuous improvement of the weights of the criteria and of the aggregation of experts' opinions depending on the parameter characterizing the aggregation situation.
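The OWA aggregation at the core of such models is simple to state in code; the sketch below shows the rank-ordered weighted sum and the entropy (dispersion) of the weights, while the paper's situational, entropy-driven weight adjustment itself is not reproduced and the scores are hypothetical.

```python
# Ordered weighted averaging (OWA) core used in such risk models (minimal sketch).
import numpy as np

def owa(scores, weights):
    """Aggregate criteria scores with OWA: weights attach to rank positions."""
    return float(np.dot(np.sort(scores)[::-1], weights))

def dispersion(weights):
    """Shannon entropy of the OWA weights (higher = more balanced aggregation)."""
    w = np.asarray(weights, dtype=float)
    return float(-(w[w > 0] * np.log(w[w > 0])).sum())

risk_scores = [0.8, 0.3, 0.6, 0.5]      # hypothetical per-criterion risk levels
weights = [0.4, 0.3, 0.2, 0.1]          # optimism-leaning positional weights
print(owa(risk_scores, weights), dispersion(weights))
```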
1.056661
0.062496
0.062496
0.051652
0.012913
0.006944
0.000773
0.000094
0.000008
0
0
0
0
0
Duality Theory in Fuzzy Optimization Problems A solution concept of fuzzy optimization problems, which is essentially similar to the notion of Pareto optimal solution (nondominated solution) in multiobjective programming problems, is introduced by imposing a partial ordering on the set of all fuzzy numbers. We also introduce a concept of fuzzy scalar (inner) product based on the positive and negative parts of fuzzy numbers. Then the fuzzy-valued Lagrangian function and the fuzzy-valued Lagrangian dual function for the fuzzy optimization problem are proposed via the concept of fuzzy scalar product. Under these settings, the weak and strong duality theorems for fuzzy optimization problems can be elicited. We show that there is no duality gap between the primal and dual fuzzy optimization problems under suitable assumptions for fuzzy-valued functions.
Duality theory in fuzzy optimization problems formulated by the Wolfe's primal and dual pair The weak and strong duality theorems in fuzzy optimization problems based on the formulation of Wolfe's primal and dual pair problems are derived in this paper. The solution concepts of the primal and dual problems are inspired by the nondominated solution concept employed in multiobjective programming problems, since the ordering among the fuzzy numbers introduced in this paper is a partial ordering. In order to consider the differentiation of a fuzzy-valued function, we invoke the Hausdorff metric to define the distance between two fuzzy numbers and the Hukuhara difference to define the difference of two fuzzy numbers. Under these settings, the Wolfe's dual problem can be formulated by considering the gradients of differentiable fuzzy-valued functions. The concept of having no duality gap in the weak and strong sense is also introduced, and the strong duality theorems in the weak and strong sense are then derived naturally.
Scalarization of the fuzzy optimization problems Scalarization of the fuzzy optimization problems using the embedding theorem and the concept of convex cone (ordering cone) is proposed in this paper. Two solution concepts are proposed by considering two convex cones. The set of all fuzzy numbers can be embedded into a normed space. This motivation naturally inspires us to invoke the scalarization techniques in vector optimization problems to solve the fuzzy optimization problems. By applying scalarization to the optimization problem with fuzzy coefficients, we obtain its corresponding scalar optimization problem. Finally, we show that the optimal solution of its corresponding scalar optimization problem is the optimal solution of the original fuzzy optimization problem.
Duality Theory in Fuzzy Linear Programming Problems with Fuzzy Coefficients The concept of fuzzy scalar (inner) product that will be used in the fuzzy objective and inequality constraints of the fuzzy primal and dual linear programming problems with fuzzy coefficients is proposed in this paper. We also introduce a solution concept that is essentially similar to the notion of Pareto optimal solution in the multiobjective programming problems by imposing a partial ordering on the set of all fuzzy numbers. We then prove the weak and strong duality theorems for fuzzy linear programming problems with fuzzy coefficients.
The Vienna Definition Language
Fuzzy algorithms
First-order incremental block-based statistical timing analysis Variability in digital integrated circuits makes timing verification an extremely challenging task. In this paper, a canonical first order delay model is proposed that takes into account both correlated and independent randomness. A novel linear-time block-based statistical timing algorithm is employed to propagate timing quantities like arrival times and required arrival times through the timing graph in this canonical form. At the end of the statistical timing, the sensitivities of all timing quantities to each of the sources of variation are available. Excessive sensitivities can then be targeted by manual or automatic optimization methods to improve the robustness of the design. This paper also reports the first incremental statistical timer in the literature which is suitable for use in the inner loop of physical synthesis or other optimization programs. The third novel contribution of this paper is the computation of local and global criticality probabilities. For a very small cost in CPU time, the probability of each edge or node of the timing graph being critical is computed. Numerical results are presented on industrial ASIC chips with over two million logic gates.
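A sketch of the canonical first-order form as I read the abstract (a nominal value plus sensitivities to shared unit-normal variation sources plus an independent term) and of the "sum" operation for delays in series; the statistical "max" via tightness probabilities, which is the harder part of the paper, is deliberately omitted, and the numbers are hypothetical.

```python
# Canonical first-order delay form a0 + sum_i a_i*dX_i + a_r*dR and its "sum".
import numpy as np

class Canonical:
    def __init__(self, mean, sens, indep):
        self.mean = mean                        # nominal value a0
        self.sens = np.asarray(sens, float)     # sensitivities to shared unit-normal sources
        self.indep = float(indep)               # sensitivity of the independent unit-normal term

    def __add__(self, other):                   # delays in series: sensitivities add
        return Canonical(self.mean + other.mean,
                         self.sens + other.sens,
                         float(np.hypot(self.indep, other.indep)))

    def sigma(self):
        return float(np.sqrt(np.sum(self.sens ** 2) + self.indep ** 2))

gate = Canonical(50.0, [2.0, 1.0], 0.5)         # hypothetical gate delay (ps)
wire = Canonical(20.0, [0.5, 1.5], 0.3)         # hypothetical wire delay (ps)
path = gate + wire
print(path.mean, path.sigma())
```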
Tensor rank is NP-complete We prove that computing the rank of a three-dimensional tensor over any finite field is NP-complete. Over the rational numbers the problem is NP-hard.
A model based on linguistic 2-tuples for dealing with multigranular hierarchical linguistic contexts in multi-expert decision-making In those problems that deal with multiple sources of linguistic information we can find problems defined in contexts where the linguistic assessments are expressed in linguistic term sets with different granularity of uncertainty and/or semantics (multigranular linguistic contexts). Different approaches have been developed to manage this type of context, which unify the multigranular linguistic information in a unique linguistic term set for easy management of the information. This normalization process can produce a loss of information and hence a lack of precision in the final results. In this paper, we shall present a type of multigranular linguistic context we shall call linguistic hierarchy term sets, such that, when we deal with multigranular linguistic information assessed in these structures, we can unify the information assessed in them without loss of information. To do so, we shall use the 2-tuple linguistic representation model. Afterwards we shall develop a linguistic decision model dealing with multigranular linguistic contexts and apply it to a multi-expert decision-making problem.
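A small sketch of the 2-tuple representation and of the granularity change used by linguistic hierarchies; the scaling formula is the standard one from the 2-tuple literature rather than a transcription of the paper, and the term indices, rounding convention and example values are illustrative.

```python
# 2-tuple linguistic representation and granularity change (illustrative sketch).
def to_two_tuple(beta):
    i = round(beta)
    return i, beta - i                  # (term index of s_i, symbolic translation alpha)

def from_two_tuple(i, alpha):
    return i + alpha

def change_granularity(i, alpha, g_from, g_to):
    """Re-express a 2-tuple from a set of g_from terms in a set of g_to terms."""
    beta = from_two_tuple(i, alpha) * (g_to - 1) / (g_from - 1)
    return to_two_tuple(beta)

# An assessment (s_3, 0.2) on a 5-term scale expressed on a 9-term scale:
print(change_granularity(3, 0.2, g_from=5, g_to=9))   # -> (6, 0.4)
```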
User profiles and fuzzy logic for web retrieval issues We present a study of the role of user profiles using fuzzy logic in web retrieval processes. Flexibility for user interaction and for adaptation in profile construction becomes an important issue. We focus our study on user profiles, including creation, modification, storage, clustering and interpretation. We also consider the role of fuzzy logic and other soft computing techniques to improve user profiles. Extended profiles contain additional information related to the user that can be used to personalize and customize the retrieval process as well as the web site. Web mining processes can be carried out by means of fuzzy clustering of these extended profiles and fuzzy rule construction. Fuzzy inference can be used in order to modify queries and extract knowledge from profiles with marketing purposes within a web framework. An architecture of a portal that could support web mining technology is also presented.
Asymptotic Analysis of MAP Estimation via the Replica Method and Applications to Compressed Sensing The replica method is a nonrigorous but well-known technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method, under the assumption of replica symmetry, to study estimators that are maximum a posteriori (MAP) under a postulated prior distribution. It is shown that with random linear measurements and Gaussian noise, the replica-symmetric prediction of the asymptotic behavior of the postulated MAP estimate of an $n$-dimensional vector “decouples” as $n$ scalar postulated MAP estimators. The result is based on applying a hardening argument to the replica analysis of postulated posterior mean estimators of Tanaka and of Guo and Verdú. The replica-symmetric postulated MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, least absolute shrinkage and selection operator (LASSO), linear estimation with thresholding, and zero norm-regularized estimation. In the case of LASSO estimation, the scalar estimator reduces to a soft-thresholding operator, and for zero norm-regularized estimation, it reduces to a hard threshold. Among other benefits, the replica method provides a computationally tractable method for precisely predicting various performance metrics including mean-squared error and sparsity pattern recovery probability.
User impatience and network performance In this work, we analyze from passive measurements the correlations between the user-induced interruptions of TCP connections and different end-to-end performance metrics. The aim of this study is to assess the possibility for a network operator to take into account the customers' experience for network monitoring. We first observe that the usual connection-level performance metrics of the interrupted connections are not very different, and sometimes better than those of normal connections. However, the request-level performance metrics show stronger correlations between the interruption rates and the network quality-of-service. Furthermore, we show that the user impatience could also be used to characterize the relative sensitivity of data applications to various network performance metrics.
Fuzzy modeling of system behavior for risk and reliability analysis The main objective of the article is to enable reliability analysts/engineers/managers/practitioners to analyze the failure behavior of a system in a more consistent and logical manner. To this effect, the authors propose a methodological and structured framework which makes use of both qualitative and quantitative techniques for risk and reliability analysis of the system. The framework has been applied to model and analyze a complex industrial system from a paper mill. In the quantitative framework, after developing the Petri net model of the system, the fuzzy synthesis of failure and repair data (using fuzzy arithmetic operations) is carried out. Various system parameters of managerial importance such as repair time, failure rate, mean time between failures, availability, and expected number of failures are computed to quantify the behavior in terms of fuzzy, crisp and defuzzified values. Further, to improve the reliability and maintainability characteristics of the system, an in-depth qualitative analysis of the system is carried out using failure mode and effect analysis (FMEA) by listing all possible failure modes, their causes and their effects on system performance. To address the limitations of the traditional FMEA method based on the risk priority number (RPN) score, a risk ranking approach based on fuzzy and grey relational analysis is proposed to prioritize failure causes.
Split Bregman iterative algorithm for sparse reconstruction of electrical impedance tomography In this paper, we present an evaluation of the split Bregman iterative algorithm for the L1-norm regularized inverse problem of electrical impedance tomography. Simulations are performed to validate that our algorithm is competitive in terms of imaging quality and computational speed in comparison with several state-of-the-art algorithms. Results also indicate that, in contrast to the conventional L2-norm regularization method and the total variation (TV) regularization method, the L1-norm regularization method can sharpen the edges and is more robust against data noise.
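The abstract does not spell out the iteration itself; the following is a hedged sketch of a generic split Bregman solver for an l1-regularized least-squares problem min_x 0.5*||Ax - b||^2 + mu*||x||_1, used here as a simplified stand-in for the EIT inverse problem. The penalty weight lam and the dense linear solve are illustration choices, not the paper's implementation.

```python
import numpy as np

def split_bregman_l1(A, b, mu, lam=1.0, iters=100):
    """Split Bregman for min_x 0.5*||Ax - b||^2 + mu*||x||_1 (toy sketch)."""
    n = A.shape[1]
    x = np.zeros(n); d = np.zeros(n); bb = np.zeros(n)
    Atb = A.T @ b
    M = A.T @ A + lam * np.eye(n)       # x-update matrix is fixed across iterations
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + lam * (d - bb))               # quadratic subproblem
        d = np.sign(x + bb) * np.maximum(np.abs(x + bb) - mu / lam, 0.0)  # shrinkage
        bb = bb + x - d                                            # Bregman variable update
    return x
```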
1.050579
0.072727
0.072727
0.03624
0.000008
0.000003
0
0
0
0
0
0
0
0
Generalised Polynomial Chaos for a Class of Linear Conservation Laws Mathematical modelling of dynamical systems often yields partial differential equations (PDEs) in time and space, which represent a conservation law possibly including a source term. Uncertainties in physical parameters can be described by random variables. To resolve the stochastic model, the Galerkin technique of the generalised polynomial chaos results in a larger coupled system of PDEs. We consider a certain class of linear systems of conservation laws, which exhibit a hyperbolic structure. Accordingly, we analyse the hyperbolicity of the corresponding coupled system of linear conservation laws from the polynomial chaos. Numerical results of two illustrative examples are presented.
A Well-Balanced Stochastic Galerkin Method for Scalar Hyperbolic Balance Laws with Random Inputs We propose a generalized polynomial chaos based stochastic Galerkin method for scalar hyperbolic balance laws with random geometric source terms or random initial data. This method is well-balanced (WB), in the sense that it captures the stochastic steady-state solution with high-order accuracy. The framework of the stochastic WB schemes is presented in detail, along with several numerical examples to illustrate their accuracy and effectiveness. The goal of this paper is to show that the stochastic WB scheme yields a more accurate numerical solution at steady state than the non-WB ones.
Multi-level Monte Carlo finite volume methods for nonlinear systems of conservation laws in multi-dimensions We extend the multi-level Monte Carlo (MLMC) in order to quantify uncertainty in the solutions of multi-dimensional hyperbolic systems of conservation laws with uncertain initial data. The algorithm is presented and several issues arising in the massively parallel numerical implementation are addressed. In particular, we present a novel load balancing procedure that ensures scalability of the MLMC algorithm on massively parallel hardware. A new code is described and applied to simulate uncertain solutions of the Euler equations and ideal magnetohydrodynamics (MHD) equations. Numerical experiments showing the robustness, efficiency and scalability of the proposed algorithm are presented.
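As a hedged illustration of the multi-level Monte Carlo estimator (on a toy scalar SDE rather than the multi-dimensional conservation laws treated in the paper), the telescoping sum adds coarse/fine correction terms computed from coupled samples; the equal per-level sample count below is a simplification, not the paper's load-balanced allocation.

```python
import numpy as np

def euler_gbm(T, dW, s0=1.0, mu=0.05, sigma=0.2):
    """Vectorized Euler-Maruyama for geometric Brownian motion; dW is (n_paths, n_steps)."""
    n_steps = dW.shape[1]
    dt = T / n_steps
    s = np.full(dW.shape[0], s0)
    for k in range(n_steps):
        s = s + mu * s * dt + sigma * s * dW[:, k]
    return s

def mlmc_estimate(T=1.0, L=4, N=5000, rng=np.random.default_rng(0)):
    """MLMC estimate of E[S_T]; level l uses 2**(l+1) time steps."""
    est = 0.0
    for l in range(L + 1):
        nf = 2 ** (l + 1)
        dWf = rng.normal(0.0, np.sqrt(T / nf), size=(N, nf))
        fine = euler_gbm(T, dWf)
        if l == 0:
            est += fine.mean()                   # plain Monte Carlo on the coarsest level
        else:
            dWc = dWf[:, 0::2] + dWf[:, 1::2]    # coupled coarse increments (pairwise sums)
            est += (fine - euler_gbm(T, dWc)).mean()
    return est

print(mlmc_estimate())   # exact E[S_T] = exp(0.05), roughly 1.05
```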
A semi-intrusive deterministic approach to uncertainty quantification in non-linear fluid flow problems. This paper deals with the formulation of a semi-intrusive (SI) method allowing the computation of statistics of solutions of linear and non-linear PDEs. The method proves very efficient for probability density functions of arbitrary form, long-term integration, and discontinuities in stochastic space. Given a stochastic PDE where randomness is defined on Ω, starting from (i) a description of the solution in terms of the space variables, (ii) a numerical scheme defined for any event ω ∈ Ω and (iii) a family of random variables that may be correlated, the solution is numerically described by its conditional expectancies of point values or cell averages and its evaluation constructed from the deterministic scheme. One of the tools is a tessellation of the random space, as in finite volume methods for the space variables. Then, using these conditional expectancies and the geometrical description of the tessellation, a piecewise polynomial approximation in the random variables is computed using a reconstruction method that is standard for high-order finite volume schemes, except that the measure is no longer the standard Lebesgue measure but the probability measure. This reconstruction is then used to formulate a scheme for the numerical approximation of the solution from the deterministic scheme. This new approach is said to be semi-intrusive because it requires only a limited amount of modification in a deterministic solver to quantify uncertainty on the state when the solver includes uncertain variables. The effectiveness of this method is illustrated for a modified version of the Kraichnan-Orszag three-mode problem where a discontinuous pdf is associated with the stochastic variable, and for a nozzle flow with shocks. The results have been analyzed in terms of accuracy and probability measure flexibility. Finally, the importance of the probabilistic reconstruction in the stochastic space is shown on an example where the exact solution is computable, the viscous Burgers equation.
A convergence study for SPDEs using combined Polynomial Chaos and Dynamically-Orthogonal schemes. We study the convergence properties of the recently developed Dynamically Orthogonal (DO) field equations [1] in comparison with the Polynomial Chaos (PC) method. To this end, we consider a series of one-dimensional prototype SPDEs, whose solution can be expressed analytically, and which are associated with both linear (advection equation) and nonlinear (Burgers equation) problems with excitations that lead to unimodal and strongly bi-modal distributions. We also propose a hybrid approach to tackle the singular limit of the DO equations for the case of deterministic initial conditions. The results reveal that the DO method converges exponentially fast with respect to the number of modes (for the problems considered), giving levels of computational accuracy comparable with the PC method but (in many cases) with substantially smaller computational cost compared to stochastic collocation, especially when the involved parametric space is high-dimensional.
A dynamically bi-orthogonal method for time-dependent stochastic partial differential equations I: Derivation and algorithms We propose a dynamically bi-orthogonal method (DyBO) to solve time dependent stochastic partial differential equations (SPDEs). The objective of our method is to exploit some intrinsic sparse structure in the stochastic solution by constructing the sparsest representation of the stochastic solution via a bi-orthogonal basis. It is well-known that the Karhunen-Loeve expansion (KLE) minimizes the total mean squared error and gives the sparsest representation of stochastic solutions. However, the computation of the KL expansion could be quite expensive since we need to form a covariance matrix and solve a large-scale eigenvalue problem. The main contribution of this paper is that we derive an equivalent system that governs the evolution of the spatial and stochastic basis in the KL expansion. Unlike other reduced model methods, our method constructs the reduced basis on-the-fly without the need to form the covariance matrix or to compute its eigendecomposition. In the first part of our paper, we introduce the derivation of the dynamically bi-orthogonal formulation for SPDEs, discuss several theoretical issues, such as the dynamic bi-orthogonality preservation and some preliminary error analysis of the DyBO method. We also give some numerical implementation details of the DyBO methods, including the representation of stochastic basis and techniques to deal with eigenvalue crossing. In the second part of our paper [11], we will present an adaptive strategy to dynamically remove or add modes, perform a detailed complexity analysis, and discuss various generalizations of this approach. An extensive range of numerical experiments will be provided in both parts to demonstrate the effectiveness of the DyBO method.
A Stochastic Collocation Method for Elliptic Partial Differential Equations with Random Input Data Abstract In this paper we propose and analyze a Stochastic Collocation method to solve elliptic Partial Differential Equations with random coefficients and forcing terms (input data of the model). The input data are assumed to depend on a finite number of random variables. The method consists in a Galerkin approximation in space and a collocation in the zeros of suitable tensor product orthogonal polynomials (Gauss points) in the probability space and naturally leads to the solution of uncoupled deterministic problems as in the Monte Carlo approach. It can be seen as a generalization of the Stochastic Galerkin method proposed in [Babuška-Tempone-Zouraris, SIAM J. Num. Anal. 42 (2004)] and allows one to treat easily a wider range of situations, such as: input data that depend non-linearly on the random variables, diffusivity coefficients with unbounded second moments, random variables that are correlated or have unbounded support. We provide a rigorous convergence analysis and demonstrate exponential convergence of the "probability error" with respect to the number of Gauss points in each direction in the probability space, under some regularity assumptions on the random input data. Numerical examples show the effectiveness of the method. Key words: collocation method, stochastic PDEs, finite elements, uncertainty quantification, exponential convergence. AMS subject classification: 65N35, 65N15, 65C20
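In one random dimension the collocation idea reduces to evaluating a deterministic solver at Gauss points and forming a weighted sum. A toy sketch with a single Gaussian random variable and an analytically solvable two-point boundary value problem follows; the model and its parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy problem: -d/dx( a(xi) du/dx ) = 1 on (0,1), u(0) = u(1) = 0,
# with log-normal coefficient a(xi) = exp(sigma * xi), xi ~ N(0, 1).
# For a coefficient constant in x, the deterministic solution is u(x) = x(1 - x) / (2a).
sigma = 0.3

def deterministic_solve(xi, x=0.5):
    a = np.exp(sigma * xi)
    return x * (1.0 - x) / (2.0 * a)

# Gauss-Hermite points/weights for the standard normal (probabilists' version)
nodes, weights = np.polynomial.hermite_e.hermegauss(8)
weights = weights / np.sqrt(2.0 * np.pi)   # normalize to a probability measure

mean_u = sum(w * deterministic_solve(z) for z, w in zip(nodes, weights))
print(mean_u)   # reference: E[u(0.5)] = 0.125 * exp(sigma**2 / 2)
```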
An algorithm for pronominal anaphora resolution This paper presents an algorithm for identifying the noun phrase antecedents of third person pronouns and lexical anaphors (reflexives and reciprocals). The algorithm applies to the syntactic representations generated by McCord's Slot Grammar parser and relies on salience measures derived from syntactic structure and a simple dynamic model of attentional state. Like the parser, the algorithm is implemented in Prolog. The authors have tested it extensively on computer manual texts and conducted a blind test on manual text containing 360 pronoun occurrences. The algorithm successfully identifies the antecedent of the pronoun for 86% of these pronoun occurrences. The relative contributions of the algorithm's components to its overall success rate in this blind test are examined. Experiments were conducted with an enhancement of the algorithm that contributes statistically modelled information concerning semantic and real-world relations to the algorithm's decision procedure. Interestingly, this enhancement only marginally improves the algorithm's performance (by 2%). The algorithm is compared with other approaches to anaphora resolution that have been proposed in the literature. In particular, the search procedure of Hobbs' algorithm was implemented in the Slot Grammar framework and applied to the sentences in the blind test set. The authors' algorithm achieves a higher rate of success (by 4%) than Hobbs' algorithm. The relation of the algorithm to the centering approach is discussed, as well as to models of anaphora resolution that invoke a variety of informational factors in ranking antecedent candidates.
A generic quantitative relationship between quality of experience and quality of service Quality of experience ties together user perception, experience, and expectations to application and network performance, typically expressed by quality of service parameters. Quantitative relationships between QoE and QoS are required in order to be able to build effective QoE control mechanisms onto measurable QoS parameters. Against this background, this article proposes a generic formula in which QoE and QoS parameters are connected through an exponential relationship, called the IQX hypothesis. The formula relates changes of QoE with respect to QoS to the current level of QoE, is simple to match, and its limit behaviors are straightforward to interpret. The article validates the IQX hypothesis for streaming services, where QoE in terms of Mean Opinion Scores is expressed as a function of loss and reordering ratio, the latter of which is caused by jitter. For web surfing as the second application area, matchings provided by the IQX hypothesis are shown to outperform previously published logarithmic functions. We conclude that the IQX hypothesis is a strong candidate to be taken into account when deriving relationships between QoE and QoS parameters.
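A hedged sketch of what fitting an IQX-style exponential mapping QoE = alpha*exp(-beta*x) + gamma looks like in practice; the sample points below are hypothetical, not measurements from the article.

```python
import numpy as np
from scipy.optimize import curve_fit

# IQX-style exponential mapping between a QoS impairment x (e.g. packet loss in %)
# and QoE (e.g. a MOS value): QoE = alpha * exp(-beta * x) + gamma.
def iqx(x, alpha, beta, gamma):
    return alpha * np.exp(-beta * x) + gamma

# Hypothetical measurement points (loss ratio in %, MOS); illustrative only.
loss = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
mos  = np.array([4.4, 3.9, 3.5, 2.9, 2.1, 1.4])

params, _ = curve_fit(iqx, loss, mos, p0=(3.5, 0.5, 1.0))
print("alpha, beta, gamma =", params)
```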
Semantics of concurrent systems: a modular fixed-point trace approach A method for finding the set of processes generated by a concurrent system (the behaviour of a system) in a modular way is presented. A system is decomposed into modules with behaviours assumed to be known, and then the behaviours are successively put together, finally giving the initial system behaviour. It is shown that there is much freedom in the choice of modules; in the extreme case, atoms of a system, i.e. subsystems containing only one resource, can be taken as modules; each atom has its behaviour defined a priori. The basic operation used for composing behaviours is the synchronization operation defined in the paper. The fixed-point method of describing sets of processes is extensively applied, with processes regarded as traces rather than strings of actions.
Distributed Compressive Sensing Compressed sensing is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for reconstruction. In this paper we introduce a new theory for distributed compressed sensing (DCS) that enables new distributed coding algorithms for multi-signal ensembles that exploit both intra- and inter-signal correlation structures. The DCS theory rests on a new concept that we term the joint sparsity of a signal ensemble. We study in detail three simple models for jointly sparse signals, propose algorithms for joint recovery of multiple signals from incoherent projections, and characterize theoretically and empirically the number of measurements per sensor required for accurate reconstruction. We establish a parallel with the Slepian-Wolf theorem from information theory and establish upper and lower bounds on the measurement rates required for encoding jointly sparse signals. In two of our three models, the results are asymptotically best-possible, meaning that both the upper and lower bounds match the performance of our practical algorithms. Moreover, simulations indicate that the asymptotics take effect with just a moderate number of signals. In some sense DCS is a framework for distributed compression of sources with memory, which has remained a challenging problem for some time. DCS is immediately applicable to a range of problems in sensor networks and arrays.
Analyzing FD inference in relational databases Imprecise inference models the ability to infer sets of values or information chunks. Imprecise database inference is just as important as precise inference. In fact, it is more prevalent than its precise counterpart even in precise databases. Analyzing the extent of imprecise inference is important in knowledge discovery and database security. Imprecise inference analysis can be used to "mine" rule-based knowledge from database data. In database security, imprecise inference analysis can help...
Variation-aware interconnect extraction using statistical moment preserving model order reduction In this paper we present a stochastic model order reduction technique for interconnect extraction in the presence of process variabilities, i.e. variation-aware extraction. It is becoming increasingly evident that sampling based methods for variation-aware extraction are more efficient than more computationally complex techniques such as the stochastic Galerkin method or the Neumann expansion. However, one of the remaining computational challenges of sampling based methods is how to simultaneously and efficiently solve the large number of linear systems corresponding to each different sample point. In this paper, we present a stochastic model reduction technique that exploits the similarity among the different solves to reduce the computational complexity of subsequent solves. We first suggest how to build a projection matrix such that the statistical moments and/or the coefficients of the projection of the stochastic vector on some orthogonal polynomials are preserved. We further introduce a proximity measure, which we use to determine a priori if a given system needs to be solved, or if it is instead properly represented using the currently available basis. Finally, in order to reduce the time required for the system assembly, we use the multivariate Hermite expansion to represent the system matrix. We verify our method by solving a variety of variation-aware capacitance extraction problems ranging from on-chip capacitance extraction in the presence of width and thickness variations, to off-chip capacitance extraction in the presence of surface roughness. We further solve very large scale problems that cannot be handled by any other state of the art technique.
On Fuzziness, Its Homeland and Its Neighbour
1.105
0.036667
0.0275
0.01375
0.00125
0.000455
0.000118
0
0
0
0
0
0
0
PRIMA: passive reduced-order interconnect macromodeling algorithm This paper describes PRIMA, an algorithm for generating provably passive reduced order N-port models for RLC interconnect circuits. It is demonstrated that, in addition to requiring macromodel stability, macromodel passivity is needed to guarantee the overall circuit stability once the active and passive driver/load models are connected. PRIMA extends the block Arnoldi technique to include guaranteed passivity. Moreover, it is empirically observed that the accuracy is superior to existing block Arnoldi methods. While the same passivity extension is not possible for MPVL, we observed comparable accuracy in the frequency domain for all examples considered. Additionally a path tracing algorithm is used to calculate the reduced order macromodel with the utmost efficiency for generalized RLC interconnects.
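A much-simplified, single-expansion-point sketch of the congruence-transformation idea behind PRIMA: build a Krylov basis, orthonormalize it, and project the system matrices, so that symmetric positive-definite structure (and hence passivity for RC models) is preserved. This is not the paper's full block Arnoldi algorithm, and the dense inverse is for demonstration only.

```python
import numpy as np

def prima_like_reduce(G, C, B, n_blocks):
    """Congruence-transformation reduction of C x' + G x = B u, y = B^T x.
    Builds a block Krylov basis span{G^-1 B, (G^-1 C) G^-1 B, ...} at s0 = 0,
    orthonormalizes it, and projects G, C, B onto it (simplified PRIMA-style sketch)."""
    Ginv = np.linalg.inv(G)          # fine for a small demo; use a sparse factorization in practice
    K = [Ginv @ B]
    for _ in range(n_blocks - 1):
        K.append(Ginv @ (C @ K[-1]))
    V, _ = np.linalg.qr(np.hstack(K))            # orthonormal projection basis
    return V.T @ G @ V, V.T @ C @ V, V.T @ B, V  # reduced matrices via congruence

# Tiny demo with random symmetric positive-definite G and C (2-port, order 20 -> 6)
rng = np.random.default_rng(0)
M1, M2 = rng.normal(size=(20, 20)), rng.normal(size=(20, 20))
G = M1 @ M1.T + 20 * np.eye(20)
C = M2 @ M2.T + 20 * np.eye(20)
B = rng.normal(size=(20, 2))
Gr, Cr, Br, V = prima_like_reduce(G, C, B, n_blocks=3)
print(Gr.shape)   # (6, 6)
```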
Speeding up Monte-Carlo Simulation for Statistical Timing Analysis of Digital Integrated Circuits This paper presents a pair of novel techniques to speed up path-based Monte-Carlo simulation for statistical timing analysis of digital integrated circuits with no loss of accuracy. The presented techniques can be used in isolation or they can be used together. Both techniques can be readily implemented in any statistical timing framework. We compare our proposed Monte-Carlo simulation with traditional Monte-Carlo simulation in a rigorous framework and show that the new method is up to 2 times as efficient as the traditional method.
Geometrically parameterized interconnect performance models for interconnect synthesis In this paper we describe an approach for generating geometrically-parameterized integrated-circuit interconnect models that are efficient enough for use in interconnect synthesis. The model generation approach presented is automatic, and is based on a multi-parameter model-reduction algorithm. The effectiveness of the technique is tested using a multi-line bus example, where both wire spacing and wire width are considered as geometric parameters. Experimental results demonstrate that the generated models accurately predict both delay and cross-talk effects over a wide range of spacing and width variation.
Stable and efficient reduction of large, multiport RC networks by pole analysis via congruence transformations A novel technique is presented which employs Pole Analysis via Congruence Transformations (PACT) to reduce RC networks in a well-conditioned manner. Pole analysis is shown to be more efficient than Padé approximations when the number of network ports is large, and congruence transformations preserve the passivity (and thus absolute stability) of the networks. Networks are represented by admittance matrices throughout the analysis, and this representation simplifies interfacing the reduced networks with circuit simulators as well as facilitates realization of the reduced networks using RC elements. A prototype SPICE-in, SPICE-out, network reduction CAD tool called RCFIT is detailed, and examples are presented which demonstrate the accuracy and efficiency of the PACT algorithm. 1. INTRODUCTION The trends in industry are to design CMOS VLSI circuits with smaller devices, higher clock speeds, lower power consumption, and more integration of analog and digital circuits; and these increase the importance of modeling layout-dependent parasitics. Resistance and capacitance of interconnect lines can delay transmitted signals. Supply line resistance and capacitance, in combination with package inductance, can lead to large variations of the supply voltage during digital switching and degrade circuit performance. In mixed-signal designs, the current injected into the substrate beneath digital devices may create significant noise in analog components through fluctuations of the local substrate voltage. In order for designers to accurately assess on-chip layout-dependent parasitics before fabrication, macromodels are extracted from a layout and included in the netlist used for circuit simulation. Very often, these effects are modeled solely with
A block rational Arnoldi algorithm for multipoint passive model-order reduction of multiport RLC networks Work in the area of model-order reduction for RLC interconnect networks has focused on building reduced-order models that preserve the circuit-theoretic properties of the network, such as stability, passivity, and synthesizability (Silveira et al., 1996). Passivity is the one circuit-theoretic property that is vital for the successful simulation of a large circuit netlist containing reduced-order models of its interconnect networks. Non-passive reduced-order models may lead to instabilities even if they are themselves stable. We address the problem of guaranteeing the accuracy and passivity of reduced-order models of multiport RLC networks at any finite number of expansion points. The novel passivity-preserving model-order reduction scheme is a block version of the rational Arnoldi algorithm (Ruhe, 1994). The scheme reduces to that of (Odabasioglu et al., 1997) when applied to a single expansion point at zero frequency. Although the treatment of this paper is restricted to expansion points that are on the negative real axis, it is shown that the resulting passive reduced-order model is superior in accuracy to the one that would result from expanding the original model around a single point. Nyquist plots are used to illustrate both the passivity and the accuracy of the reduced order models.
Projection-based approaches for model reduction of weakly nonlinear, time-varying systems The problem of automated macromodel generation is interesting from the viewpoint of system-level design because if small, accurate reduced-order models of system component blocks can be extracted, then much larger portions of a design, or more complicated systems, can be simulated or verified than if the analysis were to have to proceed at a detailed level. The prospect of generating the reduced model from a detailed analysis of component blocks is attractive because then the influence of second-order device effects or parasitic components on the overall system performance can be assessed. In this way overly conservative design specifications can be avoided. This paper reports on experiences with extending model reduction techniques to nonlinear systems of differential-algebraic equations, specifically, systems representative of RF circuit components. The discussion proceeds from linear time-varying, to weakly nonlinear, to nonlinear time-varying analysis, relying generally on perturbational techniques to handle deviations from the linear time-invariant case. The main intent is to explore which perturbational techniques work, which do not, and outline some problems that remain to be solved in developing robust, general nonlinear reduction methods.
Identification of PARAFAC-Volterra cubic models using an Alternating Recursive Least Squares algorithm A broad class of nonlinear systems can be modelled by the Volterra series representation. However, its practical use in nonlinear system identification is sometimes limited due to the large number of parameters associated with the Volterra filters structure. This paper is concerned with the problem of identification of third-order Volterra kernels. A tensorial decomposition called PARAFAC is used to represent such a kernel. A new algorithm called the Alternating Recursive Least Squares (ARLS) algorithm is applied to identify this decomposition for estimating the Volterra kernels of cubic systems. This method significantly reduces the computational complexity of Volterra kernel estimation. Simulation results show the ability of the proposed method to achieve a good identification and an important complexity reduction, i.e. representation of Volterra cubic kernels with few parameters.
Impact of interconnect variations on the clock skew of a gigahertz microprocessor Due to the large die sizes and tight relative clock skew margins, the impact of interconnect manufacturing variations on the clock skew in today's gigahertz microprocessors can no longer be ignored. Unlike manufacturing variations in the devices, the impact of the interconnect manufacturing variations on IC timing performance cannot be captured by worst/best case corner point methods. Thus it is difficult to estimate the clock skew variability due to interconnect variations. In this paper we analyze the timing impact of several key statistically independent interconnect variations in a context-dependent manner by applying a previously reported interconnect variational order-reduction technique. The results show that the interconnect variations can cause up to 25% clock skew variability in a modern microprocessor design.
Parameter and State Model Reduction for Large-Scale Statistical Inverse Problems A greedy algorithm for the construction of a reduced model with reduction in both parameter and state is developed for an efficient solution of statistical inverse problems governed by partial differential equations with distributed parameters. Large-scale models are too costly to evaluate repeatedly, as is required in the statistical setting. Furthermore, these models often have high-dimensional parametric input spaces, which compounds the difficulty of effectively exploring the uncertainty space. We simultaneously address both challenges by constructing a projection-based reduced model that accepts low-dimensional parameter inputs and whose model evaluations are inexpensive. The associated parameter and state bases are obtained through a greedy procedure that targets the governing equations, model outputs, and prior information. The methodology and results are presented for groundwater inverse problems in one and two dimensions.
PiCAP: A parallel and incremental capacitance extraction considering stochastic process variation It is unknown how to include stochastic process variation into the fast multipole method (FMM) for full-chip capacitance extraction. This paper presents a parallel FMM extraction using stochastic polynomial expanded geometrical moments. It utilizes multiprocessors to evaluate in parallel the stochastic potential interaction and its matrix-vector product (MVP) with charge. Moreover, a generalized minimal residual (GMRES) method with deflation is modified to incrementally consider the nominal value and the variance. The overall extraction flow is called piCAP. Experiments show that the parallel MVP in piCAP is up to 3X faster than the serial MVP, and the incremental GMRES in piCAP is up to 15X faster than non-incremental GMRES methods.
Linearized Bregman iterations for compressed sensing Finding a solution of a linear equation Au = f with various minimization properties arises from many applications. One such application is compressed sensing, where an efficient and robust-to-noise algorithm to find a minimal l1-norm solution is needed. This means that the algorithm should be tailored for large scale and completely dense matrices A, while Au and A^T u can be computed by fast transforms and the solution we seek is sparse. Recently, a simple and fast algorithm based on linearized Bregman iteration was proposed in [28, 32] for this purpose. This paper is to analyze the convergence of linearized Bregman iterations and the minimization properties of their limit. Based on our analysis here, we derive also a new algorithm that is proven to be convergent with a rate. Furthermore, the new algorithm is simple and fast in approximating a minimal l1-norm solution of Au = f as shown by numerical simulations. Hence, it can be used as another choice of an efficient tool in compressed sensing.
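For reference, the linearized Bregman iteration in its commonly stated two-line form (a residual accumulation step followed by soft shrinkage); the step size delta and threshold mu are user-chosen, and in this sketch A is normalized so that delta = 1 is a safe choice.

```python
import numpy as np

def linearized_bregman(A, f, mu, delta, iters=3000):
    """Linearized Bregman iteration for approximating the minimal-l1-norm
    solution of A u = f (sketch of the commonly stated form of the iteration)."""
    n = A.shape[1]
    u = np.zeros(n)
    v = np.zeros(n)
    for _ in range(iters):
        v = v + A.T @ (f - A @ u)                                  # accumulate residual
        u = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)   # soft shrinkage
    return u

# Example: approximate recovery of a sparse vector from underdetermined measurements
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 100))
A /= np.linalg.norm(A, 2)          # spectral normalization so delta = 1 is stable
u0 = np.zeros(100)
u0[rng.choice(100, 5, replace=False)] = rng.normal(size=5)
f = A @ u0
u_hat = linearized_bregman(A, f, mu=5.0, delta=1.0)
print(np.linalg.norm(A @ u_hat - f), np.linalg.norm(u_hat - u0))
```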
The problem of linguistic approximation in clinical decision making This paper deals with the problem of linguistic approximation in a computerized system in the context of medical decision making. The general problem and a few application-oriented solutions have been treated in the literature. After a review of the main approaches (best fit, successive approximations, piecewise decomposition, preference set, fuzzy chopping) some of the unresolved problems are pointed out. The case of deciding upon various diagnostic abnormalities suggested by the analysis of the electrocardiographic signal is then put forward. The linguistic approximation method used in this situation is finally described. Its main merit is its simple (i.e., easily understood) linguistic output, which uses labels whose meaning is rather well established among the users (i.e., the physicians).
FPGA design for timing yield under process variations Yield loss due to timing failures results in diminished returns for field-programmable gate arrays (FPGAs), and is aggravated under increased process variations in scaled technologies. The uncertainty in the critical delay of a circuit under process variations exists because the delay of each logic element in the circuit is no longer deterministic. Traditionally, FPGAs have been designed to manage process variations through speed binning, which works well for inter-die variations, but not for intra-die variations resulting in reduced timing yield for FPGAs. FPGAs present a unique challenge because of their programmability and unknown end user application. In this paper, a novel architecture and computer-aided design co-design technique is proposed to improve the timing yield. Experimental results indicate that the use of proposed design technique can achieve timing yield improvement of up to 68%.
Fuzzy control of technological processes in APL2 A fuzzy control system has been developed to solve problems which are difficult or impossible to control with a proportional integral differential approach. According to system constraints, the fuzzy controller changes the importance of the rules and offers suitable variable values. The fuzzy controller testbed consists of simulator code to simulate the process dynamics of a production and distribution system and the fuzzy controller itself. The results of our tests confirm that this approach successfully reflects the experience gained from skilled manual operations. The simulation and control software was developed in APL2/2 running under OS/2. Several features of this product, especially multitasking, the ability to run AP124 and AP207 windows concurrently, and the ability to run concurrent APL2 sessions and interchange data among them were used extensively in the simulation process.
1.003854
0.005926
0.004246
0.003765
0.003603
0.003175
0.002798
0.001643
0.000577
0.000068
0.000001
0
0
0
Image Filtering, Edge Detection, and Edge Tracing Using Fuzzy Reasoning We characterize the problem of detecting edges in images as a fuzzy reasoning problem. The edge detection problem is divided into three stages: filtering, detection, and tracing. Images are filtered by applying fuzzy reasoning based on local pixel characteristics to control the degree of Gaussian smoothing. Filtered images are then subjected to a simple edge detection algorithm which evaluates the edge fuzzy membership value for each pixel, based on local image characteristics. Finally, pixels having high edge membership are traced and assembled into structures, again using fuzzy reasoning to guide the tracing process. The filtering, detection, and tracing algorithms are tested on several test images. Comparison is made with a standard edge detection technique.
Scale-Space and Edge Detection Using Anisotropic Diffusion A new definition of scale-space is suggested, and a class of algorithms used to realize a diffusion process is introduced. The diffusion coefficient is chosen to vary spatially in such a way as to encourage intraregion smoothing rather than interregion smoothing. It is shown that the 'no new maxima should be generated at coarse scales' property of conventional scale space is preserved. As the region boundaries in the approach remain sharp, a high-quality edge detector which successfully exploits global information is obtained. Experimental results are shown on a number of images. Parallel hardware implementations are made feasible because the algorithm involves elementary, local operations replicated over the image.
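A minimal sketch of the diffusion scheme described above, using the exponential conduction coefficient c(g) = exp(-(g/kappa)^2) so that smoothing is suppressed across strong gradients; the periodic boundary handling via np.roll and the parameter values are simplifications for illustration.

```python
import numpy as np

def perona_malik(img, n_iter=50, kappa=15.0, lam=0.2):
    """Anisotropic diffusion: intraregion smoothing with edge-stopping coefficient.
    kappa is the gradient scale (for 0-255 images); lam <= 0.25 keeps the scheme stable."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours (periodic border via np.roll for brevity)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```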
The concept of a linguistic variable and its application to approximate reasoning-III By a linguistic variable we mean a variable whose values are words or sentences in a natural or artificial language. For example, Age is a linguistic variable if its values are linguistic rather than numerical, i.e., young, not young, very young, quite young, old, not very old and not very young, etc., rather than 20, 21, 22, 23. In more specific terms, a linguistic variable is characterized by a quintuple (𝒳, T(𝒳), U, G, M) in which 𝒳 is the name of the variable; T(𝒳) is the term-set of 𝒳, that is, the collection of its linguistic values; U is a universe of discourse; G is a syntactic rule which generates the terms in T(𝒳); and M is a semantic rule which associates with each linguistic value X its meaning, M(X), where M(X) denotes a fuzzy subset of U. The meaning of a linguistic value X is characterized by a compatibility function, c : U → [0, 1], which associates with each u in U its compatibility with X. Thus, the compatibility of age 27 with young might be 0.7, while that of 35 might be 0.2. The function of the semantic rule is to relate the compatibilities of the so-called primary terms in a composite linguistic value-e.g., young and old in not very young and not very old-to the compatibility of the composite value. To this end, the hedges such as very, quite, extremely, etc., as well as the connectives and and or are treated as nonlinear operators which modify the meaning of their operands in a specified fashion. The
General formulation of formal grammars By extracting the basic properties common to the formal grammars appearing in the existing literature, we develop a general formulation of formal grammars. We define a pseudo grammar and derive from it the well-known probabilistic, fuzzy grammars and so on. Moreover, several interesting grammars such as ⊔∗ grammars, ⊔ ⊓ grammars, ⊔ ⊓ grammars, composite B-fuzzy grammars, and mixed fuzzy grammars, which have never appeared in any other papers before, are derived.
Regular Expressions for Linear Sequential Circuits
Hedges: A study in meaning criteria and the logic of fuzzy concepts
A framework for accounting for process model uncertainty in statistical static timing analysis In recent years, a large body of statistical static timing analysis and statistical circuit optimization techniques have emerged, providing important avenues to account for the increasing process variations in design. The realization of these statistical methods often demands the availability of statistical process variation models whose accuracy, however, is severely hampered by limitations in test structure design, test time and various sources of inaccuracy inevitably incurred in process characterization. Consequently, it is desired that statistical circuit analysis and optimization can be conducted based upon imprecise statistical variation models. In this paper, we present an efficient importance sampling based optimization framework that can translate the uncertainty in the process models to the uncertainty in parametric yield, thus offering the very much desired statistical best/worst-case circuit analysis capability accounting for unavoidable complexity in process characterization. Unlike the previously proposed statistical learning and probabilistic interval based techniques, our new technique efficiently computes tight bounds of the parametric circuit yields based upon bounds of statistical process model parameters while fully capturing correlation between various process variations. Furthermore, our new technique provides valuable guidance to process characterization. Examples are included to demonstrate the application of our general analysis framework under the context of statistical static timing analysis.
Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing.
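A compact sketch of the RANSAC loop for the simplest model, a 2-D line: draw a minimal sample, score the consensus set, and keep the best hypothesis. The thresholds and iteration counts are illustrative choices, not values from the paper.

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=0.05, rng=np.random.default_rng(0)):
    """RANSAC for 2-D line fitting: repeatedly fit a line to a minimal sample
    (2 points) and keep the model with the largest consensus set."""
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.hypot(d[0], d[1])
        if norm == 0:
            continue
        normal = np.array([-d[1], d[0]]) / norm
        dist = np.abs((points - p) @ normal)     # distance of every point to the line
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (p, q)
    return best_model, best_inliers

# Example: noisy line y = 0.5x + 1 plus a couple of gross outliers
xs = np.linspace(0, 1, 40)
pts = np.column_stack([xs, 0.5 * xs + 1 + 0.01 * np.random.default_rng(1).normal(size=40)])
pts = np.vstack([pts, [[0.2, 3.0], [0.8, -1.0]]])
model, inliers = ransac_line(pts)
print("inliers:", inliers.sum(), "of", len(pts))
```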
MULTILEVEL QUADRATURE FOR ELLIPTIC PARAMETRIC PARTIAL DIFFERENTIAL EQUATIONS IN CASE OF POLYGONAL APPROXIMATIONS OF CURVED DOMAINS Multilevel quadrature methods for parametric operator equations such as the multilevel (quasi-) Monte Carlo method resemble a sparse tensor product approximation between the spatial variable and the parameter. We employ this fact to reverse the multilevel quadrature method by applying differences of quadrature rules to finite element discretizations of increasing resolution. Besides being algorithmically more efficient if the underlying quadrature rules are nested, this way of performing the sparse tensor product approximation enables the easy use of nonnested and even adaptively refined finite element meshes. We moreover provide a rigorous error and regularity analysis addressing the variational crimes of using polygonal approximations of curved domains and numerical quadrature of the bilinear form. Our results facilitate the construction of efficient multilevel quadrature methods based on deterministic high order quadrature rules for the stochastic parameter. Numerical results in three spatial dimensions are provided to illustrate the approach.
Coding Algorithms for 3DTV—A Survey Research efforts on 3DTV technology have been strengthened worldwide recently, covering the whole media processing chain from capture to display. Different 3DTV systems rely on different 3D scene representations that integrate various types of data. Efficient coding of these data is crucial for the success of 3DTV. Compression of pixel-type data including stereo video, multiview video, and associated depth or disparity maps extends available principles of classical video coding. Powerful algorithms and open international standards for multiview video coding and coding of video plus depth data are available and under development, which will provide the basis for introduction of various 3DTV systems and services in the near future. Compression of 3D mesh models has also reached a high level of maturity. For static geometry, a variety of powerful algorithms are available to efficiently compress vertices and connectivity. Compression of dynamic 3D geometry is currently a more active field of research. Temporal prediction is an important mechanism to remove redundancy from animated 3D mesh sequences. Error resilience is important for transmission of data over error prone channels, and multiple description coding (MDC) is a suitable way to protect data. MDC of still images and 2D video has already been widely studied, whereas multiview video and 3D meshes have been addressed only recently. Intellectual property protection of 3D data by watermarking is a pioneering research area as well. The 3D watermarking methods in the literature are classified into three groups, considering the dimensions of the main components of scene representations and the resulting components after applying the algorithm. In general, 3DTV coding technology is maturing. Systems and services may enter the market in the near future. However, the research area is relatively young compared to coding of other types of media. Therefore, there is still a lot of room for improvement and new development of algorithms.
Is Gauss Quadrature Better than Clenshaw-Curtis? We compare the convergence behavior of Gauss quadrature with that of its younger brother, Clenshaw-Curtis. Seven-line MATLAB codes are presented that implement both methods, and experiments show that the supposed factor-of-2 advantage of Gauss quadrature is rarely realized. Theorems are given to explain this effect. First, following O'Hara and Smith in the 1960s, the phenomenon is explained as a consequence of aliasing of coefficients in Chebyshev expansions. Then another explanation is offered based on the interpretation of a quadrature formula as a rational approximation of $\log((z+1)/(z-1))$ in the complex plane. Gauss quadrature corresponds to Padé approximation at $z=\infty$. Clenshaw-Curtis quadrature corresponds to an approximation whose order of accuracy at $z=\infty$ is only half as high, but which is nevertheless equally accurate near $[-1,1]$.
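The paper's seven-line MATLAB codes are not reproduced here; the sketch below compares the two rules on a smooth integrand in Python, with Clenshaw-Curtis weights computed by the classical closed-form construction (the reference value of the integral is hard-coded for the demo).

```python
import numpy as np

def clenshaw_curtis(n):
    """Nodes and weights of the (n+1)-point Clenshaw-Curtis rule on [-1, 1]."""
    theta = np.pi * np.arange(n + 1) / n
    x = np.cos(theta)
    w = np.zeros(n + 1)
    v = np.ones(n - 1)
    if n % 2 == 0:
        w[0] = w[n] = 1.0 / (n ** 2 - 1)
        for k in range(1, n // 2):
            v -= 2.0 * np.cos(2 * k * theta[1:n]) / (4 * k ** 2 - 1)
        v -= np.cos(n * theta[1:n]) / (n ** 2 - 1)
    else:
        w[0] = w[n] = 1.0 / n ** 2
        for k in range(1, (n - 1) // 2 + 1):
            v -= 2.0 * np.cos(2 * k * theta[1:n]) / (4 * k ** 2 - 1)
    w[1:n] = 2.0 * v / n
    return x, w

f = lambda t: np.exp(-t ** 2)          # smooth test integrand
exact = 1.4936482656248540             # reference value of int_{-1}^{1} exp(-t^2) dt

for n in (4, 8, 16):
    xg, wg = np.polynomial.legendre.leggauss(n)      # n-point Gauss-Legendre
    xc, wc = clenshaw_curtis(n)                      # (n+1)-point Clenshaw-Curtis
    print(n, abs(wg @ f(xg) - exact), abs(wc @ f(xc) - exact))
```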
Practical RDF schema reasoning with annotated semantic web data Semantic Web data with annotations is becoming available, being YAGO knowledge base a prominent example. In this paper we present an approach to perform the closure of large RDF Schema annotated semantic web data using standard database technology. In particular, we exploit several alternatives to address the problem of computing transitive closure with real fuzzy semantic data extracted from YAGO in the PostgreSQL database management system. We benchmark the several alternatives and compare to classical RDF Schema reasoning, providing the first implementation of annotated RDF schema in persistent storage.
Heden's bound on maximal partial spreads We prove Heden's result that the deficiency δ of a maximal partial spread in PG(3, q) is greater than 1 + ½(1 + √5)√q unless δ − 1 is a multiple of p, where q = p^n. When q is odd and not a square, we are able to improve this lower bound to roughly √3 q.
Fuzzy optimization of units products in mix-product selection problem using fuzzy linear programming approach In this paper, the modified S-curve membership function methodology is used in a real-life industrial problem of mix product selection. This problem occurs in production planning management, where a decision maker plays an important role in making decisions in an uncertain environment. As analysts, we try to find a good enough solution for the decision maker to make a final decision. An industrial application of fuzzy linear programming (FLP) through the S-curve membership function has been investigated using a set of real-life data collected from a Chocolate Manufacturing Company. The problem of fuzzy product mix selection has been defined. The objective of this paper is to find the optimal units of products with a higher level of satisfaction, with vagueness as a key factor. Since several decisions were to be taken, a table of optimal units of products with respect to vagueness and degree of satisfaction has been defined to identify the solution with a higher level of units of products and a higher degree of satisfaction. The fuzzy outcome shows that higher units of products need not lead to a higher degree of satisfaction. The findings of this work indicate that the optimal decision depends on the vagueness factor in the fuzzy system of the mix product selection problem. Furthermore, the highest level of units of products is obtained when the vagueness is low.
1.105263
0.006175
0.000079
0.000008
0.000008
0.000001
0
0
0
0
0
0
0
0
On the equivalence of dynamically orthogonal and bi-orthogonal methods: Theory and numerical simulations. The Karhunen–Loève (KL) decomposition provides a low-dimensional representation for random fields as it is optimal in the mean square sense. Although for many stochastic systems of practical interest, described by stochastic partial differential equations (SPDEs), solutions possess this low-dimensional character, they also have a strongly time-dependent form and to this end a fixed-in-time basis may not describe the solution in an efficient way. Motivated by this limitation of standard KL expansion, Sapsis and Lermusiaux (2009) [26] developed the dynamically orthogonal (DO) field equations which allow for the simultaneous evolution of both the spatial basis where uncertainty ‘lives’ and the stochastic characteristics of uncertainty. Recently, Cheng et al. (2013) [28] introduced an alternative approach, the bi-orthogonal (BO) method, which performs the exact same tasks, i.e. it evolves the spatial basis and the stochastic characteristics of uncertainty. In the current work we examine the relation of the two approaches and we prove theoretically and illustrate numerically their equivalence, in the sense that one method is an exact reformulation of the other. We show this by deriving a linear and invertible transformation matrix described by a matrix differential equation that connects the BO and the DO solutions. We also examine a pathology of the BO equations that occurs when two eigenvalues of the solution cross, resulting in an instantaneous, infinite-speed, internal rotation of the computed spatial basis. We demonstrate that despite the instantaneous duration of the singularity this has important implications on the numerical performance of the BO approach. On the other hand, it is observed that the BO is more stable in nonlinear problems involving a relatively large number of modes. Several examples, linear and nonlinear, are presented to illustrate the DO and BO methods as well as their equivalence.
Numerical schemes for dynamically orthogonal equations of stochastic fluid and ocean flows The quantification of uncertainties is critical when systems are nonlinear and have uncertain terms in their governing equations or are constrained by limited knowledge of initial and boundary conditions. Such situations are common in multiscale, intermittent and non-homogeneous fluid and ocean flows. The dynamically orthogonal (DO) field equations provide an adaptive methodology to predict the probability density functions of such flows. The present work derives efficient computational schemes for the DO methodology applied to unsteady stochastic Navier-Stokes and Boussinesq equations, and illustrates and studies the numerical aspects of these schemes. Semi-implicit projection methods are developed for the mean and for the DO modes, and time-marching schemes of first to fourth order are used for the stochastic coefficients. Conservative second-order finite-volumes are employed in physical space with new advection schemes based on total variation diminishing methods. Other results include: (i) the definition of pseudo-stochastic pressures to obtain a number of pressure equations that is linear in the subspace size instead of quadratic; (ii) symmetric advection schemes for the stochastic velocities; (iii) the use of generalized inversion to deal with singular subspace covariances or deterministic modes; and (iv) schemes to maintain orthonormal modes at the numerical level. To verify our implementation and study the properties of our schemes and their variations, a set of stochastic flow benchmarks are defined including asymmetric Dirac and symmetric lock-exchange flows, lid-driven cavity flows, and flows past objects in a confined channel. Different Reynolds number and Grashof number regimes are employed to illustrate robustness. Optimal convergence under both time and space refinements is shown as well as the convergence of the probability density functions with the number of stochastic realizations.
A dynamically bi-orthogonal method for time-dependent stochastic partial differential equations II: Adaptivity and generalizations This is part II of our paper in which we propose and develop a dynamically bi-orthogonal method (DyBO) to study a class of time-dependent stochastic partial differential equations (SPDEs) whose solutions enjoy a low-dimensional structure. In part I of our paper [9], we derived the DyBO formulation and proposed numerical algorithms based on this formulation. Some important theoretical results regarding consistency and bi-orthogonality preservation were also established in the first part along with a range of numerical examples to illustrate the effectiveness of the DyBO method. In this paper, we focus on the computational complexity analysis and develop an effective adaptivity strategy to add or remove modes dynamically. Our complexity analysis shows that the ratio of computational complexities between the DyBO method and a generalized polynomial chaos method (gPC) is roughly of order O((m/N_p)^3) for a quadratic nonlinear SPDE, where m is the number of mode pairs used in the DyBO method and N_p is the number of elements in the polynomial basis in gPC. The effective dimensions of the stochastic solutions have been found to be small in many applications, so we can expect m is much smaller than N_p and computational savings of our DyBO method against gPC are dramatic. The adaptive strategy plays an essential role for the DyBO method to be effective in solving some challenging problems. Another important contribution of this paper is the generalization of the DyBO formulation for a system of time-dependent SPDEs. Several numerical examples are provided to demonstrate the effectiveness of our method, including the Navier-Stokes equations and the Boussinesq approximation with Brownian forcing.
A dynamically bi-orthogonal method for time-dependent stochastic partial differential equations I: Derivation and algorithms We propose a dynamically bi-orthogonal method (DyBO) to solve time dependent stochastic partial differential equations (SPDEs). The objective of our method is to exploit some intrinsic sparse structure in the stochastic solution by constructing the sparsest representation of the stochastic solution via a bi-orthogonal basis. It is well-known that the Karhunen-Loeve expansion (KLE) minimizes the total mean squared error and gives the sparsest representation of stochastic solutions. However, the computation of the KL expansion could be quite expensive since we need to form a covariance matrix and solve a large-scale eigenvalue problem. The main contribution of this paper is that we derive an equivalent system that governs the evolution of the spatial and stochastic basis in the KL expansion. Unlike other reduced model methods, our method constructs the reduced basis on-the-fly without the need to form the covariance matrix or to compute its eigendecomposition. In the first part of our paper, we introduce the derivation of the dynamically bi-orthogonal formulation for SPDEs, discuss several theoretical issues, such as the dynamic bi-orthogonality preservation and some preliminary error analysis of the DyBO method. We also give some numerical implementation details of the DyBO methods, including the representation of stochastic basis and techniques to deal with eigenvalue crossing. In the second part of our paper [11], we will present an adaptive strategy to dynamically remove or add modes, perform a detailed complexity analysis, and discuss various generalizations of this approach. An extensive range of numerical experiments will be provided in both parts to demonstrate the effectiveness of the DyBO method.
Time-dependent generalized polynomial chaos Generalized polynomial chaos (gPC) has non-uniform convergence and tends to break down for long-time integration. The reason is that the probability density distribution (PDF) of the solution evolves as a function of time. The set of orthogonal polynomials associated with the initial distribution will therefore not be optimal at later times, thus causing the reduced efficiency of the method for long-time integration. Adaptation of the set of orthogonal polynomials with respect to the changing PDF removes the error with respect to long-time integration. In this method new stochastic variables and orthogonal polynomials are constructed as time progresses. In the new stochastic variable the solution can be represented exactly by linear functions. This allows the method to use only low order polynomial approximations with high accuracy. The method is illustrated with a simple decay model for which an analytic solution is available and subsequently applied to the three mode Kraichnan-Orszag problem with favorable results.
A stochastic Galerkin method for the Euler equations with Roe variable transformation The Euler equations subject to uncertainty in the initial and boundary conditions are investigated via the stochastic Galerkin approach. We present a new fully intrusive method based on a variable transformation of the continuous equations. Roe variables are employed to get quadratic dependence in the flux function and a well-defined Roe average matrix that can be determined without matrix inversion.In previous formulations based on generalized polynomial chaos expansion of the physical variables, the need to introduce stochastic expansions of inverse quantities, or square roots of stochastic quantities of interest, adds to the number of possible different ways to approximate the original stochastic problem. We present a method where the square roots occur in the choice of variables, resulting in an unambiguous problem formulation.The Roe formulation saves computational cost compared to the formulation based on expansion of conservative variables. Moreover, the Roe formulation is more robust and can handle cases of supersonic flow, for which the conservative variable formulation fails to produce a bounded solution. For certain stochastic basis functions, the proposed method can be made more effective and well-conditioned. This leads to increased robustness for both choices of variables. We use a multi-wavelet basis that can be chosen to include a large number of resolution levels to handle more extreme cases (e.g. strong discontinuities) in a robust way. For smooth cases, the order of the polynomial representation can be increased for increased accuracy.
Stochastic Galerkin Matrices We investigate the structural, spectral, and sparsity properties of Stochastic Galerkin matrices as they arise in the discretization of linear differential equations with random coefficient functions. These matrices are characterized as the Galerkin representation of polynomial multiplication operators. In particular, it is shown that the global Galerkin matrix associated with complete polynomials cannot be diagonalized in the stochastically linear case.
Stochastic Solutions for the Two-Dimensional Advection-Diffusion Equation In this paper, we solve the two-dimensional advection-diffusion equation with random transport velocity. The generalized polynomial chaos expansion is employed to discretize the equation in random space while the spectral hp element method is used for spatial discretization. Numerical results which demonstrate the convergence of generalized polynomial chaos are presented. Specifically, it appears that the fast convergence rate in the variance is the same as that of the mean solution in the Jacobi-chaos unlike the Hermite-chaos. To this end, a new model to represent compact Gaussian distributions is also proposed.
A training algorithm for optimal margin classifiers A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of the classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms.
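The maximum-margin idea with supporting patterns is what a linear-kernel support vector machine implements; a minimal usage sketch with scikit-learn follows (the toy data and parameter values are illustrative, not the paper's experiments).

```python
import numpy as np
from sklearn.svm import SVC

# Toy 2-D, linearly separable data: two Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, size=(50, 2)),
               rng.normal( 2.0, 0.5, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear", C=10.0).fit(X, y)          # maximum-margin separating hyperplane
print("support vectors per class:", clf.n_support_)  # only patterns near the boundary matter
print("training accuracy:", clf.score(X, y))
```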
Informative Sensing Compressed sensing is a recent set of mathematical results showing that sparse signals can be exactly reconstructed from a small number of linear measurements. Interestingly, for ideal sparse signals with no measurement noise, random measurements allow perfect reconstruction while measurements based on principal component analysis (PCA) or independent component analysis (ICA) do not. At the same time, for other signal and noise distributions, PCA and ICA can significantly outperform random projections in terms of enabling reconstruction from a small number of measurements. In this paper we ask: given the distribution of signals we wish to measure, what are the optimal set of linear projections for compressed sensing? We consider the problem of finding a small number of linear projections that are maximally informative about the signal. Formally, we use the InfoMax criterion and seek to maximize the mutual information between the signal, x, and the (possibly noisy) projection y = Wx. We show that in general the optimal projections are not the principal components of the data nor random projections, but rather a seemingly novel set of projections that capture what is still uncertain about the signal, given the knowledge of distribution. We present analytic solutions for certain special cases including natural images. In particular, for natural images, the near-optimal projections are bandwise random, i.e., incoherent to the sparse bases at a particular frequency band but with more weights on the low-frequencies, which has a physical relation to the multi-resolution representation of images.
An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems. We propose a new fast algorithm for solving one of the standard approaches to ill-posed linear inverse problems (IPLIP), where a (possibly nonsmooth) regularizer is minimized under the constraint that the solution explains the observations sufficiently well. Although the regularizer and constraint are usually convex, several particular features of these problems (huge dimensionality, nonsmoothness) preclude the use of off-the-shelf optimization tools and have stimulated a considerable amount of research. In this paper, we propose a new efficient algorithm to handle one class of constrained problems (often known as basis pursuit denoising) tailored to image recovery applications. The proposed algorithm, which belongs to the family of augmented Lagrangian methods, can be used to deal with a variety of imaging IPLIP, including deconvolution and reconstruction from compressive observations (such as MRI), using either total-variation or wavelet-based (or, more generally, frame-based) regularization. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence sufficient conditions are known; we show that these conditions are satisfied by the proposed algorithm. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is a strong contender for the state-of-the-art.
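The abstract notes that the proposed algorithm is an instance of the alternating direction method of multipliers (ADMM). The sketch below is a generic ADMM loop for the unconstrained basis-pursuit-denoising (LASSO) objective 0.5*||Ax - b||^2 + lam*||x||_1, not the authors' constrained formulation or their specific solver; the penalty rho, the iteration count, and the synthetic A and b are placeholders.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Generic ADMM sketch for 0.5*||Ax - b||^2 + lam*||x||_1."""
    m, n = A.shape
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))    # factor once, reuse
    x = z = u = np.zeros(n)
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(iters):
        # x-update: quadratic subproblem solved with the cached factorization
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft(x + u, lam / rho)                   # proximal step for the l1 term
        u = u + x - z                                # scaled dual update
    return z

A = np.random.randn(40, 100)
x0 = np.zeros(100); x0[:5] = 3.0
b = A @ x0 + 0.01 * np.random.randn(40)
print(np.nonzero(np.abs(admm_lasso(A, b, lam=0.1)) > 1e-3)[0])
```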
Joint Design-Time and Post-Silicon Minimization of Parametric Yield Loss using Adjustable Robust Optimization Parametric yield loss due to variability can be effectively reduced by both design-time optimization strategies and by adjusting circuit parameters to the realizations of variable parameters. The two levels of tuning operate within a single variability budget, and because their effectiveness depends on the magnitude and the spatial structure of variability, their joint co-optimization is required. In this paper we develop a formal optimization algorithm for such co-optimization and link it to the control and measurement overhead via the formal notions of measurement and control complexity. We describe an optimization strategy that unifies design-time gate-level sizing and post-silicon adaptation using adaptive body bias at the chip level. The statistical formulation utilizes adjustable robust linear programming to derive the optimal policy for assigning body bias once the uncertain variables, such as gate length and threshold voltage, are known. Computational tractability is achieved by restricting the optimal body bias selection policy to be an affine function of the uncertain variables. We demonstrate good run-time and show that 5-35% savings in leakage power across the benchmark circuits are possible. Dependence of results on measurement and control complexity is studied and points of diminishing returns for both metrics are identified.
An efficient method for chip-level statistical capacitance extraction considering process variations with spatial correlation An efficient method is proposed to consider the process variations with spatial correlation, for chip-level capacitance extraction based on the window technique. In each window, an efficient technique of Hermite polynomial collocation (HPC) is presented to extract the statistical capacitance. The capacitance covariances between windows are then calculated to reflect the spatial correlation. The proposed method is practical for chip-level extraction task, and the experiments on full-path extraction exhibit its high accuracy and efficiency.
On Fuzziness, Its Homeland and Its Neighbour
1.105
0.036667
0.035
0.019091
0.005238
0.00125
0.000133
0.000011
0
0
0
0
0
0
Fuzzy time series prediction method based on fuzzy recurrent neural network One of the frequently used forecasting methods is the time series analysis. Time series analysis is based on the idea that past data can be used to predict the future data. Past data may contain imprecise and incomplete information coming from rapidly changing environment. Also the decisions made by the experts are subjective and rest on their individual competence. Therefore, it is more appropriate for the data to be presented by fuzzy numbers instead of crisp numbers. A weakness of traditional crisp time series forecasting methods is that they process only measurement based numerical information and cannot deal with the perception-based historical data represented by fuzzy numbers. Application of a fuzzy time series whose values are linguistic values, can overcome the mentioned weakness of traditional forecasting methods. In this paper we propose a fuzzy recurrent neural network (FRNN) based fuzzy time series forecasting method using genetic algorithm. The effectiveness of the proposed fuzzy time series forecasting method is tested on benchmark examples.
Which logic is the real fuzzy logic? This paper is a contribution to the discussion of the problem, whether there is a fuzzy logic that can be considered as the real fuzzy logic. We give reasons for taking IMTL, BL, ŁΠ and EvŁ (fuzzy logic with evaluated syntax) as those fuzzy logics that should be indeed taken as the real fuzzy logics.
Fuzzy control as a fuzzy deduction system An approach to fuzzy control based on fuzzy logic in narrow sense (fuzzy inference rules + fuzzy set of logical axioms) is proposed. This gives an interesting theoretical framework and suggests new tools for fuzzy control.
Quantitative fuzzy semantics. The point of departure in this paper is the definition of a language, L, as a fuzzy relation from a set of terms, T = {x}, to a universe of discourse, U = {y}. As a fuzzy relation, L is characterized by its membership function μ_L: T × U → [0,1], which associates with each ordered pair (x,y) its grade of membership, μ_L(x,y), in L. Given a particular x in T, the membership function μ_L(x,y) defines a fuzzy set, M(x), in U whose membership function is given by μ_{M(x)}(y) = μ_L(x,y). The fuzzy set M(x) is defined to be the meaning of the term x, with x playing the role of a name for M(x). If a term x in T is a concatenation of other terms in T, that is, x = x_1 ... x_n, x_i ∈ T, i = 1,...,n, then the meaning of x can be expressed in terms of the meanings of x_1,...,x_n through the use of a lambda-expression or by solving a system of equations in the membership functions of the x_i which are deduced from the syntax tree of x. The use of this approach is illustrated by examples.
Outline of a New Approach to the Analysis of Complex Systems and Decision Processes The approach described in this paper represents a substantive departure from the conventional quantitative techniques of system analysis. It has three main distinguishing features: 1) use of so-called ``linguistic'' variables in place of or in addition to numerical variables; 2) characterization of simple relations between variables by fuzzy conditional statements; and 3) characterization of complex relations by fuzzy algorithms. A linguistic variable is defined as a variable whose values are sentences in a natural or artificial language. Thus, if tall, not tall, very tall, very very tall, etc. are values of height, then height is a linguistic variable. Fuzzy conditional statements are expressions of the form IF A THEN B, where A and B have fuzzy meaning, e.g., IF x is small THEN y is large, where small and large are viewed as labels of fuzzy sets. A fuzzy algorithm is an ordered sequence of instructions which may contain fuzzy assignment and conditional statements, e.g., x = very small, IF x is small THEN Y is large. The execution of such instructions is governed by the compositional rule of inference and the rule of the preponderant alternative. By relying on the use of linguistic variables and fuzzy algorithms, the approach provides an approximate and yet effective means of describing the behavior of systems which are too complex or too ill-defined to admit of precise mathematical analysis.
Schopenhauer's Prolegomenon to Fuzziness “Prolegomenon” means something said in advance of something else. In this study, we posit that part of the work by Arthur Schopenhauer (1788–1860) can be thought of as a prolegomenon to the existing concept of “fuzziness.” His epistemic framework offers a comprehensive and surprisingly modern framework to study individual decision making and suggests a bridgeway from the Kantian program into the concept of fuzziness, which may have had its second prolegomenon in the work by Frege, Russell, Wittgenstein, Peirce and Black. In this context, Zadeh's seminal contribution can be regarded as the logical consequence of the Kant-Schopenhauer representation framework.
The fuzzy hyperbolic inequality index associated with fuzzy random variables The aim of this paper is focussed on the quantification of the extent of the inequality associated with fuzzy-valued random variables in general populations. For this purpose, the fuzzy hyperbolic inequality index associated with general fuzzy random variables is presented and a detailed discussion of some of the most valuable properties of this index (extending those for classical inequality indices) is given. Two examples illustrating the computation of the fuzzy inequality index are also considered. Some comments and suggestions are finally included.
Adaptive noise cancellation using type-2 fuzzy logic and neural networks. We describe in this paper the use of type-2 fuzzy logic for achieving adaptive noise cancellation. The objective of adaptive noise cancellation is to filter out an interference component by identifying a model between a measurable noise source and the corresponding un-measurable interference. We propose the use of type-2 fuzzy logic to find this model. The use of type-2 fuzzy logic is justified due to the high level of uncertainty of the process, which makes difficult to find appropriate parameter values for the membership functions.
Note on interval-valued fuzzy set In this note, we introduce the concept of the cut set of an interval-valued fuzzy set, discuss some of its properties, propose three decomposition theorems for interval-valued fuzzy sets, and investigate in detail further properties of the cut set and of the mapping H. These results can be used in setting up the basic theory of interval-valued fuzzy sets.
The collapsing method of defuzzification for discretised interval type-2 fuzzy sets This paper proposes a new approach for defuzzification of interval type-2 fuzzy sets. The collapsing method converts an interval type-2 fuzzy set into a type-1 representative embedded set (RES), whose defuzzified value closely approximates that of the type-2 set. As a type-1 set, the RES can then be defuzzified straightforwardly. The novel representative embedded set approximation (RESA), to which the method is inextricably linked, is expounded, stated and proved within this paper. It is presented in two forms: Simple RESA: this approximation deals with the simplest interval FOU, in which a vertical slice is discretised into 2 points. Interval RESA: this approximation concerns the case in which a vertical slice is discretised into 2 or more points. The collapsing method (simple RESA version) was tested for accuracy and speed, with excellent results on both criteria. The collapsing method proved more accurate than the Karnik-Mendel iterative procedure (KMIP) for an asymmetric test set. For both a symmetric and an asymmetric test set, the collapsing method outperformed the KMIP in relation to speed.
An interactive method for multiple criteria group decision analysis based on interval type-2 fuzzy sets and its application to medical decision making The theory of interval type-2 fuzzy sets provides an intuitive and computationally feasible way of addressing uncertain and ambiguous information in decision-making fields. The aim of this paper is to develop an interactive method for handling multiple criteria group decision-making problems, in which information about criterion weights is incompletely (imprecisely or partially) known and the criterion values are expressed as interval type-2 trapezoidal fuzzy numbers. With respect to the relative importance of multiple decision-makers and group consensus of fuzzy opinions, a hybrid averaging approach combining weighted averages and ordered weighted averages was employed to construct the collective decision matrix. An integrated programming model was then established based on the concept of signed distance-based closeness coefficients to determine the importance weights of criteria and the priority ranking of alternatives. Subsequently, an interactive procedure was proposed to modify the model according to the decision-makers' feedback on the degree of satisfaction toward undesirable solution results for the sake of gradually improving the integrated model. The feasibility and applicability of the proposed methods are illustrated with a medical decision-making problem of patient-centered medicine concerning basilar artery occlusion. A comparative analysis with other approaches was performed to validate the effectiveness of the proposed methodology.
Sparse fusion frames: existence and construction Fusion frame theory is an emerging mathematical theory that provides a natural framework for performing hierarchical data processing. A fusion frame can be regarded as a frame-like collection of subspaces in a Hilbert space, and thereby generalizes the concept of a frame for signal representation. However, when the signal and/or subspace dimensions are large, the decomposition of the signal into its fusion frame measurements through subspace projections typically requires a large number of additions and multiplications, and this makes the decomposition intractable in applications with limited computing budget. To address this problem, in this paper, we introduce the notion of a sparse fusion frame, that is, a fusion frame whose subspaces are generated by orthonormal basis vectors that are sparse in a `uniform basis' over all subspaces, thereby enabling low-complexity fusion frame decompositions. We study the existence and construction of sparse fusion frames, but our focus is on developing simple algorithmic constructions that can easily be adopted in practice to produce sparse fusion frames with desired (given) operators. By a desired (or given) operator we simply mean one that has a desired (or given) set of eigenvalues for the fusion frame operator. We start by presenting a complete characterization of Parseval fusion frames in terms of the existence of special isometries defined on an encompassing Hilbert space. We then introduce two general methodologies to generate new fusion frames from existing ones, namely the Spatial Complement Method and the Naimark Complement Method, and analyze the relationship between the parameters of the original and the new fusion frame. We proceed by establishing existence conditions for 2-sparse fusion frames for any given fusion frame operator, for which the eigenvalues are greater than or equal to two. We then provide an easily implementable algorithm for computing such 2-sparse fusion frames.
Low-dimensional signal-strength fingerprint-based positioning in wireless LANs Accurate location awareness is of paramount importance in most ubiquitous and pervasive computing applications. Numerous solutions for indoor localization based on IEEE802.11, bluetooth, ultrasonic and vision technologies have been proposed. This paper introduces a suite of novel indoor positioning techniques utilizing signal-strength (SS) fingerprints collected from access points (APs). Our first approach employs a statistical representation of the received SS measurements by means of a multivariate Gaussian model by considering a discretized grid-like form of the indoor environment and by computing probability distribution signatures at each cell of the grid. At run time, the system compares the signature at the unknown position with the signature of each cell by using the Kullback-Leibler Divergence (KLD) between their corresponding probability densities. Our second approach applies compressive sensing (CS) to perform sparsity-based accurate indoor localization, while reducing significantly the amount of information transmitted from a wireless device, possessing limited power, storage, and processing capabilities, to a central server. The performance evaluation which was conducted at the premises of a research laboratory and an aquarium under real-life conditions, reveals that the proposed statistical fingerprinting and CS-based localization techniques achieve a substantial localization accuracy.
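A minimal sketch of the first (statistical fingerprinting) approach, assuming per-cell multivariate Gaussian signatures: the closed-form KL divergence between Gaussians is used to pick the most similar cell. The cell names, means, and covariances below are invented for illustration.

```python
import numpy as np

def kld_gauss(mu0, S0, mu1, S1):
    """KL( N(mu0,S0) || N(mu1,S1) ) in closed form."""
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Hypothetical fingerprint map: per-cell mean/covariance of AP signal strengths (dBm).
cells = {
    "cell_A": (np.array([-60.0, -72.0]), np.diag([4.0, 6.0])),
    "cell_B": (np.array([-55.0, -80.0]), np.diag([5.0, 5.0])),
}
mu_obs, S_obs = np.array([-58.0, -74.0]), np.diag([4.5, 5.5])  # run-time signature
best = min(cells, key=lambda c: kld_gauss(mu_obs, S_obs, *cells[c]))
print("estimated position:", best)
```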
3D visual experience oriented cross-layer optimized scalable texture plus depth based 3D video streaming over wireless networks. •A 3D experience oriented 3D video cross-layer optimization method is proposed.•Networking-related 3D visual experience model for 3D video streaming is presented.•3D video characteristics are fully considered in the cross-layer optimization.•MAC layer channel allocation and physical layer MCS are systematically optimized.•Results show that our method obtains superior 3D visual experience to others.
1.201353
0.100677
0.100677
0.010071
0.001286
0.000384
0.000157
0.000082
0.000034
0.000008
0
0
0
0
Practical Implementation of Stochastic Parameterized Model Order Reduction via Hermite Polynomial Chaos This paper describes the stochastic model order reduction algorithm via stochastic Hermite polynomials from the practical implementation perspective. Comparing with existing work on stochastic interconnect analysis and parameterized model order reduction, we generalized the input variation representation using polynomial chaos (PC) to allow for accurate modeling of non-Gaussian input variations. We also explore the implicit system representation using sub-matrices and improved the efficiency for solving the linear equations utilizing block matrix structure of the augmented system. Experiments show that our algorithm matches with Monte Carlo methods very well while keeping the algorithm effective. And the PC representation of non-Gaussian variables gains more accuracy than Taylor representation used in previous work (Wang et al., 2004).
Sensitivity analysis and model order reduction for random linear dynamical systems We consider linear dynamical systems defined by differential algebraic equations. The associated input-output behaviour is given by a transfer function in the frequency domain. Physical parameters of the dynamical system are replaced by random variables to quantify uncertainties. We analyse the sensitivity of the transfer function with respect to the random variables. Total sensitivity coefficients are computed by a nonintrusive and by an intrusive method based on the expansions in series of the polynomial chaos. In addition, a reduction of the state space is applied in the intrusive method. Due to the sensitivities, we perform a model order reduction within the random space by changing unessential random variables back to constants. The error of this reduction is analysed. We present numerical simulations of a test example modelling a linear electric network.
Statistical Analysis Of Power Grid Networks Considering Lognormal Leakage Current Variations With Spatial Correlation As the technology scales into 90nm and below, process-induced variations become more pronounced. In this paper, we propose an efficient stochastic method for analyzing the voltage drop variations of on-chip power grid networks, considering log-normal leakage current variations with spatial correlation. The new analysis is based on the Hermite polynomial chaos (PC) representation of random processes. Different from the existing Hermite PC based method for power grid analysis, which models all the random variations as Gaussian processes without considering spatial correlation, the new method focuses on the impacts of stochastic sub-threshold leakage currents, which are modeled as log-normal distribution random variables, on the power grid voltage variations. To consider the spatial correlation, we apply orthogonal decomposition to map the correlated random variables into independent variables. Our experiment results show that the new method is more accurate than the Gaussian-only Hermite PC method using the Taylor expansion method for analyzing leakage current variations, and two orders of magnitude faster than the Monte Carlo method with small variance errors. We also show that the spatial correlation may lead to large errors if not being considered in the statistical analysis.
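As a hedged side note on the modeling ingredient used here, a lognormal variable exp(mu + sigma*Z) with Z standard normal has a closed-form expansion in the probabilists' Hermite chaos, with coefficients c_k = exp(mu + sigma^2/2) * sigma^k / k!. The sketch below checks this against Gauss-Hermite quadrature; mu, sigma, and the expansion order are made-up values, and this one-dimensional example ignores the paper's spatial correlation and orthogonal decomposition.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi, exp

mu, sigma, order = 0.0, 0.4, 6          # hypothetical lognormal leakage model

# Closed-form HermiteE (probabilists') PC coefficients of exp(mu + sigma*Z)
c_closed = [exp(mu + 0.5 * sigma**2) * sigma**k / factorial(k)
            for k in range(order + 1)]

# Cross-check by Gauss-Hermite quadrature: c_k = E[X * He_k(Z)] / k!
z, w = He.hermegauss(40)
w = w / sqrt(2 * pi)                     # normalise to the standard normal density
X = np.exp(mu + sigma * z)
c_quad = [float(np.sum(w * X * He.hermeval(z, np.eye(order + 1)[k])) / factorial(k))
          for k in range(order + 1)]

print(np.allclose(c_closed, c_quad))     # expected: True
```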
Stochastic extended Krylov subspace method for variational analysis of on-chip power grid networks In this paper, we propose a novel stochastic method for analyzing the voltage drop variations of on-chip power grid networks with log-normal leakage current variations. The new method, called StoEKS, applies Hermite polynomial chaos (PC) to represent the random variables in both power grid networks and input leakage currents. But different from the existing Hermite PC based stochastic simulation method, the extended Krylov subspace method (EKS) is employed to compute variational responses using the augmented matrices consisting of the coefficients of Hermite polynomials. Our contribution lies in the combination of the statistical spectrum method with the extended Krylov subspace method to fast solve the variational circuit equations for the first time. Experimental results show that the proposed method is about two orders of magnitude faster than the existing Hermite PC based simulation method and more orders of magnitude faster than Monte Carlo methods with marginal errors. StoEKS also can analyze much larger circuits than the existing Hermite PC based methods.
Eigenvalues of the Jacobian of a Galerkin-Projected Uncertain ODE System Projection onto polynomial chaos (PC) basis functions is often used to reformulate a system of ordinary differential equations (ODEs) with uncertain parameters and initial conditions as a deterministic ODE system that describes the evolution of the PC modes. The deterministic Jacobian of this projected system is different and typically much larger than the random Jacobian of the original ODE system. This paper shows that the location of the eigenvalues of the projected Jacobian is largely determined by the eigenvalues of the original Jacobian, regardless of PC order or choice of orthogonal polynomials. Specifically, the eigenvalues of the projected Jacobian always lie in the convex hull of the numerical range of the Jacobian of the original system.
Stochastic Power Grid Analysis Considering Process Variations In this paper, we investigate the impact of interconnect and device process variations on voltage fluctuations in power grids. We consider random variations in the power grid's electrical parameters as spatial stochastic processes and propose a new and efficient method to compute the stochastic voltage response of the power grid. Our approach provides an explicit analytical representation of the stochastic voltage response using orthogonal polynomials in a Hilbert space. The approach has been implemented in a prototype software called OPERA (Orthogonal Polynomial Expansions for Response Analysis). Use of OPERA on industrial power grids demonstrated speed-ups of up to two orders of magnitude. The results also show a significant variation of about ±35% in the nominal voltage drops at various nodes of the power grids and demonstrate the need for variation-aware power grid analysis.
Why Quasi-Monte Carlo is Better Than Monte Carlo or Latin Hypercube Sampling for Statistical Circuit Analysis At the nanoscale, no circuit parameters are truly deterministic; most quantities of practical interest present themselves as probability distributions. Thus, Monte Carlo techniques comprise the strategy of choice for statistical circuit analysis. There are many challenges in applying these techniques efficiently: circuit size, nonlinearity, simulation time, and required accuracy often conspire to make Monte Carlo analysis expensive and slow. Are we-the integrated circuit community-alone in facing such problems? As it turns out, the answer is “no.” Problems in computational finance share many of these characteristics: high dimensionality, profound nonlinearity, stringent accuracy requirements, and expensive sample evaluation. We perform a detailed experimental study of how one celebrated technique from that domain-quasi-Monte Carlo (QMC) simulation-can be adapted effectively for fast statistical circuit analysis. In contrast to traditional pseudorandom Monte Carlo sampling, QMC uses a (shorter) sequence of deterministically chosen sample points. We perform rigorous comparisons with both Monte Carlo and Latin hypercube sampling across a set of digital and analog circuits, in 90 and 45 nm technologies, varying in size from 30 to 400 devices. We consistently see superior performance from QMC, giving 2× to 8× speedup over conventional Monte Carlo for roughly 1% accuracy levels. We present rigorous theoretical arguments that support and explain this superior performance of QMC. The arguments also reveal insights regarding the (low) latent dimensionality of these circuit problems; for example, we observe that over half of the variance in our test circuits is from unidimensional behavior. This analysis provides quantitative support for recent enthusiasm in dimensionality reduction of circuit problems.
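A small sketch of the core comparison, assuming SciPy's qmc module and a smooth toy integrand with known mean 1 (nothing here reproduces the paper's circuit benchmarks): scrambled Sobol' points versus pseudorandom Monte Carlo for the same sample budget.

```python
import numpy as np
from scipy.stats import qmc

# Estimate E[f(U)] for U ~ Uniform([0,1]^d); the exact value is 1 for this f.
d, n = 8, 2**12
f = lambda u: np.prod(1.0 + 0.5 * (u - 0.5), axis=1)    # smooth test integrand

u_mc = np.random.default_rng(0).random((n, d))           # pseudorandom Monte Carlo
u_qmc = qmc.Sobol(d, scramble=True, seed=0).random(n)    # scrambled Sobol' QMC

print("MC  error:", abs(f(u_mc).mean() - 1.0))
print("QMC error:", abs(f(u_qmc).mean() - 1.0))
```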
A Quasi-Convex Optimization Approach to Parameterized Model Order Reduction In this paper, an optimization-based model order reduction (MOR) framework is proposed. The method involves setting up a quasi-convex program that solves a relaxation of the optimal H∞ norm MOR problem. The method can generate guaranteed stable and passive reduced models and is very flexible in imposing additional constraints such as exact matching of specific frequency response samples. The proposed optimization-based approach is also extended to solve the parameterized model-reduction problem (PMOR). The proposed method is compared to existing moment matching and optimization-based MOR methods in several examples. PMOR models for large RF inductors over substrate and power-distribution grid are also constructed.
Is Gauss Quadrature Better than Clenshaw-Curtis? We compare the convergence behavior of Gauss quadrature with that of its younger brother, Clenshaw-Curtis. Seven-line MATLAB codes are presented that implement both methods, and experiments show that the supposed factor-of-2 advantage of Gauss quadrature is rarely realized. Theorems are given to explain this effect. First, following O'Hara and Smith in the 1960s, the phenomenon is explained as a consequence of aliasing of coefficients in Chebyshev expansions. Then another explanation is offered based on the interpretation of a quadrature formula as a rational approximation of $\log((z+1)/(z-1))$ in the complex plane. Gauss quadrature corresponds to Padé approximation at $z=\infty$. Clenshaw-Curtis quadrature corresponds to an approximation whose order of accuracy at $z=\infty$ is only half as high, but which is nevertheless equally accurate near $[-1,1]$.
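A sketch of the comparison on a smooth integrand: Gauss-Legendre nodes from NumPy against a NumPy port of the standard clencurt construction of Clenshaw-Curtis nodes and weights. The test function and the n values are arbitrary choices; the reference value is the integral of exp(-x^2) over [-1,1], i.e. sqrt(pi)*erf(1).

```python
import numpy as np

def clenshaw_curtis(n):
    """Nodes/weights on [-1,1] with n+1 points (port of the classical clencurt recipe)."""
    theta = np.pi * np.arange(n + 1) / n
    x = np.cos(theta)
    w = np.zeros(n + 1)
    v = np.ones(n - 1)
    if n % 2 == 0:
        w[0] = w[n] = 1.0 / (n**2 - 1)
        for k in range(1, n // 2):
            v -= 2.0 * np.cos(2 * k * theta[1:n]) / (4 * k**2 - 1)
        v -= np.cos(n * theta[1:n]) / (n**2 - 1)
    else:
        w[0] = w[n] = 1.0 / n**2
        for k in range(1, (n - 1) // 2 + 1):
            v -= 2.0 * np.cos(2 * k * theta[1:n]) / (4 * k**2 - 1)
    w[1:n] = 2.0 * v / n
    return x, w

f = lambda x: np.exp(-x**2)             # smooth test integrand on [-1,1]
exact = 1.493648265624854               # sqrt(pi)*erf(1)
for n in (8, 16, 32):
    xg, wg = np.polynomial.legendre.leggauss(n)
    xc, wc = clenshaw_curtis(n)
    print(n, abs(wg @ f(xg) - exact), abs(wc @ f(xc) - exact))
```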
Synthesizing a representative critical path for post-silicon delay prediction Several approaches to post-silicon adaptation require feedback from a replica of the nominal critical path, whose variations are intended to reflect those of the entire circuit after manufacturing. For realistic circuits, where the number of critical paths can be large, the notion of using a single critical path is too simplistic. This paper overcomes this problem by introducing the idea of synthesizing a representative critical path (RCP), which captures these complexities of the variations. We first prove that the requirement on the RCP is that it should be highly correlated with the circuit delay. Next, we present two novel algorithms to automatically build the RCP. Our experimental results demonstrate that over a number of samples of manufactured circuits, the delay of the RCP captures the worst case delay of the manufactured circuit. The average prediction error of all circuits is shown to be below 2.8% for both approaches. For both our approach and the critical path replica method, it is essential to guard-band the prediction to ensure pessimism: our approach requires a guard band 30% smaller than for the critical path replica method.
Sharp thresholds for high-dimensional and noisy recovery of sparsity The problem of consistently estimating the sparsity pattern of a vector β* ∈ R^p based on observations contaminated by noise arises in various contexts, including subset selection in regression, structure estimation in graphical models, sparse approximation, and signal denoising. Unfortunately, the natural optimization-theoretic formulation involves ℓ0 constraints, which leads to NP-hard problems in general; this intractability motivates the use of relaxations based on ℓ1 constraints. We analyze the behavior of ℓ1-constrained quadratic programming (QP), also referred to as the Lasso, for recovering the sparsity pattern. Our main result is to establish a sharp relation between the problem dimension p, the number s of non-zero elements in β*, and the number of observations n that are required for reliable recovery. For a broad class of Gaussian ensembles satisfying mutual incoherence conditions, we establish existence and compute explicit values of thresholds θℓ and θu with the following properties: for any ν > 0, if n > 2s(θu + ν) log(p − s) + s + 1, then the Lasso succeeds in recovering the sparsity pattern with probability converging to one for large problems, whereas for n < 2s(θℓ − ν) log(p − s) + s + 1, the probability of successful recovery converges to zero. For the special case of the uniform Gaussian ensemble, we show that θℓ = θu = 1, so that the threshold is sharp and exactly determined.
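A hedged experimental sketch of the kind of sparsity-recovery study the result describes, using scikit-learn's Lasso on a Gaussian ensemble; the dimensions, regularization strength alpha, support threshold, and n grid are ad hoc choices meant only to illustrate the setup, not to reproduce the exact thresholds.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
p, s = 256, 8                                    # made-up dimension and sparsity
beta = np.zeros(p)
beta[rng.choice(p, s, replace=False)] = 1.0

def exact_support_recovered(n):
    X = rng.normal(size=(n, p)) / np.sqrt(n)     # uniform Gaussian ensemble
    y = X @ beta + 0.1 * rng.normal(size=n)
    est = Lasso(alpha=0.02, max_iter=10000).fit(X, y).coef_
    return set(np.flatnonzero(np.abs(est) > 0.1)) == set(np.flatnonzero(beta))

# Success frequency should rise as n crosses the order of 2*s*log(p - s).
for n in (40, 80, 160, 320):
    print(n, np.mean([exact_support_recovered(n) for _ in range(20)]))
```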
A conceptual framework for fuzzy query processing—A step toward very intelligent database systems This paper is concerned with techniques for fuzzy query processing in a database system. By a fuzzy query we mean a query which uses imprecise or fuzzy predicates (e.g. AGE = “VERY YOUNG”, SALARY = “MORE OR LESS HIGH”, YEAR-OF-EMPLOYMENT = “RECENT”, SALARY ⪢ 20,000, etc.). As a basis for fuzzy query processing, a fuzzy retrieval system based on the theory of fuzzy sets and linguistic variables is introduced. In our system model, the first step in processing fuzzy queries consists of assigning meaning to fuzzy terms (linguistic values), of a term-set, used for the formulation of a query. The meaning of a fuzzy term is defined as a fuzzy set in a universe of discourse which contains the numerical values of a domain of a relation in the system database.
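A minimal sketch of evaluating a fuzzy predicate such as AGE = "VERY YOUNG": the membership function shape, the modeling of "very" as Zadeh's concentration (squaring), and the toy relation are all assumptions for illustration.

```python
def mu_young(age):
    """Membership of 'YOUNG' as a decreasing piecewise-linear fuzzy set."""
    if age <= 25:
        return 1.0
    if age >= 45:
        return 0.0
    return (45.0 - age) / 20.0

def very(mu):
    """Zadeh's concentration modifier for the hedge 'very'."""
    return mu ** 2

# Fuzzy query AGE = "VERY YOUNG": rank tuples by their membership grade.
employees = {"ann": 23, "bob": 31, "eve": 40}     # hypothetical relation
for name, age in employees.items():
    print(name, round(very(mu_young(age)), 3))
```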
Compressed sensing with probabilistic measurements: a group testing solution Detection of defective members of large populations has been widely studied in the statistics community under the name "group testing", a problem which dates back to World War II when it was suggested for syphilis screening. There, the main interest is to identify a small number of infected people among a large population using collective samples. In viral epidemics, one way to acquire collective samples is by sending agents inside the population. While in classical group testing, it is assumed that the sampling procedure is fully known to the reconstruction algorithm, in this work we assume that the decoder possesses only partial knowledge about the sampling process. This assumption is justified by observing the fact that in a viral sickness, there is a chance that an agent remains healthy despite having contact with an infected person. Therefore, the reconstruction method has to cope with two different types of uncertainty; namely, identification of the infected population and the partially unknown sampling procedure. In this work, by using a natural probabilistic model for "viral infections", we design non-adaptive sampling procedures that allow successful identification of the infected population with overwhelming probability 1 - o(1). We propose both probabilistic and explicit design procedures that require a "small" number of agents to single out the infected individuals. More precisely, for a contamination probability p, the number of agents required by the probabilistic and explicit designs for identification of up to k infected members is bounded by m = O(k^2 (log n)/p^2) and m = O(k^2 (log^2 n)/p^2), respectively. In both cases, a simple decoder is able to successfully identify the infected population in time O(mn).
A Machine Learning Approach to Personal Pronoun Resolution in Turkish.
1.041172
0.04
0.028415
0.016534
0.0104
0.00505
0.001099
0.000561
0.000256
0.000037
0
0
0
0
On similarity and inclusion measures between type-2 fuzzy sets with an application to clustering In this paper we define similarity and inclusion measures between type-2 fuzzy sets. We then discuss their properties and also consider the relationships between them. Several examples are used to present the calculation of these similarity and inclusion measures between type-2 fuzzy sets. We finally combine the proposed similarity measures with Yang and Shih's [M.S. Yang, H.M. Shih, Cluster analysis based on fuzzy relations, Fuzzy Sets and Systems 120 (2001) 197-212] algorithm as a clustering method for type-2 fuzzy data. These clustering results are compared with Hung and Yang's [W.L. Hung, M.S. Yang, Similarity measures between type-2 fuzzy sets, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 12 (2004) 827-841] results. According to different @a-level, these clustering results consist of a better hierarchical tree.
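As a hedged illustration of similarity between interval type-2 fuzzy sets, the sketch below computes a commonly used Jaccard-style measure on discretised lower/upper membership functions; this is a generic measure and not necessarily the specific similarity or inclusion measures defined in the paper, and the triangular sets are invented.

```python
import numpy as np

def jaccard_it2(A_lo, A_hi, B_lo, B_hi):
    """Jaccard-style similarity for interval type-2 fuzzy sets sampled on a grid."""
    num = np.minimum(A_hi, B_hi).sum() + np.minimum(A_lo, B_lo).sum()
    den = np.maximum(A_hi, B_hi).sum() + np.maximum(A_lo, B_lo).sum()
    return num / den

x = np.linspace(0, 10, 101)
tri = lambda c, w: np.clip(1 - np.abs(x - c) / w, 0, 1)   # triangular primary shape
A_lo, A_hi = 0.8 * tri(4, 2), tri(4, 3)                   # lower/upper memberships of A
B_lo, B_hi = 0.8 * tri(5, 2), tri(5, 3)                   # lower/upper memberships of B
print(jaccard_it2(A_lo, A_hi, B_lo, B_hi))
```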
Hybrid Bayesian estimation tree learning with discrete and fuzzy labels Classical decision tree model is one of the classical machine learning models for its simplicity and effectiveness in applications. However, compared to the DT model, probability estimation trees (PETs) give a better estimation on class probability. In order to get a good probability estimation, we usually need large trees which are not desirable with respect to model transparency. Linguistic decision tree (LDT) is a PET model based on label semantics. Fuzzy labels are used for building the tree and each branch is associated with a probability distribution over classes. If there is no overlap between neighboring fuzzy labels, these fuzzy labels then become discrete labels and a LDT with discrete labels becomes a special case of the PET model. In this paper, two hybrid models by combining the naive Bayes classifier and PETs are proposed in order to build a model with good performance without losing too much transparency. The first model uses naive Bayes estimation given a PET, and the second model uses a set of small-sized PETs as estimators by assuming the independence between these trees. Empirical studies on discrete and fuzzy labels show that the first model outperforms the PET model at shallow depth, and the second model is equivalent to the naive Bayes and PET.
Some new distance measures for type-2 fuzzy sets and distance measure based ranking for group decision making problems In this paper, we propose some distance measures between type-2 fuzzy sets, and also a new family of utmost distance measures are presented. Several properties of different proposed distance measures have been introduced. Also, we have introduced a new ranking method for the ordering of type-2 fuzzy sets based on the proposed distance measure. The proposed ranking method satisfies the reasonable properties for the ordering of fuzzy quantities. Some properties such as robustness, order relation have been presented. Limitations of existing ranking methods have been studied. Further for practical use, a new method for selecting the best alternative, for group decision making problems is proposed. This method is illustrated with a numerical example.
Piecewise-linear approximation of non-linear models based on probabilistically/possibilistically interpreted intervals' numbers (INs) Linear models are preferable due to simplicity. Nevertheless, non-linear models often emerge in practice. A popular approach for modeling nonlinearities is by piecewise-linear approximation. Inspired from fuzzy inference systems (FISs) of Takagi-Sugeno-Kang (TSK) type as well as from Kohonen's self-organizing map (KSOM) this work introduces a genetically optimized synergy based on intervals' numbers, or INs for short. The latter (INs) are interpreted here either probabilistically or possibilistically. The employment of mathematical lattice theory is instrumental. Advantages include accommodation of granular data, introduction of tunable nonlinearities, and induction of descriptive decision-making knowledge (rules) from the data. Both efficiency and effectiveness are demonstrated in three benchmark problems. The proposed computational method demonstrates invariably a better capacity for generalization; moreover, it learns orders-of-magnitude faster than alternative methods inducing clearly fewer rules.
Relationships between entropy and similarity measure of interval-valued intuitionistic fuzzy sets The concept of entropy of interval-valued intuitionistic fuzzy set (IvIFS) is first introduced. The close relationships between entropy and the similarity measure of interval-valued intuitionistic fuzzy sets are discussed in detail. We also obtain some important theorems by which entropy and similarity measure of IvIFSs can be transformed into each other based on their axiomatic definitions. Simultaneously, some formulae to calculate entropy and similarity measure of IvIFSs are put forward. © 2010 Wiley Periodicals, Inc.
Some information measures for interval-valued intuitionistic fuzzy sets A new information entropy measure of interval-valued intuitionistic fuzzy set (IvIFS) is proposed by using membership interval and non-membership interval of IvIFS, which complies with the extended form of Deluca-Termini axioms for fuzzy entropy. Then the cross-entropy of IvIFSs is presented and the relationship between the proposed entropy measures and the existing information measures of IvIFSs is discussed. Additionally, some numerical examples are given to illustrate the applications of the proposed entropy and cross-entropy of IvIFSs to pattern recognition and decision-making.
Dynamic system modeling using a recurrent interval-valued fuzzy neural network and its hardware implementation This paper first proposes a new recurrent interval-valued fuzzy neural network (RIFNN) for dynamic system modeling. A new hardware implementation technique for the RIFNN using a field-programmable gate array (FPGA) chip is then proposed. The antecedent and consequent parts in an RIFNN use interval-valued fuzzy sets in order to increase the network noise resistance ability. A new recurrent structure is proposed in RIFNN, with the recurrent loops enabling it to handle dynamic system processing problems. An RIFNN is constructed from structure and parameter learning. For hardware implementation of the RIFNN, the pipeline technique and a new circuit for type-reduction operation are proposed to improve the chip performance. Simulations and comparisons with various feedforward and recurrent fuzzy neural networks verify the performance of the RIFNN under noisy conditions.
An Interval Type-2 Fuzzy Logic System To Translate Between Emotion-Related Vocabularies This paper describes a novel experiment that demonstrates the feasibility of a fuzzy logic (FL) representation of emotion-related words used to translate between different emotional vocabularies. Type-2 fuzzy sets were encoded using input from web-based surveys that prompted users with emotional words and asked them to enter an interval using a double slider. The similarity of the encoded fuzzy sets was computed and it was shown that a reliable mapping can be made between a large vocabulary of emotional words and a smaller vocabulary of words naming seven emotion categories. Though the mapping results are comparable to Euclidean distance in the valence/activation/dominance space, the FL representation has several benefits that are discussed.
Hybrid intelligent systems for time series prediction using neural networks, fuzzy logic, and fractal theory In this paper, we describe a new method for the estimation of the fractal dimension of a geometrical object using fuzzy logic techniques. The fractal dimension is a mathematical concept, which measures the geometrical complexity of an object. The algorithms for estimating the fractal dimension calculate a numerical value using as data a time series for the specific problem. This numerical (crisp) value gives an idea of the complexity of the geometrical object (or time series). However, there is an underlying uncertainty in the estimation of the fractal dimension because we use only a sample of points of the object, and also because the numerical algorithms for the fractal dimension are not completely accurate. For this reason, we have proposed a new definition of the fractal dimension that incorporates the concept of a fuzzy set. This new definition can be considered a weaker definition (but more realistic) of the fractal dimension, and we have named this the "fuzzy fractal dimension." We can apply this new definition of the fractal dimension in conjunction with soft computing techniques for the problem of time series prediction. We have developed hybrid intelligent systems combining neural networks, fuzzy logic, and the fractal dimension, for the problem of time series prediction, and we have achieved very good results.
Construction of interval-valued fuzzy entropy invariant by translations and scalings In this paper, we propose a method to construct interval-valued fuzzy entropies (Burillo and Bustince 1996). This method uses special aggregation functions applied to interval-contrasts. In this way, we are able to construct interval-valued fuzzy entropies from automorphisms and implication operators. Finally, we study the invariance of our constructions by scaling and translation.
Multiplicative consistency of intuitionistic reciprocal preference relations and its application to missing values estimation and consensus building. The mathematical modelling and representation of Tanino’s multiplicative transitivity property to the case of intuitionistic reciprocal preference relations (IRPRs) is derived via Zadeh’s extension principle and the representation theorem of fuzzy sets. This result guarantees the correct generalisation of the multiplicative transitivity property of reciprocal preference relations (RPRs), and it allows the multiplicative consistency (MC) property of IRPRs to be defined. The MC property used in decision making problems is threefold: (1) to develop a consistency based procedure to estimate missing values in IRPRs using an indirect chain of alternatives; (2) to quantify the consistency index (CI) of preferences provided by experts; and (3) to build a novel consistency based induced ordered weighted averaging (MC-IOWA) operator that associates a higher contribution in the aggregated value to the more consistent information. These three uses are implemented in developing a consensus model for GDM problems with incomplete IRPRs in which the level of agreement between the experts’ individual IRPRs and the collective IRPR, which is referred here as the proximity index (PI), is combined with the CI to design a feedback mechanism to support experts to change some of their preference values using simple advice rules that aim at increasing the level of agreement while, at the same time, keeping a high degree of consistency. In the presence of missing information, the feedback mechanism implements the consistency based procedure to produce appropriate estimate values of the missing ones based on the given information provided by the experts. Under the assumption of constant CI values, the feedback mechanism is proved to converge to unanimous consensus when all experts are provided with recommendations and these are fully implemented. Additionally, visual representation of experts’ consensus position within the group before and after implementing their feedback advice is also provided, which help an expert to revisit his evaluations and make changes if considered appropriate to achieve a higher consensus level. Finally, an IRPR fuzzy majority based quantifier-guided non-dominance degree based prioritisation method using the associated score reciprocal preference relation is proposed to obtain the final solution of consensus.
Hallucinating face by position-patch A novel face hallucination method is proposed in this paper for the reconstruction of a high-resolution face image from a low-resolution observation based on a set of high- and low-resolution training image pairs. Different from most of the established methods based on probabilistic or manifold learning models, the proposed method hallucinates the high-resolution image patch using the same position image patches of each training image. The optimal weights of the training image position-patches are estimated and the hallucinated patches are reconstructed using the same weights. The final high-resolution facial image is formed by integrating the hallucinated patches. The necessity of two-step framework or residue compensation and the differences between hallucination based on patch and global image are discussed. Experiments show that the proposed method without residue compensation generates higher-quality images and costs less computational time than some recent face image super-resolution (hallucination) techniques.
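A rough sketch of the position-patch idea under simplifying assumptions: least-squares weights (summing to one) are computed from the same-position low-resolution training patches and reused on the corresponding high-resolution patches. The patch sizes and random "training data" are placeholders, and the regularization constant eps is an arbitrary choice.

```python
import numpy as np

def position_patch_weights(lr_patch, train_lr_patches, eps=1e-6):
    """Sum-to-one least-squares reconstruction weights of a LR patch
    from the same-position LR training patches (a generic sketch)."""
    D = train_lr_patches - lr_patch                 # (num_train, patch_dim) differences
    G = D @ D.T                                     # local Gram matrix
    w = np.linalg.solve(G + eps * np.trace(G) * np.eye(len(G)), np.ones(len(G)))
    return w / w.sum()

rng = np.random.default_rng(0)
train_lr = rng.random((20, 25))                     # 20 training patches, 5x5 LR each
train_hr = rng.random((20, 100))                    # corresponding 10x10 HR patches
obs_lr = rng.random(25)                             # observed LR patch at this position
w = position_patch_weights(obs_lr, train_lr)
hallucinated_hr = w @ train_hr                      # same weights applied to HR patches
print(hallucinated_hr.shape)
```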
Parallel Opportunistic Routing in Wireless Networks We study benefits of opportunistic routing in a large wireless ad hoc network by examining how the power, delay, and total throughput scale as the number of source–destination pairs increases up to the operating maximum. Our opportunistic routing is novel in a sense that it is massively parallel, i.e., it is performed by many nodes simultaneously to maximize the opportunistic gain while controlling the interuser interference. The scaling behavior of conventional multihop transmission that does not employ opportunistic routing is also examined for comparison. Our main results indicate that our opportunistic routing can exhibit a net improvement in overall power–delay tradeoff over the conventional routing by providing up to a logarithmic boost in the scaling law. Such a gain is possible since the receivers can tolerate more interference due to the increased received signal power provided by the multi user diversity gain, which means that having more simultaneous transmissions is possible.
Robust LMIs with polynomial dependence on the uncertainty Solving robust linear matrix inequalities (LMIs) has long been recognized as an important problem in robust control. Although the solution to this problem is well-known for the case of affine dependence on the uncertainty, to the best of our knowledge, results for other types of dependence are limited. In this paper we address the problem of solving robust LMIs for the case of polynomial dependence on the uncertainty. More precisely, results from numerical integration of polynomial functions are used to develop procedures to minimize the volume of the set of uncertain parameters for which the LMI condition is violated.
1.024129
0.027556
0.026667
0.011111
0.007728
0.002916
0.00132
0.000543
0.000203
0.000054
0.000002
0
0
0
A new approach to knowledge-based design of recurrent neural networks. A major drawback of artificial neural networks (ANNs) is their black-box character. This is especially true for recurrent neural networks (RNNs) because of their intricate feedback connections. In particular, given a problem and some initial information concerning its solution, it is not at all obvious how to design an RNN that is suitable for solving this problem. In this paper, we consider a fuzzy rule base with a special structure, referred to as the fuzzy all-permutations rule base (FARB). Inferring the FARB yields an input-output (IO) mapping that is mathematically equivalent to that of an RNN. We use this equivalence to develop two new knowledge-based design methods for RNNs. The first method, referred to as the direct approach, is based on stating the desired functioning of the RNN in terms of several sets of symbolic rules, each one corresponding to a subnetwork. Each set is then transformed into a suitable FARB. The second method is based on first using the direct approach to design a library of simple modules, such as counters or comparators, and realize them using RNNs. Once designed, the correctness of each RNN can be verified. Then, the initial design problem is solved by using these basic modules as building blocks. This yields a modular and systematic approach for knowledge-based design of RNNs. We demonstrate the efficiency of these approaches by designing RNNs that recognize both regular and nonregular formal languages.
Knowledge Extraction From Neural Networks Using the All-Permutations Fuzzy Rule Base: The LED Display Recognition Problem A major drawback of artificial neural networks (ANNs) is their black-box character. Even when the trained network performs adequately, it is very difficult to understand its operation. In this letter, we use the mathematical equivalence between ANNs and a specific fuzzy rule base to extract the knowledge embedded in the network. We demonstrate this using a benchmark problem: the recognition of digits produced by a light emitting diode (LED) device. The method provides a symbolic and comprehensible description of the knowledge learned by the network during its training.
Extraction of similarity based fuzzy rules from artificial neural networks A method to extract a fuzzy rule based system from a trained artificial neural network for classification is presented. The fuzzy system obtained is equivalent to the corresponding neural network. In the antecedents of the fuzzy rules, it uses the similarity between the input datum and the weight vectors. This implies rules highly understandable. Thus, both the fuzzy system and a simple analysis of the weight vectors are enough to discern the hidden knowledge learnt by the neural network. Several classification problems are presented to illustrate this method of knowledge discovery by using artificial neural networks.
Interpretation of artificial neural networks by means of fuzzy rules This paper presents an extension of the method presented by Benitez et al (1997) for extracting fuzzy rules from an artificial neural network (ANN) that express exactly its behavior. The extraction process provides an interpretation of the ANN in terms of fuzzy rules. The fuzzy rules presented are in accordance with the domain of the input variables. These rules use a new operator in the antecedent. The properties and intuitive meaning of this operator are studied. Next, the role of the biases in the fuzzy rule-based systems is analyzed. Several examples are presented to comment on the obtained fuzzy rule-based systems. Finally, the interpretation of ANNs with two or more hidden layers is also studied
The Vienna Definition Language
Fuzzy algorithms
First-order incremental block-based statistical timing analysis Variability in digital integrated circuits makes timing verification an extremely challenging task. In this paper, a canonical first order delay model is proposed that takes into account both correlated and independent randomness. A novel linear-time block-based statistical timing algorithm is employed to propagate timing quantities like arrival times and required arrival times through the timing graph in this canonical form. At the end of the statistical timing, the sensitivities of all timing quantities to each of the sources of variation are available. Excessive sensitivities can then be targeted by manual or automatic optimization methods to improve the robustness of the design. This paper also reports the first incremental statistical timer in the literature which is suitable for use in the inner loop of physical synthesis or other optimization programs. The third novel contribution of this paper is the computation of local and global criticality probabilities. For a very small cost in CPU time, the probability of each edge or node of the timing graph being critical is computed. Numerical results are presented on industrial ASIC chips with over two million logic gates.
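A small sketch of the canonical first-order delay form and its "add" operation, assuming the usual convention that correlated sensitivities add linearly while the independently random parts combine in root-sum-square; the statistical "max" (via Clark-style moment matching) is where the real complexity lies and is not shown. The sensitivity values are invented.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CanonicalDelay:
    """d = a0 + sum_i a[i]*dX_i + a_r*dR  (global sources dX_i, independent term dR)."""
    a0: float          # nominal (mean) delay
    a: np.ndarray      # sensitivities to globally correlated variation sources
    a_r: float         # sensitivity to an independent, purely random term

    def __add__(self, other):
        # Serial composition along a path: means and correlated sensitivities
        # add linearly, independent parts add in root-sum-square.
        return CanonicalDelay(self.a0 + other.a0,
                              self.a + other.a,
                              float(np.hypot(self.a_r, other.a_r)))

    def sigma(self):
        return float(np.sqrt(np.sum(self.a ** 2) + self.a_r ** 2))

g1 = CanonicalDelay(10.0, np.array([1.0, 0.5]), 0.3)   # hypothetical gate delays
g2 = CanonicalDelay(12.0, np.array([0.8, 0.2]), 0.4)
path = g1 + g2
print(path.a0, path.sigma())
```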
Tensor rank is NP-complete We prove that computing the rank of a three-dimensional tensor over any finite field is NP-complete. Over the rational numbers the problem is NP-hard.
On multi-granular fuzzy linguistic modeling in group decision making problems: A systematic review and future trends. The multi-granular fuzzy linguistic modeling allows the use of several linguistic term sets in fuzzy linguistic modeling. This is quite useful when the problem involves several people with different knowledge levels since they could describe each item with different precision and they could need more than one linguistic term set. Multi-granular fuzzy linguistic modeling has been frequently used in group decision making field due to its capability of allowing each expert to express his/her preferences using his/her own linguistic term set. The aim of this research is to provide insights about the evolution of multi-granular fuzzy linguistic modeling approaches during the last years and discuss their drawbacks and advantages. A systematic literature review is proposed to achieve this goal. Additionally, some possible approaches that could improve the current multi-granular linguistic methodologies are presented.
Exact and Approximate Sparse Solutions of Underdetermined Linear Equations In this paper, we empirically investigate the NP-hard problem of finding sparsest solutions to linear equation systems, i.e., solutions with as few nonzeros as possible. This problem has recently received considerable interest in the sparse approximation and signal processing literature. We use a branch-and-cut approach via the maximum feasible subsystem problem to compute optimal solutions for small instances and investigate the uniqueness of the optimal solutions. We furthermore discuss six (modifications of) heuristics for this problem that appear in different parts of the literature. For small instances, the exact optimal solutions allow us to evaluate the quality of the heuristics, while for larger instances we compare their relative performance. One outcome is that the so-called basis pursuit heuristic performs worse, compared to the other methods. Among the best heuristics are a method due to Mangasarian and one due to Chinneck.
Asymptotic Analysis of MAP Estimation via the Replica Method and Applications to Compressed Sensing The replica method is a nonrigorous but well-known technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method, under the assumption of replica symmetry, to study estimators that are maximum a posteriori (MAP) under a postulated prior distribution. It is shown that with random linear measurements and Gaussian noise, the replica-symmetric prediction of the asymptotic behavior of the postulated MAP estimate of an $n$-dimensional vector “decouples” as $n$ scalar postulated MAP estimators. The result is based on applying a hardening argument to the replica analysis of postulated posterior mean estimators of Tanaka and of Guo and Verdú. The replica-symmetric postulated MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, least absolute shrinkage and selection operator (LASSO), linear estimation with thresholding, and zero norm-regularized estimation. In the case of LASSO estimation, the scalar estimator reduces to a soft-thresholding operator, and for zero norm-regularized estimation, it reduces to a hard threshold. Among other benefits, the replica method provides a computationally tractable method for precisely predicting various performance metrics including mean-squared error and sparsity pattern recovery probability.
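The decoupling result reduces vector MAP estimation to scalar estimators; for LASSO this is soft thresholding and for zero-norm-regularized estimation a hard threshold. A minimal sketch of those two scalar maps follows (the threshold value is arbitrary).

```python
import numpy as np

def soft_threshold(r, t):
    """Scalar estimator corresponding to LASSO / basis-pursuit-type MAP estimation."""
    return np.sign(r) * np.maximum(np.abs(r) - t, 0.0)

def hard_threshold(r, t):
    """Scalar estimator corresponding to zero-norm-regularized estimation."""
    return np.where(np.abs(r) > t, r, 0.0)

r = np.linspace(-3, 3, 7)
print(soft_threshold(r, 1.0))
print(hard_threshold(r, 1.0))
```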
An overview of quality of experience measurement challenges for video applications in IP networks The increase in multimedia content on the Internet has created a renewed interest in quality assessment. There is, however, a main difference from traditional quality assessment approaches: the focus is now on the user-perceived quality, as opposed to the classical network-centered approach. In this paper we overview the most relevant challenges in performing Quality of Experience (QoE) assessment in IP networks and highlight the particular considerations necessary when compared to already deployed alternative mechanisms such as Quality of Service (QoS). To assist in handling these challenges, we first discuss the different approaches to Quality of Experience assessment along with the most relevant QoE metrics, and then we discuss how they are used to provide objective results about user satisfaction.
Fuzzy Concepts and Formal Methods: A Fuzzy Logic Toolkit for Z It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However, some system problems, particularly those drawn from the IS problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper suggests fuzzy set theory as a possible representation scheme for this imprecision or approximation. We provide a summary of a toolkit that defines the operators, measures and modifiers necessary for the manipulation of fuzzy sets and relations. We also provide some examples of the laws that establish an isomorphism between the extended notation presented here and conventional Z when applied to boolean sets and relations.
Fuzzy optimization of units products in mix-product selection problem using fuzzy linear programming approach In this paper, the modified S-curve membership function methodology is used in a real-life industrial problem of mix-product selection. This problem occurs in production planning management, where a decision maker plays an important role in making decisions in an uncertain environment. As analysts, we try to find a good-enough solution for the decision maker to make a final decision. An industrial application of fuzzy linear programming (FLP) through the S-curve membership function has been investigated using a set of real-life data collected from a Chocolate Manufacturing Company. The problem of fuzzy product-mix selection has been defined. The objective of this paper is to find the optimal number of units of products at a higher level of satisfaction, with vagueness as a key factor. Since several decisions were to be taken, a table of optimal units of products with respect to vagueness and degree of satisfaction has been defined to identify the solution with both a higher number of units and a higher degree of satisfaction. The fuzzy outcome shows that more units of products need not lead to a higher degree of satisfaction. The findings of this work indicate that the optimal decision depends on the vagueness factor in the fuzzy system of the mix-product selection problem. Furthermore, the highest number of units of products is obtained when the vagueness is low.
score_0–score_13: 1.249984, 0.124992, 0.124992, 0.04797, 0.000026, 0.000009, 0, 0, 0, 0, 0, 0, 0, 0
Uncertainty quantification and apportionment in air quality models using the polynomial chaos method Current air quality models generate deterministic forecasts by assuming a perfect model, perfectly known parameters, and exact input data. However, our knowledge of the physics is imperfect. It is of interest to extend the deterministic simulation results with ''error bars'' that quantify the degree of uncertainty, and to analyze the impact of uncertain inputs on the simulation results. This added information provides a confidence level for the forecast results. The Monte Carlo (MC) method is a popular approach for air quality model uncertainty analysis, but it converges slowly. This work discusses the polynomial chaos (PC) method, which is more suitable for uncertainty quantification (UQ) in large-scale models. We propose a new approach for uncertainty apportionment (UA), i.e., we develop a PC approach to attribute the uncertainties in model results to different uncertain inputs. The UQ and UA techniques are implemented in the Sulfur Transport Eulerian Model (STEM-III). A typical scenario of air pollution in the northeast region of the USA is considered. The UQ and UA results allow us to assess the combined effects of different input uncertainties on the forecast uncertainty. They also make it possible to quantify the contribution of input uncertainties to the uncertainty in the predicted ozone and PAN concentrations.
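As a point of reference for PC-based UQ of the kind described above, the surrogate and its first two moments take the following standard form (a generic gPC identity, not anything specific to STEM-III), where $\Psi_k$ are the orthogonal polynomials in the random inputs $\xi$ and $\langle\cdot\rangle$ denotes expectation under the input density: $u(\xi) \approx \sum_{k=0}^{P} c_k \Psi_k(\xi), \qquad \mathbb{E}[u] \approx c_0, \qquad \operatorname{Var}[u] \approx \sum_{k=1}^{P} c_k^2 \langle \Psi_k^2 \rangle.$ In the usual Sobol'-type reading of such an expansion, apportioning the output variance to a given input (or group of inputs) amounts to summing $c_k^2 \langle \Psi_k^2 \rangle$ over the basis terms that depend only on that input.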
An analysis of polynomial chaos approximations for modeling single-fluid-phase flow in porous medium systems We examine a variety of polynomial-chaos-motivated approximations to a stochastic form of a steady state groundwater flow model. We consider approaches for truncating the infinite dimensional problem and producing decoupled systems. We discuss conditions under which such decoupling is possible and show that to generalize the known decoupling by numerical cubature, it would be necessary to find new multivariate cubature rules. Finally, we use the acceleration of Monte Carlo to compare the quality of polynomial models obtained for all approaches and find that in general the methods considered are more efficient than Monte Carlo for the relatively small domains considered in this work. A curse of dimensionality in the series expansion of the log-normal stochastic random field used to represent hydraulic conductivity provides a significant impediment to efficient approximations for large domains for all methods considered in this work, other than the Monte Carlo method.
Karhunen-Loève approximation of random fields by generalized fast multipole methods KL approximation of a possibly instationary random field a(ω, x) ∈ L2(Ω, dP; L∞(D)) subject to prescribed mean field Ea(x) = ∫_Ω a(ω, x) dP(ω) and covariance Va(x, x') = ∫_Ω (a(ω, x) − Ea(x))(a(ω, x') − Ea(x')) dP(ω) in a polyhedral domain D ⊂ R^d is analyzed. We show how for stationary covariances Va(x, x') = ga(|x − x'|) with ga(z) analytic outside of z = 0, an M-term approximate KL-expansion aM(ω, x) of a(ω, x) can be computed in log-linear complexity. The approach applies in arbitrary domains D and for nonseparable covariances Ca. It involves Galerkin approximation of the KL eigenvalue problem by discontinuous finite elements of degree p ≥ 0 on a quasiuniform, possibly unstructured mesh of width h in D, plus a generalized fast multipole accelerated Krylov eigensolver. The approximate KL-expansion aM(ω, x) of a(ω, x) has accuracy O(exp(−b M^{1/d})) if ga is analytic at z = 0 and accuracy O(M^{−k/d}) if ga is C^k at zero. It is obtained in O(M N (log N)^b) operations, where N = O(h^{−d}).
High-Order Collocation Methods for Differential Equations with Random Inputs Recently there has been a growing interest in designing efficient methods for the solution of ordinary/partial differential equations with random inputs. To this end, stochastic Galerkin methods appear to be superior to other nonsampling methods and, in many cases, to several sampling methods. However, when the governing equations take complicated forms, numerical implementations of stochastic Galerkin methods can become nontrivial and care is needed to design robust and efficient solvers for the resulting equations. On the other hand, the traditional sampling methods, e.g., Monte Carlo methods, are straightforward to implement, but they do not offer convergence as fast as stochastic Galerkin methods. In this paper, a high-order stochastic collocation approach is proposed. Similar to stochastic Galerkin methods, the collocation methods take advantage of an assumption of smoothness of the solution in random space to achieve fast convergence. However, the numerical implementation of stochastic collocation is trivial, as it requires only repetitive runs of an existing deterministic solver, similar to Monte Carlo methods. The computational cost of the collocation methods depends on the choice of the collocation points, and we present several feasible constructions. One particular choice, based on sparse grids, depends weakly on the dimensionality of the random space and is more suitable for highly accurate computations of practical applications with large dimensional random inputs. Numerical examples are presented to demonstrate the accuracy and efficiency of the stochastic collocation methods.
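The non-intrusive character of stochastic collocation is easy to see in code: the deterministic solver is treated as a black box and evaluated at quadrature nodes. Below is a minimal sketch for a single standard-normal input using Gauss-Hermite nodes; the toy model f and the number of nodes are illustrative placeholders, not examples from the paper.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Toy deterministic "solver": any black-box map from the random input to a scalar output.
def f(xi):
    return np.exp(0.3 * xi) + xi ** 2   # illustrative model, not from the paper

# Probabilists' Gauss-Hermite collocation nodes/weights for a standard normal input.
nodes, weights = hermegauss(10)
weights = weights / np.sqrt(2.0 * np.pi)   # normalize so the weights sum to one

# Repetitive runs of the deterministic solver at the nodes, then quadrature for the moments.
samples = np.array([f(x) for x in nodes])
mean = weights @ samples
var = weights @ (samples - mean) ** 2
print(mean, var)
```

Sparse-grid constructions replace the single one-dimensional rule above with a weighted combination of coarser tensor rules, so that the number of nodes grows only weakly with the number of random dimensions.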
A Posteriori Control of Modeling Errors and Discretization Errors We investigate the concept of dual-weighted residuals for measuring model errors in the numerical solution of nonlinear partial differential equations. The method is first derived in the case where only model errors arise and then extended to handle simultaneously model and discretization errors. We next present an adaptive model/mesh refinement procedure where both sources of error are equilibrated. Various test cases involving Poisson equations and convection diffusion-reaction equations with complex diffusion models (oscillating diffusion coefficient, nonlinear diffusion, multicomponent diffusion matrix) confirm the reliability of the analysis and the efficiency of the proposed methodology.
Stochastic approaches to uncertainty quantification in CFD simulations. This paper discusses two stochastic approaches to computing the propagation of uncertainty in numerical simulations: polynomial chaos and stochastic collocation. Chebyshev polynomials are used in both cases for the conventional, deterministic portion of the discretization in physical space. For the stochastic parameters, polynomial chaos utilizes a Galerkin approximation based upon expansions in Hermite polynomials, whereas stochastic collocation rests upon a novel transformation between the stochastic space and an artificial space. In our present implementation of stochastic collocation, Legendre interpolating polynomials are employed. These methods are discussed in the specific context of a quasi-one-dimensional nozzle flow with uncertainty in inlet conditions and nozzle shape. It is shown that both stochastic approaches efficiently handle uncertainty propagation. Furthermore, these approaches enable computation of statistical moments of arbitrary order in a much more effective way than other usual techniques such as the Monte Carlo simulation or perturbation methods. The numerical results indicate that the stochastic collocation method is substantially more efficient than the full Galerkin, polynomial chaos method. Moreover, the stochastic collocation method extends readily to highly nonlinear equations. An important application is to the stochastic Riemann problem, which is of particular interest for spectral discontinuous Galerkin methods.
Parallel Domain Decomposition Methods for Stochastic Elliptic Equations We present parallel Schwarz-type domain decomposition preconditioned recycling Krylov subspace methods for the numerical solution of stochastic elliptic problems, whose coefficients are assumed to be a random field with finite variance. Karhunen-Loève (KL) expansion and double orthogonal polynomials are used to reformulate the stochastic elliptic problem into a large number of related but uncoupled deterministic equations. The key to an efficient algorithm lies in “recycling computed subspaces.” Based on a careful analysis of the KL expansion, we propose and test a grouping algorithm that tells us when to recycle and when to recompute some components of the expensive computation. We show theoretically and experimentally that the Schwarz preconditioned recycling GMRES method is optimal for the entire family of linear systems. A fully parallel implementation is provided, and scalability results are reported in the paper.
Model reduction of variable-geometry interconnects using variational spectrally-weighted balanced truncation This paper presents a spectrally-weighted balanced truncation technique for RLC interconnects, a technique needed when the interconnect circuit parameters change as a result of variations in the manufacturing process. The salient features of this algorithm are the inclusion of parameter variations in the RLC interconnect, the guaranteed stability of the reduced transfer function, and the availability of provable frequency-weighted error bounds for the reduced-order system. This paper shows that the balanced truncation technique is an effective model-order reduction technique when variations in the circuit parameters are taken into consideration. Experimental results show that the new variational spectrally-weighted balanced truncation attains, on average, 20% more accuracy than the variational Krylov-subspace-based model-order reduction techniques while the run-time is also, on average, 5% faster.
Computational Methods for Sparse Solution of Linear Inverse Problems The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications.
Computation and Refinement of Statistical Bounds on Circuit Delay The growing impact of within-die process variation has created the need for statistical timing analysis, where gate delays are modeled as random variables. Statistical timing analysis has traditionally suffered from exponential run time complexity with circuit size, due to arrival time dependencies created by reconverging paths in the circuit. In this paper, we propose a new approach to statistical timing analysis that is based on statistical bounds of the circuit delay. Since these bounds have linear run time complexity with circuit size, they can be computed efficiently for large circuits. Since both a lower and upper bound on the true statistical delay is available, the quality of the bounds can be determined. If the computed bounds are not sufficiently close to each other, we propose a heuristic to iteratively improve the bounds using selective enumeration of the sample space with additional run time. We demonstrate that the proposed bounds have only a small error and that, by carefully selecting a small set of nodes for enumeration, this error can be further reduced.
Spatio-temporal compressive sensing and internet traffic matrices Many basic network engineering tasks (e.g., traffic engineering, capacity planning, anomaly detection) rely heavily on the availability and accuracy of traffic matrices. However, in practice it is challenging to reliably measure traffic matrices. Missing values are common. This observation brings us into the realm of compressive sensing, a generic technique for dealing with missing values that exploits the presence of structure and redundancy in many real-world systems. Despite much recent progress made in compressive sensing, existing compressive-sensing solutions often perform poorly for traffic matrix interpolation, because real traffic matrices rarely satisfy the technical conditions required for these solutions. To address this problem, we develop a novel spatio-temporal compressive sensing framework with two key components: (i) a new technique called Sparsity Regularized Matrix Factorization (SRMF) that leverages the sparse or low-rank nature of real-world traffic matrices and their spatio-temporal properties, and (ii) a mechanism for combining low-rank approximations with local interpolation procedures. We illustrate our new framework and demonstrate its superior performance in problems involving interpolation with real traffic matrices where we can successfully replace up to 98% of the values. Evaluation in applications such as network tomography, traffic prediction, and anomaly detection confirms the flexibility and effectiveness of our approach.
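A bare-bones version of the low-rank interpolation idea underlying the matrix-factorization component above can be written as alternating ridge-regularized least squares over the observed entries; this sketch omits the spatio-temporal regularization and local interpolation that SRMF adds, and the synthetic data, rank, and regularization weight are placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, r, lam = 50, 80, 3, 0.1
X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))   # synthetic low-rank "traffic matrix"
mask = rng.random((n, m)) < 0.3                                      # ~30% of entries observed

L, R = rng.standard_normal((n, r)), rng.standard_normal((m, r))
for _ in range(50):                                   # alternating least squares
    for i in range(n):                                # update row factors
        idx = np.flatnonzero(mask[i])
        Ri = R[idx]
        L[i] = np.linalg.solve(Ri.T @ Ri + lam * np.eye(r), Ri.T @ X_true[i, idx])
    for j in range(m):                                # update column factors
        idx = np.flatnonzero(mask[:, j])
        Li = L[idx]
        R[j] = np.linalg.solve(Li.T @ Li + lam * np.eye(r), Li.T @ X_true[idx, j])

rel_err = np.linalg.norm((L @ R.T - X_true)[~mask]) / np.linalg.norm(X_true[~mask])
print(rel_err)                                        # error on entries that were never observed
```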
Intuitionistic fuzzy sets: past, present and future Remarks on history, theory, and applications of intuitionistic fuzzy sets are given. Some open problems are introduced.
Restricted Isometries for Partial Random Circulant Matrices In the theory of compressed sensing, restricted isometry analysis has become a standard tool for studying how efficiently a measurement matrix acquires information about sparse and compressible signals. Many recovery algorithms are known to succeed when the restricted isometry constants of the sampling matrix are small. Many potential applications of compressed sensing involve a data-acquisition process that proceeds by convolution with a random pulse followed by (nonrandom) subsampling. At present, the theoretical analysis of this measurement technique is lacking. This paper demonstrates that the sth-order restricted isometry constant is small when the number m of samples satisfies m ≳ (s log n)^{3/2}, where n is the length of the pulse. This bound improves on previous estimates, which exhibit quadratic scaling.
The laws of large numbers for fuzzy random variables The new attempt of weak and strong law of large numbers for fuzzy random variables is discussed in this paper by proposing the convergence in probability and convergence with probability one for fuzzy random variables. We first consider the limit properties of fuzzy numbers by invoking the Hausdorff metric, and then we extend it to the convergence in probability and convergence with probability one for fuzzy random variables. We provide the notion of weak and strong convergence in probability and weak and strong convergence with probability one for fuzzy random variables. Finally we come up with the weak and strong law of large numbers for fuzzy random variables in weak and strong sense. (C) 2000 Elsevier Science B.V. All rights reserved.
score_0–score_13: 1.244, 0.048809, 0.005245, 0.000882, 0.000125, 0.000045, 0.000024, 0.000008, 0, 0, 0, 0, 0, 0
Application of hierarchical matrices for computing the Karhunen–Loève expansion Realistic mathematical models of physical processes contain uncertainties. These models are often described by stochastic differential equations (SDEs) or stochastic partial differential equations (SPDEs) with multiplicative noise. The uncertainties in the right-hand side or the coefficients are represented as random fields. To solve a given SPDE numerically one has to discretise the deterministic operator as well as the stochastic fields. The total dimension of the SPDE is the product of the dimensions of the deterministic part and the stochastic part. To approximate random fields with as few random variables as possible, but still retaining the essential information, the Karhunen–Loève expansion (KLE) becomes important. The KLE of a random field requires the solution of a large eigenvalue problem. Usually it is solved by a Krylov subspace method with a sparse matrix approximation. We demonstrate the use of sparse hierarchical matrix techniques for this. A log-linear computational cost of the matrix-vector product and a log-linear storage requirement yield an efficient and fast discretisation of the random fields presented.
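For orientation, the truncated KL expansion that the hierarchical-matrix machinery above accelerates can be computed directly with dense linear algebra on small problems. In the sketch below the exponential covariance, correlation length, constant mean field, and truncation level are assumed for illustration, and mesh quadrature weights are ignored for brevity, so it does not have the log-linear complexity of the H-matrix approach.

```python
import numpy as np

# Discretize a 1D domain and assemble a covariance matrix (exponential kernel, assumed).
n, ell, M = 200, 0.2, 10
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

# Dominant eigenpairs of the covariance give the truncated KL modes.
vals, vecs = np.linalg.eigh(C)
idx = np.argsort(vals)[::-1][:M]
lam, phi = vals[idx], vecs[:, idx]

# One realization: a(x, omega) = mean(x) + sum_m sqrt(lam_m) * phi_m(x) * xi_m(omega).
rng = np.random.default_rng(0)
xi = rng.standard_normal(M)
field = 1.0 + phi @ (np.sqrt(lam) * xi)      # mean field assumed constant equal to one
print(field[:5])
```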
Hierarchical Tensor Approximation of Output Quantities of Parameter-Dependent PDEs Parametric PDEs appear in a large number of applications such as, e.g., uncertainty quantification and optimization. In many cases, one is interested in scalar output quantities induced by the parameter-dependent solution. The output can be interpreted as a tensor living on a high-dimensional parameter space. Our aim is to adaptively construct an approximation of this tensor in a data-sparse hierarchical tensor format. Once this approximation from an offline computation is available, the evaluation of the output for any parameter value becomes a cheap online task. Moreover, the explicit tensor representation can be used to compute stochastic properties of the output in a straightforward way. The potential of this approach is illustrated by numerical examples.
To Be or Not to Be Intrusive? The Solution of Parametric and Stochastic Equations - the "Plain Vanilla" Galerkin Case. In parametric equations-stochastic equations are a special case-one may want to approximate the solution such that it is easy to evaluate its dependence on the parameters. Interpolation in the parameters is an obvious possibility-in this context often labeled as a collocation method. In the frequent situation where one has a "solver" for a given fixed parameter value, this may be used "nonintrusively" as a black-box component to compute the solution at all the interpolation points independently of each other. By extension, all other methods, and especially simple Galerkin methods, which produce some kind of coupled system, are often classed as "intrusive." We show how, for such "plain vanilla" Galerkin formulations, one may solve the coupled system in a nonintrusive way, and even the simplest form of block-solver has comparable efficiency. This opens at least two avenues for possible speed-up: first, to benefit from the coupling in the iteration by using more sophisticated block-solvers and, second, the possibility of nonintrusive successive rank-one updates as in the proper generalized decomposition (PGD).
Low-Rank Tensor Krylov Subspace Methods for Parametrized Linear Systems We consider linear systems $A(\alpha) x(\alpha) = b(\alpha)$ depending on possibly many parameters $\alpha = (\alpha_1,\ldots,\alpha_p)$. Solving these systems simultaneously for a standard discretization of the parameter range would require a computational effort growing drastically with the number of parameters. We show that a much lower computational effort can be achieved for sufficiently smooth parameter dependencies. For this purpose, computational methods are developed that benefit from the fact that $x(\alpha)$ can be well approximated by a tensor of low rank. In particular, low-rank tensor variants of short-recurrence Krylov subspace methods are presented. Numerical experiments for deterministic PDEs with parametrized coefficients and stochastic elliptic PDEs demonstrate the effectiveness of our approach.
Breaking the Curse of Dimensionality, Or How to Use SVD in Many Dimensions For $d$-dimensional tensors with possibly large $d > 3$, an hierarchical data structure, called the Tree-Tucker format, is presented as an alternative to the canonical decomposition. It has asymptotically the same (and often even smaller) number of representation parameters and viable stability properties. The approach involves a recursive construction described by a tree with the leafs corresponding to the Tucker decompositions of three-dimensional tensors, and is based on a sequence of SVDs for the recursively obtained unfolding matrices and on the auxiliary dimensions added to the initial “spatial” dimensions. It is shown how this format can be applied to the problem of multidimensional convolution. Convincing numerical examples are given.
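The recursive-SVD idea is easiest to see in the closely related tensor-train format: successive SVDs of unfolding matrices produce a chain of small 3D cores. The sketch below is that tensor-train variant rather than the tree-structured Tree-Tucker format itself, and the random test tensor and truncation tolerance are placeholders.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose a d-dimensional array into a chain of 3D cores via successive SVDs."""
    shape = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(rank * shape[0], -1)
    for k in range(len(shape) - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        keep = max(1, int(np.sum(s > eps * s[0])))                 # rank truncation
        cores.append(U[:, :keep].reshape(rank, shape[k], keep))
        rank = keep
        mat = (s[:keep, None] * Vt[:keep]).reshape(rank * shape[k + 1], -1)
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

A = np.random.rand(4, 5, 6, 7)
cores = tt_svd(A)
print([c.shape for c in cores])
recon = np.einsum("aib,bjc,ckd,dle->aijkle", *cores).reshape(A.shape)
print(np.linalg.norm(recon - A) / np.linalg.norm(A))    # reconstruction error of the decomposition
```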
Time-dependent generalized polynomial chaos Generalized polynomial chaos (gPC) has non-uniform convergence and tends to break down for long-time integration. The reason is that the probability density distribution (PDF) of the solution evolves as a function of time. The set of orthogonal polynomials associated with the initial distribution will therefore not be optimal at later times, thus causing the reduced efficiency of the method for long-time integration. Adaptation of the set of orthogonal polynomials with respect to the changing PDF removes the error with respect to long-time integration. In this method new stochastic variables and orthogonal polynomials are constructed as time progresses. In the new stochastic variable the solution can be represented exactly by linear functions. This allows the method to use only low order polynomial approximations with high accuracy. The method is illustrated with a simple decay model for which an analytic solution is available and subsequently applied to the three mode Kraichnan-Orszag problem with favorable results.
Preconditioning Stochastic Galerkin Saddle Point Systems Mixed finite element discretizations of deterministic second-order elliptic PDEs lead to saddle point systems for which the study of iterative solvers and preconditioners is mature. Galerkin approximation of solutions of stochastic second-order elliptic PDEs, which couple standard mixed finite element discretizations in physical space with global polynomial approximation on a probability space, also give rise to linear systems with familiar saddle point structure. For stochastically nonlinear problems, the solution of such systems presents a serious computational challenge. The blocks are sums of Kronecker products of pairs of matrices associated with two distinct discretizations, and the systems are large, reflecting the curse of dimensionality inherent in most stochastic approximation schemes. Moreover, for the problems considered herein, the leading blocks of the saddle point matrices are block-dense, and the cost of a matrix vector product is nontrivial. We implement a stochastic Galerkin discretization for the steady-state diffusion problem written as a mixed first-order system. The diffusion coefficient is assumed to be a lognormal random field, approximated via a nonlinear function of a finite number of Gaussian random variables. We study the resulting saddle point systems and investigate the efficiency of block-diagonal preconditioners of Schur complement and augmented type for use with the minimal residual method (MINRES). By introducing so-called Kronecker product preconditioners, we improve the robustness of cheap, mean-based preconditioners with respect to the statistical properties of the stochastically nonlinear diffusion coefficients.
Explicit cost bounds of algorithms for multivariate tensor product problems We study multivariate tensor product problems in the worst case and average case settings. They are defined on functions of d variables. For arbitrary d, we provide explicit upper bounds on the costs of algorithms which compute an ε-approximation to the solution. The cost bounds are of the form $(c(d) + 2)\,\beta_1 \left(\beta_2 + \beta_3 \frac{\ln(1/\varepsilon)}{d-1}\right)^{\beta_4 (d-1)} \left(\frac{1}{\varepsilon}\right)^{\beta_5}$. Here c(d) is the cost of one function evaluation (or one linear functional evaluation), and the $\beta_i$'s do not...
Statistical Timing Analysis Considering Spatial Correlations using a Single Pert-Like Traversal We present an efficient statistical timing analysis algorithm that predicts the probability distribution of the circuit delay while incorporating the effects of spatial correlations of intra-die parameter variations, using a method based on principal component analysis. The method uses a PERT-like circuit graph traversal, and has a run-time that is linear in the number of gates and interconnects, as well as the number of grid partitions used to model spatial correlations. On average, the mean and standard deviation values computed by our method have errors of 0.2% and 0.9%, respectively, in comparison with a Monte Carlo simulation.
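For contrast with the analytic, correlation-aware traversal above, the Monte Carlo baseline it is compared against can be sketched in a few lines on a toy timing graph; the graph, delay means, and sigmas below are invented, and correlations between gates are ignored here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 20000

# Toy timing DAG: arrival(node) = max over fan-in of arrival(pred), plus the node's own delay.
preds = {"g1": [], "g2": [], "g3": ["g1", "g2"], "g4": ["g2"], "out": ["g3", "g4"]}
mean = {"g1": 1.0, "g2": 1.2, "g3": 0.8, "g4": 0.9, "out": 0.5}
std = {k: 0.1 * v for k, v in mean.items()}        # assumed 10% sigma, uncorrelated

arrival = {}
for node in ["g1", "g2", "g3", "g4", "out"]:        # topological order
    d = rng.normal(mean[node], std[node], n_trials)
    fan_in = preds[node]
    base = np.max([arrival[p] for p in fan_in], axis=0) if fan_in else 0.0
    arrival[node] = base + d

print(arrival["out"].mean(), arrival["out"].std())  # sampled delay distribution at the output
```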
On sparse representations in arbitrary redundant bases The purpose of this contribution is to generalize some recent results on sparse representations of signals in redundant bases. The question that is considered is the following: given a matrix A of dimension (n,m) with mn and a vector b=Ax, find a sufficient condition for b to have a unique sparsest representation x as a linear combination of columns of A. Answers to this question are known when A is the concatenation of two unitary matrices and either an extensive combinatorial search is performed or a linear program is solved. We consider arbitrary A matrices and give a sufficient condition for the unique sparsest solution to be the unique solution to both a linear program or a parametrized quadratic program. The proof is elementary and the possibility of using a quadratic program opens perspectives to the case where b=Ax+e with e a vector of noise or modeling errors.
IBM infosphere streams for scalable, real-time, intelligent transportation services With the widespread adoption of location tracking technologies like GPS, the domain of intelligent transportation services has seen growing interest in the last few years. Services in this domain make use of real-time location-based data from a variety of sources, combine this data with static location-based data such as maps and points of interest databases, and provide useful information to end-users. Some of the major challenges in this domain include i) scalability, in terms of processing large volumes of real-time and static data; ii) extensibility, in terms of being able to add new kinds of analyses on the data rapidly, and iii) user interaction, in terms of being able to support different kinds of one-time and continuous queries from the end-user. In this paper, we demonstrate the use of IBM InfoSphere Streams, a scalable stream processing platform, for tackling these challenges. We describe a prototype system that generates dynamic, multi-faceted views of transportation information for the city of Stockholm, using real vehicle GPS and road-network data. The system also continuously derives current traffic statistics, and provides useful value-added information such as shortest-time routes from real-time observed and inferred traffic conditions. Our performance experiments illustrate the scalability of the system. For instance, our system can process over 120000 incoming GPS points per second, combine it with a map containing over 600,000 links, continuously generate different kinds of traffic statistics and answer user queries.
Joint sizing and adaptive independent gate control for FinFET circuits operating in multiple voltage regimes using the logical effort method FinFET has been proposed as an alternative for bulk CMOS in current and future technology nodes due to more effective channel control, reduced random dopant fluctuation, high ON/OFF current ratio, lower energy consumption, etc. Key characteristics of FinFET operating in the sub/near-threshold region are very different from those in the strong-inversion region. This paper first introduces an analytical transregional FinFET model with high accuracy in both sub- and near-threshold regimes. Next, the paper extends the well-known and widely-adopted logical effort delay calculation and optimization method to FinFET circuits operating in multiple voltage (sub/near/super-threshold) regimes. More specifically, a joint optimization of gate sizing and adaptive independent gate control is presented and solved in order to minimize the delay of FinFET circuits operating in multiple voltage regimes. Experimental results on a 32nm Predictive Technology Model for FinFET demonstrate the effectiveness of the proposed logical effort-based delay optimization framework.
Restricted Isometries for Partial Random Circulant Matrices In the theory of compressed sensing, restricted isometry analysis has become a standard tool for studying how efficiently a measurement matrix acquires information about sparse and compressible signals. Many recovery algorithms are known to succeed when the restricted isometry constants of the sampling matrix are small. Many potential applications of compressed sensing involve a data-acquisition process that proceeds by convolution with a random pulse followed by (nonrandom) subsampling. At present, the theoretical analysis of this measurement technique is lacking. This paper demonstrates that the sth-order restricted isometry constant is small when the number m of samples satisfies m ≳ (s log n)^{3/2}, where n is the length of the pulse. This bound improves on previous estimates, which exhibit quadratic scaling.
Fuzzy management of user actions during hypermedia navigation The recent dramatic advances in the field of multimedia systems have made practicable the development of an Intelligent Tutoring Multimedia (ITM). These systems contain hypertextual structures that belong to the class of hypermedia systems. ITM development involves the definition of a suitable navigation model in addition to the other modules of an Intelligent Tutoring System (ITS), i.e. the Database module, User module, Interface module, and Teaching module. The navigation module receives as inputs the state of the system and the user's current assessment and tries to optimize the fruition of the knowledge base. Moreover, this module is responsible for managing the effects of disorientation and cognitive overhead. In this paper we deal essentially with four topics: (i) to define a fuzzy-based user model able to manage adequately the user's cognitive state, the orientation, and the cognitive overhead; (ii) to introduce fuzzy tools within the navigation module in order to carry out moves on the grounds of meaningful data; (iii) to define a set of functions that can dynamically infer new states concerning the user's interests; (iv) to classify the hypermedia actions according to their semantics.
score_0–score_13: 1.072, 0.041333, 0.027556, 0.015111, 0.008983, 0.003175, 0.000311, 0.000042, 0, 0, 0, 0, 0, 0
A simple, efficient and near optimal algorithm for compressed sensing When sampling signals below the Nyquist rate, efficient and accurate reconstruction is nevertheless possible, whenever the sampling system is well behaved and the signal is well approximated by a sparse vector. This statement has been formalised in the recently developed theory of compressed sensing, which developed conditions on the sampling system and proved the performance of several efficient algorithms for signal reconstruction under these conditions. In this paper, we prove that a very simple and efficient algorithm, known as Iterative Hard Thresholding, has near optimal performance guarantees rivalling those derived for other state of the art approaches.
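The recursion analyzed above is short enough to state directly: a gradient step on the least-squares residual followed by keeping the s largest entries. The sketch below uses a conservative fixed step size, and the problem sizes are chosen for illustration only.

```python
import numpy as np

def iht(A, y, s, iters=500):
    """Iterative Hard Thresholding: gradient step, then keep the s largest-magnitude entries."""
    x = np.zeros(A.shape[1])
    mu = 1.0 / np.linalg.norm(A, 2) ** 2            # conservative step size
    for _ in range(iters):
        g = x + mu * (A.T @ (y - A @ x))            # gradient step on 0.5*||y - Ax||^2
        keep = np.argsort(np.abs(g))[-s:]           # indices of the s largest magnitudes
        x = np.zeros_like(g)
        x[keep] = g[keep]                           # hard threshold
    return x

rng = np.random.default_rng(0)
m, n, s = 60, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)        # well-behaved random sampling matrix
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
print(np.linalg.norm(iht(A, A @ x_true, s) - x_true))   # should be near zero for these favorable sizes
```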
Subspace Pursuit for Compressive Sensing: Closing the Gap Between Performance and Complexity We propose a new method for reconstruction of sparse signals with and without noisy perturbations, termed the subspace pursuit algorithm. The algorithm has two important characteristics: low computational complexity, comparable to that of orthogonal matching pursuit techniques, and reconstruction accuracy of the same order as that of LP optimization methods. The presented analysis shows that in the noiseless setting, the proposed algorithm can exactly reconstruct arbitrary sparse signals provided that the sensing matrix satisfies the restricted isometry property with a constant parameter. In the noisy setting and in the case that the signal is not exactly sparse, it can be shown that the mean squared error of the reconstruction is upper bounded by constant multiples of the measurement and signal perturbation energies.
Uniform Uncertainty Principle and Signal Recovery via Regularized Orthogonal Matching Pursuit This paper seeks to bridge the two major algorithmic approaches to sparse signal recovery from an incomplete set of linear measurements - L1-minimization methods and iterative methods (Matching Pursuits). We find a simple regularized version of Orthogonal Matching Pursuit (ROMP) which has advantages of both approaches: the speed and transparency of OMP and the strong uniform guarantees of L1-minimization. Our algorithm ROMP reconstructs a sparse signal in a number of iterations linear in the sparsity, and the reconstruction is exact provided the linear measurements satisfy the Uniform Uncertainty Principle.
CoSaMP: Iterative signal recovery from incomplete and inaccurate samples Compressive sampling offers a new paradigm for acquiring signals that are compressible with respect to an orthonormal basis. The major algorithmic challenge in compressive sampling is to approximate a compressible signal from noisy samples. This paper describes a new iterative recovery algorithm called CoSaMP that delivers the same guarantees as the best optimization-based approaches. Moreover, this algorithm offers rigorous bounds on computational cost and storage. It is likely to be extremely efficient for practical problems because it requires only matrix-vector multiplies with the sampling matrix. For compressible signals, the running time is just O(N log² N), where N is the length of the signal. In applications, most signals of interest contain scant information relative to their ambient dimension, but the classical approach to signal acquisition ignores this fact. We usually collect a complete representation of the target signal and process this representation to sieve out the actionable information. Then we discard the rest. Contemplating this ugly inefficiency, one might ask if it is possible instead to acquire compressive samples. In other words, is there some type of measurement that automatically winnows out the information from a signal? Incredibly, the answer is sometimes yes. Compressive sampling refers to the idea that, for certain types of signals, a small number of nonadaptive samples carries sufficient information to approximate the signal well. Research in this area has two major components: Sampling: How many samples are necessary to reconstruct signals to a specified precision? What type of samples? How can these sampling schemes be implemented in practice? Reconstruction: Given the compressive samples, what algorithms can efficiently construct a signal approximation?
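The identify/merge/estimate/prune loop described above can be sketched compactly with a dense least-squares solve in place of the iterative inner solves recommended for large problems; the test sizes are placeholders.

```python
import numpy as np

def cosamp(A, y, s, iters=20):
    """CoSaMP sketch: proxy from the residual, support merge, least squares, prune to s terms."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        proxy = A.T @ (y - A @ x)                        # signal proxy from the current residual
        omega = np.argsort(np.abs(proxy))[-2 * s:]       # identify the 2s largest proxy entries
        support = np.union1d(omega, np.flatnonzero(x))   # merge with the current support
        b, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x_tmp = np.zeros(A.shape[1])
        x_tmp[support] = b
        keep = np.argsort(np.abs(x_tmp))[-s:]            # prune to the s largest coefficients
        x = np.zeros(A.shape[1])
        x[keep] = x_tmp[keep]
    return x

rng = np.random.default_rng(1)
m, n, s = 60, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
print(np.linalg.norm(cosamp(A, A @ x_true, s) - x_true))
```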
Fast and RIP-Optimal Transforms We study constructions of $k \times n$ matrices $A$ that both (1) satisfy the restricted isometry property (RIP) at sparsity $s$ with optimal parameters, and (2) are efficient in the sense that only $O(n\log n)$ operations are required to compute $Ax$ given a vector $x$. Our construction is based on repeated application of independent transformations of the form $DH$, where $H$ is a Hadamard or Fourier transform and $D$ is a diagonal matrix with random $\{+1,-1\}$ elements on the diagonal, followed by any $k \times n$ matrix of orthonormal rows (e.g. selection of $k$ coordinates). We provide guarantees (1) and (2) for a regime of parameters that is comparable with previous constructions, but using Fourier transforms and diagonal matrices only. Our main result can be interpreted as a rate of convergence to a random matrix of a random walk in the orthogonal group, in which each step is obtained by a Fourier transform $H$ followed by a random sign change matrix $D$. After a few steps, the resulting matrix is random enough in the sense that any arbitrary selection of rows gives rise to an RIP matrix for sparsity as high as slightly below $s=\sqrt{n}$, with high probability. The proof uses a bootstrapping technique that, roughly speaking, says that if a matrix $A$ has some suboptimal RIP parameters, then the action of two steps in this random walk on this matrix has improved parameters. This idea is interesting in its own right, and may be used to strengthen other constructions.
Restricted isometry of fourier matrices and list decodability of random linear codes We prove that a random linear code over F_q, with probability arbitrarily close to 1, is list decodable at radius 1 − 1/q − ε with list size L = O(1/ε²) and rate R = Ω_q(ε²/log³(1/ε)). Up to the polylogarithmic factor in 1/ε and constant factors depending on q, this matches the lower bound L = Ω_q(1/ε²) for the list size and upper bound R = O_q(ε²) for the rate. Previously only existence (and not abundance) of such codes was known for the special case q = 2 (Guruswami, Håstad, Sudan and Zuckerman, 2002). In order to obtain our result, we employ a relaxed version of the well known Johnson bound on list decoding that translates the average Hamming distance between codewords to list decoding guarantees. We furthermore prove that the desired average-distance guarantees hold for a code provided that a natural complex matrix encoding the codewords satisfies the Restricted Isometry Property with respect to the Euclidean norm (RIP-2). For the case of random binary linear codes, this matrix coincides with a random submatrix of the Hadamard-Walsh transform matrix that is well studied in the compressed sensing literature. Finally we improve the analysis of Rudelson and Vershynin (2008) on the number of random frequency samples required for exact reconstruction of k-sparse signals of length N. Specifically we improve the number of samples from O(k log N · log² k · (log k + log log N)) to O(k log N · log³ k). The proof involves bounding the expected supremum of a related Gaussian process by using an improved analysis of the metric defined by the process. This improvement is crucial for our application in list decoding.
KF-CS: Compressive Sensing on Kalman Filtered Residual We consider the problem of recursively reconstructing time sequences of sparse signals (with unknown and time-varying sparsity patterns) from a limited number of linear incoherent measurements with additive noise. The idea of our proposed solution, KF CS-residual (KF-CS) is to replace compressed sensing (CS) on the observation by CS on the Kalman filtered (KF) observation residual computed using the previous estimate of the support. KF-CS error stability over time is studied. Simulation comparisons with CS and LS-CS are shown.
Stability Results for Random Sampling of Sparse Trigonometric Polynomials Recently, it has been observed that a sparse trigonometric polynomial, i.e., having only a small number of nonzero coefficients, can be reconstructed exactly from a small number of random samples using basis pursuit (BP) or orthogonal matching pursuit (OMP). In this paper, it is shown that recovery by a BP variant is stable under perturbation of the samples values by noise. A similar partial result for OMP is provided. For BP, in addition, the stability result is extended to (nonsparse) trigonometric polynomials that can be well approximated by sparse ones. The theoretical findings are illustrated by numerical experiments.
Just relax: convex programming methods for identifying sparse signals in noise This paper studies a difficult and fundamental problem that arises throughout electrical engineering, applied mathematics, and statistics. Suppose that one forms a short linear combination of elementary signals drawn from a large, fixed collection. Given an observation of the linear combination that has been contaminated with additive noise, the goal is to identify which elementary signals participated and to approximate their coefficients. Although many algorithms have been proposed, there is little theory which guarantees that these algorithms can accurately and efficiently solve the problem. This paper studies a method called convex relaxation, which attempts to recover the ideal sparse signal by solving a convex program. This approach is powerful because the optimization can be completed in polynomial time with standard scientific software. The paper provides general conditions which ensure that convex relaxation succeeds. As evidence of the broad impact of these results, the paper describes how convex relaxation can be used for several concrete signal recovery problems. It also describes applications to channel coding, linear regression, and numerical analysis
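In practice the convex relaxation above is often solved not by a generic LP/QP solver but by a simple proximal gradient loop; the sketch below is plain ISTA for the l1-penalized least-squares form, with the penalty weight, iteration count, and test data as placeholders.

```python
import numpy as np

def ista(A, y, lam=0.05, iters=500):
    """Proximal gradient (ISTA) for min 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L               # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft-thresholding prox
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(60)     # noisy observation
print(np.flatnonzero(np.abs(ista(A, y)) > 0.1))     # recovered support (amplitudes are slightly biased)
```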
Uncertainty quantification via random domain decomposition and probabilistic collocation on sparse grids Quantitative predictions of the behavior of many deterministic systems are uncertain due to ubiquitous heterogeneity and insufficient characterization by data. We present a computational approach to quantify predictive uncertainty in complex phenomena, which is modeled by (partial) differential equations with uncertain parameters exhibiting multi-scale variability. The approach is motivated by flow in random composites whose internal architecture (spatial arrangement of constitutive materials) and spatial variability of properties of each material are both uncertain. The proposed two-scale framework combines a random domain decomposition (RDD) and a probabilistic collocation method (PCM) on sparse grids to quantify these two sources of uncertainty, respectively. The use of sparse grid points significantly reduces the overall computational cost, especially for random processes with small correlation lengths. A series of one-, two-, and three-dimensional computational examples demonstrate that the combined RDD-PCM approach yields efficient, robust and non-intrusive approximations for the statistics of diffusion in random composites.
A convex programming approach for generating guaranteed passive approximations to tabulated frequency-data In this paper, we present a methodology for generating guaranteed passive time-domain models of subsystems described by tabulated frequency-domain data obtained through measurement or through physical simulation. Such descriptions are commonly used to represent on- and off-chip interconnect effects, package parasitics, and passive devices common in high-frequency integrated circuit applications. The approach, which incorporates passivity constraints via convex optimization algorithms, is guaranteed to produce a passive-system model that is optimal in the sense of having minimum error in the frequency band of interest over all models with a prescribed set of system poles. We demonstrate that this algorithm is computationally practical for generating accurate high-order models of data sets representing realistic, complicated multiinput, multioutput systems.
A simple Cooperative diversity method based on network path selection Cooperative diversity has been recently proposed as a way to form virtual antenna arrays that provide dramatic gains in slow fading wireless environments. However, most of the proposed solutions require distributed space-time coding algorithms, the careful design of which is left for future investigation if there is more than one cooperative relay. We propose a novel scheme that alleviates these problems and provides diversity gains on the order of the number of relays in the network. Our scheme first selects the best relay from a set of M available relays and then uses this "best" relay for cooperation between the source and the destination. We develop and analyze a distributed method to select the best relay that requires no topology information and is based on local measurements of the instantaneous channel conditions. This method also requires no explicit communication among the relays. The success (or failure) to select the best available path depends on the statistics of the wireless channel, and a methodology to evaluate performance for any kind of wireless channel statistics, is provided. Information theoretic analysis of outage probability shows that our scheme achieves the same diversity-multiplexing tradeoff as achieved by more complex protocols, where coordination and distributed space-time coding for M relay nodes is required, such as those proposed by Laneman and Wornell (2003). The simplicity of the technique allows for immediate implementation in existing radio hardware and its adoption could provide for improved flexibility, reliability, and efficiency in future 4G wireless systems.
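The selection step itself reduces to an argmax over per-relay channel metrics; the snippet below uses a bottleneck criterion (the weaker of a relay's two hops) on synthetic Rayleigh-fading draws, which is one of the selection policies considered in this line of work, and the channel model and relay count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
M = 8                                               # number of candidate relays (assumed)

# Complex Gaussian (Rayleigh-fading) channel gains source->relay and relay->destination.
h_sr = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
h_rd = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

# "Best" relay under a bottleneck policy: the relay whose weaker hop is strongest.
metric = np.minimum(np.abs(h_sr) ** 2, np.abs(h_rd) ** 2)
best = int(np.argmax(metric))
print(best, metric[best])
```

The distributed scheme described in the abstract reaches the same selection without central coordination, each relay acting on its own local channel estimates; the snippet only shows the selection criterion itself.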
Noniterative MAP reconstruction using sparse matrix representations. We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of the noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, as compared to a linear iterative reconstruction methods.
On the Rekeying Load in Group Key Distributions Using Cover-Free Families Key distributions based on cover-free families have been recently proposed for secure rekeying in group communication systems after multiple simultaneous user ejections. Existing literature has not quantified how difficult this rekeying operation might be. This study provides upper bounds on the number of messages necessary to rekey a key distribution based on symmetric combinatorial designs after one or two simultaneous user ejections. Connections are made to results from finite geometry to show that these bounds are tight for certain key distributions. It is shown that in general determining the minimal number of messages necessary to rekey a group communication system based on a cover-free family is NP-hard.
score_0–score_13: 1.20078, 0.011229, 0.006909, 0.004782, 0.000738, 0.000368, 0.0002, 0.000124, 0.000052, 0.000006, 0, 0, 0, 0
Selection of polynomial chaos bases via Bayesian model uncertainty methods with applications to sparse approximation of PDEs with stochastic inputs Generalized polynomial chaos (gPC) expansions allow us to represent the solution of a stochastic system using a series of polynomial chaos basis functions. The number of gPC terms increases dramatically as the dimension of the random input variables increases. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs when the corresponding deterministic solver is computationally expensive, evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solutions, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points, via (1) the Bayesian model average (BMA) or (2) the median probability model, and their construction as spatial functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; while the latter provides a sparse representation of the stochastic solutions by evaluating the expansion on a subset of dominating gPC bases. Moreover, the proposed methods quantify the importance of the gPC bases in the probabilistic sense through inclusion probabilities. We design a Markov chain Monte Carlo (MCMC) sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed methods are suitable for, but not restricted to, problems whose stochastic solutions are sparse in the stochastic space with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the accuracy and performance of the proposed methods and make comparisons with other approaches on solving elliptic SPDEs with 1-, 14- and 40-random dimensions.
On a near optimal sampling strategy for least squares polynomial regression. We present a sampling strategy for least squares polynomial regression. The strategy combines two recently developed methods for the least squares approach: the Christoffel least squares algorithm and quasi-optimal sampling. More specifically, our new strategy first chooses samples from the pluripotential equilibrium measure and then re-orders the samples by the quasi-optimal algorithm. A weighted least squares problem is solved on a (much) smaller sample set to obtain the regression result. It is then demonstrated that the new strategy results in a polynomial least squares method with high accuracy and robust stability at an almost minimal number of samples.
Reweighted ℓ1 minimization method for stochastic elliptic differential equations. We consider elliptic stochastic partial differential equations (SPDEs) with random coefficients and solve them by expanding the solution using generalized polynomial chaos (gPC). Under some mild conditions on the coefficients, the solution is “sparse” in the random space, i.e., only a small number of gPC basis functions make a considerable contribution to the solution. To exploit this sparsity, we employ reweighted ℓ1 minimization to recover the coefficients of the gPC expansion. We also combine this method with random sampling points based on the Chebyshev probability measure to further increase the accuracy of the recovery of the gPC coefficients. We first present a one-dimensional test to demonstrate the main idea, and then we consider 14- and 40-dimensional elliptic SPDEs to demonstrate the significant improvement of this method over the standard ℓ1 minimization method. For moderately high dimensional (∼10) problems, the combination of the Chebyshev measure with reweighted ℓ1 minimization performs well, while for higher dimensional problems reweighted ℓ1 alone is sufficient. The proposed approach is especially suitable for problems for which the deterministic solver is very expensive, since it reuses the sampling results and exploits all the information available from limited sources.
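A schematic version of the reweighting loop used above: each outer pass solves a weighted ℓ1 problem and then sets the next weights to 1/(|x_i| + eps). The inner solver here is a plain ISTA-style proximal gradient with per-coordinate thresholds, which is a simplification of the paper's setup (no gPC structure, no Chebyshev sampling), and all parameters are placeholders.

```python
import numpy as np

def weighted_l1(A, y, w, lam=1e-3, iters=400):
    """Minimize 0.5*||A x - y||^2 + lam * sum_i w_i |x_i| by proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam * w / L, 0.0)   # coordinate-wise threshold
    return x

def reweighted_l1(A, y, outer=5, eps=1e-2):
    w = np.ones(A.shape[1])
    for _ in range(outer):
        x = weighted_l1(A, y, w)
        w = 1.0 / (np.abs(x) + eps)      # small coefficients are penalized more on the next pass
    return x
```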
A CHRISTOFFEL FUNCTION WEIGHTED LEAST SQUARES ALGORITHM FOR COLLOCATION APPROXIMATIONS We propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.
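A one-dimensional sketch of the sampling-plus-weighting recipe described above: draw samples from the (arcsine) equilibrium measure of [-1, 1], weight each sample by the reciprocal of the normalized Christoffel function of the Legendre basis, and solve a weighted least-squares problem. The target function, degree, and sample count are placeholders.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(5)
deg, n_samp = 15, 120

def f(x):                                           # illustrative target function
    return np.exp(x) * np.sin(3 * x)

# Samples from the equilibrium (arcsine / Chebyshev) measure on [-1, 1].
x = np.cos(np.pi * rng.uniform(size=n_samp))

# Legendre Vandermonde, rescaled so the columns are orthonormal for the uniform measure.
scale = np.sqrt(2 * np.arange(deg + 1) + 1)
V = legendre.legvander(x, deg) * scale

# Christoffel-function weights: (number of basis functions) / sum of squared orthonormal polys.
w = (deg + 1) / np.sum(V ** 2, axis=1)

# Weighted least squares for the expansion coefficients.
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(V * sw[:, None], f(x) * sw, rcond=None)

xx = np.linspace(-1.0, 1.0, 400)
err = np.max(np.abs((legendre.legvander(xx, deg) * scale) @ coef - f(xx)))
print(err)                                          # uniform error of the weighted-LS surrogate
```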
Adaptive sparse polynomial chaos expansion based on least angle regression Polynomial chaos (PC) expansions are used in stochastic finite element analysis to represent the random model response by a set of coefficients in a suitable (so-called polynomial chaos) basis. The number of terms to be computed grows dramatically with the size of the input random vector, which makes the computational cost of classical solution schemes (be they intrusive, i.e. of Galerkin type, or non-intrusive) unaffordable when the deterministic finite element model is expensive to evaluate. To address such problems, the paper describes a non-intrusive method that builds a sparse PC expansion. First, an original strategy for truncating the PC expansions, based on hyperbolic index sets, is proposed. Then an adaptive algorithm based on least angle regression (LAR) is devised for automatically detecting the significant coefficients of the PC expansion. Besides the sparsity of the basis, the experimental design used at each step of the algorithm is systematically complemented in order to avoid the overfitting phenomenon. The accuracy of the PC metamodel is checked using an estimate inspired by statistical learning theory, namely the corrected leave-one-out error. As a consequence, a rather small number of PC terms are eventually retained (sparse representation), which may be obtained at a reduced computational cost compared to the classical ''full'' PC approximation. The convergence of the algorithm is shown on an analytical function. Then the method is illustrated on three stochastic finite element problems. The first model features 10 input random variables, whereas the two others involve an input random field, which is discretized into 38 and 30-500 random variables, respectively.
Adaptive ANOVA decomposition of stochastic incompressible and compressible flows Realistic representation of stochastic inputs associated with various sources of uncertainty in the simulation of fluid flows leads to high dimensional representations that are computationally prohibitive. We investigate the use of adaptive ANOVA decomposition as an effective dimension-reduction technique in modeling steady incompressible and compressible flows with nominal dimension of random space up to 100. We present three different adaptivity criteria and compare the adaptive ANOVA method against sparse grid, Monte Carlo and quasi-Monte Carlo methods to evaluate its relative efficiency and accuracy. For the incompressible flow problem, the effect of random temperature boundary conditions (modeled as high-dimensional stochastic processes) on the Nusselt number is investigated for different values of correlation length. For the compressible flow, the effects of random geometric perturbations (simulating random roughness) on the scattering of a strong shock wave is investigated both analytically and numerically. A probabilistic collocation method is combined with adaptive ANOVA to obtain both incompressible and compressible flow solutions. We demonstrate that for both cases even draconian truncations of the ANOVA expansion lead to accurate solutions with a speed-up factor of three orders of magnitude compared to Monte Carlo and at least one order of magnitude compared to sparse grids for comparable accuracy.
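For reference, the functional ANOVA expansion that the adaptive truncation above operates on can be written in its standard form (not a result specific to the paper):

```latex
% Functional ANOVA expansion; an adaptive truncation keeps only the
% low-order terms whose relative variance contribution exceeds a tolerance.
f(\xi_1,\dots,\xi_d) = f_0
  + \sum_{i=1}^{d} f_i(\xi_i)
  + \sum_{1 \le i < j \le d} f_{ij}(\xi_i,\xi_j)
  + \cdots
```

An adaptivity criterion then retains only those low-order terms whose contribution (e.g., relative variance) exceeds a prescribed tolerance.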
An Anisotropic Sparse Grid Stochastic Collocation Method for Partial Differential Equations with Random Input Data This work proposes and analyzes an anisotropic sparse grid stochastic collocation method for solving partial differential equations with random coefficients and forcing terms (input data of the model). The method consists of a Galerkin approximation in the space variables and a collocation, in probability space, on sparse tensor product grids utilizing either Clenshaw-Curtis or Gaussian knots. Even in the presence of nonlinearities, the collocation approach leads to the solution of uncoupled deterministic problems, just as in the Monte Carlo method. This work includes a priori and a posteriori procedures to adapt the anisotropy of the sparse grids to each given problem. These procedures seem to be very effective for the problems under study. The proposed method combines the advantages of isotropic sparse collocation with those of anisotropic full tensor product collocation: the first approach is effective for problems depending on random variables which weigh approximately equally in the solution, while the benefits of the latter approach become apparent when solving highly anisotropic problems depending on a relatively small number of random variables, as in the case where input random variables are Karhunen-Loève truncations of “smooth” random fields. This work also provides a rigorous convergence analysis of the fully discrete problem and demonstrates (sub)exponential convergence in the asymptotic regime and algebraic convergence in the preasymptotic regime, with respect to the total number of collocation points. It also shows that the anisotropic approximation breaks the curse of dimensionality for a wide set of problems. Numerical examples illustrate the theoretical results and are used to compare this approach with several others, including the standard Monte Carlo. In particular, for moderately large-dimensional problems, the sparse grid approach with a properly chosen anisotropy seems to be very efficient and superior to all examined methods.
Compressed Sensing: How Sharp Is the Restricted Isometry Property? Compressed sensing (CS) seeks to recover an unknown vector with $N$ entries by making far fewer than $N$ measurements; it posits that the number of CS measurements should be comparable to the information content of the vector, not simply $N$. CS combines directly the important task of compression with the measurement task. Since its introduction in 2004 there have been hundreds of papers on CS, a large fraction of which develop algorithms to recover a signal from its compressed measurements. Because of the paradoxical nature of CS—exact reconstruction from seemingly undersampled measurements—it is crucial for acceptance of an algorithm that rigorous analyses verify the degree of undersampling the algorithm permits. The restricted isometry property (RIP) has become the dominant tool used for the analysis in such cases. We present here an asymmetric form of RIP that gives tighter bounds than the usual symmetric one. We give the best known bounds on the RIP constants for matrices from the Gaussian ensemble. Our derivations illustrate the way in which the combinatorial nature of CS is controlled. Our quantitative bounds on the RIP allow precise statements as to how aggressively a signal can be undersampled, the essential question for practitioners. We also document the extent to which RIP gives precise information about the true performance limits of CS, by comparison with approaches from high-dimensional geometry.
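For concreteness, the asymmetric form of the RIP mentioned above replaces the single constant $\delta_k$ with separate lower and upper constants (standard definition, paraphrased here):

```latex
% Asymmetric restricted isometry constants: for all k-sparse x,
(1-\delta_k^{L})\,\|x\|_2^2 \;\le\; \|Ax\|_2^2 \;\le\; (1+\delta_k^{U})\,\|x\|_2^2
```

The usual symmetric constant corresponds to taking $\delta_k = \max(\delta_k^{L}, \delta_k^{U})$, which can be much looser on one side; this is what makes the asymmetric bounds tighter.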
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies? Suppose we are given a vector $f$ in a class $\mathcal{F} \subseteq \mathbb{R}^N$, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about $f$ to be able to recover $f$ to within precision $\epsilon$ in the Euclidean ($\ell_2$) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct $f$ to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the $n$th largest entry of the vector $|f|$ (or of its coefficients in a fixed basis) obeys $|f|_{(n)} \le R \cdot n^{-1/p}$, where $R>0$ and $p>0$. Suppose that we take measurements $y_k = \langle f, X_k \rangle$, $k=1,\dots,K$, where the $X_k$ are $N$-dimensional Gaussian vectors with independent standard normal entries. Then for each $f$ obeying the decay estimate above for some $0<p<1$ and with overwhelming probability, our reconstruction $f^\sharp$, defined as the solution to the constraints $y_k = \langle f^\sharp, X_k \rangle$ with minimal $\ell_1$ norm, obeys $\|f - f^\sharp\|_{\ell_2} \le C_p \cdot R \cdot (K/\log N)^{-r}$, $r = 1/p - 1/2$. There is a sense in which this result is optimal; it is generally impossible to obtain a higher accuracy from any set of $K$ measurements whatsoever. The methodology extends to various other random measurement ensembles; for example, we show that similar results hold if one observes a few randomly sampled Fourier coefficients of $f$. In fact, the results are quite general and require only two hypotheses on the measurement ensemble which are detailed.
Counter braids: a novel counter architecture for per-flow measurement Fine-grained network measurement requires routers and switches to update large arrays of counters at very high link speed (e.g. 40 Gbps). A naive algorithm needs an infeasible amount of SRAM to store both the counters and a flow-to-counter association rule, so that arriving packets can update corresponding counters at link speed. This has made accurate per-flow measurement complex and expensive, and motivated approximate methods that detect and measure only the large flows. This paper revisits the problem of accurate per-flow measurement. We present a counter architecture, called Counter Braids, inspired by sparse random graph codes. In a nutshell, Counter Braids "compresses while counting". It solves the central problems (counter space and flow-to-counter association) of per-flow measurement by "braiding" a hierarchy of counters with random graphs. Braiding results in drastic space reduction by sharing counters among flows; and using random graphs generated on-the-fly with hash functions avoids the storage of flow-to-counter association. The Counter Braids architecture is optimal (albeit with a complex decoder) as it achieves the maximum compression rate asymptotically. For implementation, we present a low-complexity message passing decoding algorithm, which can recover flow sizes with essentially zero error. Evaluation on Internet traces demonstrates that almost all flow sizes are recovered exactly with only a few bits of counter space per flow.
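The update path of such a shared-counter structure is simple to sketch; the toy code below shows only the first layer of hashing flows into a small counter array. The sizes, hash choice, and the omitted message-passing decoder are all illustrative, not the Counter Braids implementation.

```python
# Toy illustration of sharing a small counter array among many flows via
# hash functions, in the spirit of the first layer of Counter Braids.
import hashlib

NUM_COUNTERS = 1024
NUM_HASHES = 3
counters = [0] * NUM_COUNTERS

def _hash(flow_id: str, salt: int) -> int:
    h = hashlib.sha1(f"{salt}:{flow_id}".encode()).hexdigest()
    return int(h, 16) % NUM_COUNTERS

def update(flow_id: str, nbytes: int) -> None:
    # each arriving packet increments every counter the flow hashes to
    for salt in range(NUM_HASHES):
        counters[_hash(flow_id, salt)] += nbytes
```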
On the optimum of Delsarte's linear program We are interested in the maximal size A ( n ,  d ) of a binary error-correcting code of length n and distance d , or, alternatively, in the best packing of balls of radius ( d −1)/2 in the n -dimensional Hamming space. The best known lower bound on A ( n ,  d ) is due to Gilbert and Varshamov and is obtained by a covering argument. The best know upper bound is due to McEliece, Rodemich, Rumsey, and Welch, and is obtained using Delsarte's linear programming approach. It is not known whether this is the best possible bound one can obtain from Delsarte's linear program. We show that the optimal upper bound obtainable from Delsarte's linear program will strictly exceed the Gilbert–Varshamov lower bound. In fact, it will be at least as big as the average of the Gilbert–Varshamov bound and the McEliece, Rodemich, Rumsey, and Welch upper bound. Similar results hold for constant weight binary codes. The average of the Gilbert–Varshamov bound and the McEliece, Rodemich, Rumsey, and Welch upper bound might be the true value of Delsarte's bound. We provide some evidence for this conjecture.
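For reference, the Gilbert–Varshamov lower bound referred to above is, in one common form for binary codes of length $n$ and minimum distance $d$,

```latex
% Gilbert-Varshamov lower bound on the maximal code size A(n,d):
A(n,d) \;\ge\; \frac{2^{n}}{\sum_{i=0}^{d-1} \binom{n}{i}}
```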
On intuitionistic gradation of openness In this paper, we introduce a concept of intuitionistic gradation of openness on fuzzy subsets of a nonempty set X and define an intuitionistic fuzzy topological space. We prove that the category of intuitionistic fuzzy topological spaces and gradation preserving mappings is a topological category. We study compactness of intuitionistic fuzzy topological spaces and prove an analogue of Tychonoff's theorem.
Stochastic computational models for accurate reliability evaluation of logic circuits As reliability becomes a major concern with the continuous scaling of CMOS technology, several computational methodologies have been developed for the reliability evaluation of logic circuits. Previous accurate analytical approaches, however, have a computational complexity that generally increases exponentially with the size of a circuit, making the evaluation of large circuits intractable. This paper presents novel computational models based on stochastic computation, in which probabilities are encoded in the statistics of random binary bit streams, for the reliability evaluation of logic circuits. A computational approach using the stochastic computational models (SCMs) accurately determines the reliability of a circuit with its precision only limited by the random fluctuations inherent in the representation of random binary bit streams. The SCM approach has a linear computational complexity and is therefore scalable for use for any large circuits. Our simulation results demonstrate the accuracy and scalability of the SCM approach, and suggest its possible applications in VLSI design.
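The encoding idea is easy to demonstrate: a probability becomes the fraction of 1s in a random bit stream, and a logic gate acting on independent streams performs arithmetic on the probabilities. The snippet below is only a toy illustration of that encoding (stream length and seed are arbitrary), not the SCM reliability framework itself.

```python
# Probabilities encoded as random bit streams; AND of independent streams
# multiplies the encoded probabilities.
import numpy as np

rng = np.random.default_rng(0)
L = 100_000                        # stream length controls precision

def encode(p: float) -> np.ndarray:
    return rng.random(L) < p       # Bernoulli(p) bit stream

def decode(bits: np.ndarray) -> float:
    return bits.mean()

a, b = encode(0.8), encode(0.6)
print(decode(a & b))               # approximately 0.8 * 0.6 = 0.48
```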
Fuzzy OWA model for information security risk management One of the methods for information security risk assessment is the substantiated choice and realization of countermeasures against threats. A situational fuzzy OWA model of a multicriteria decision making problem concerning the choice of countermeasures for reducing information security risks is proposed. The proposed model makes it possible to modify the associated weights of criteria based on the information entropy with respect to the aggregation situation. The advantage of the model is the continuous improvement of the weights of the criteria and the aggregation of experts’ opinions depending on the parameter characterizing the aggregation situation.
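As a reminder of the underlying aggregation operator, a plain (non-fuzzy, non-situational) OWA combines the sorted criterion scores with position weights; the sketch below shows only that basic step, with made-up numbers.

```python
# Ordered weighted averaging: sort scores in decreasing order, then take a
# weighted sum with position weights that sum to one.
import numpy as np

def owa(scores, weights):
    ordered = np.sort(scores)[::-1]          # descending reorder step
    return float(np.dot(weights, ordered))

# weights leaning toward the largest scores (an "optimistic" aggregation)
print(owa([0.2, 0.9, 0.5], [0.5, 0.3, 0.2]))
```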
1.052159
0.055
0.017272
0.011
0.003921
0.0022
0.000528
0.00011
0.000007
0
0
0
0
0
Preference Modelling This paper provides the reader with a presentation of the fundamental notions of preference modelling as well as some recent results in this field. Preference modelling is an inevitable step in a variety of fields: economy, sociology, psychology, mathematical programming, even medicine, archaeology, and obviously decision analysis. Our notation and some basic definitions, such as those of binary relation, properties and ordered sets, are presented at the beginning of the paper. We start by discussing different reasons for constructing a model of preference. We then go through a number of issues that influence the construction of preference models. Different formalisations besides classical logic, such as fuzzy sets and non-classical logics, become necessary. We then present different types of preference structures reflecting the behavior of a decision-maker: classical, extended and valued ones. It is relevant to have a numerical representation of preferences: functional representations, value functions. The concepts of thresholds and minimal representation are also introduced in this section. In section 7, we briefly explore the concept of deontic logic (logic of preference) and other formalisms associated with "compact representation of preferences" introduced for spe-
Consistent models of transitivity for reciprocal preferences on a finite ordinal scale In this paper we consider a decision maker who shows his/her preferences for different alternatives through a finite set of ordinal values. We analyze the problem of consistency taking into account some transitivity properties within this framework. These properties are based on the very general class of conjunctors on the set of ordinal values. Each reciprocal preference relation on a finite ordinal scale has both a crisp preference and a crisp indifference relation associated to it in a natural way. Taking this into account, we have started by analyzing the problem of propagating transitivity from the preference relation on a finite ordinal scale to the crisp preference and indifference relations. After that, we carried out the analysis in the opposite direction. We provide some necessary and sufficient conditions for that propagation, and therefore, we characterize the consistent class of conjunctors in each direction.
Defining the Borda count in a linguistic decision making context Different kinds of decision rules have been successfully implemented under a linguistic approach. This paper aims the same goal for the Borda count, a well-known procedure with some interesting features. In order to this, two ways of extension from the Borda rule to a linguistic framework are proposed taking into account all the agents' opinions or only the favorable ones for each alternative when compared with each other. In the two cases, both individual and collective Borda counts are analyzed, asking for properties as good as those of the original patterns.
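For context, the classical (non-linguistic) Borda count that the paper extends can be computed as below; the function name and example rankings are invented for illustration.

```python
# Compact Borda count over complete rankings: with m alternatives, the
# alternative ranked r-th by a voter receives m - r points.
from collections import defaultdict

def borda(rankings):
    """rankings: list of complete rankings, each listed best-first."""
    scores = defaultdict(int)
    for ranking in rankings:
        m = len(ranking)
        for r, alt in enumerate(ranking):
            scores[alt] += m - 1 - r     # top gets m-1 points, last gets 0
    return dict(scores)

print(borda([["a", "b", "c"], ["b", "a", "c"], ["a", "c", "b"]]))
```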
From Computing with Numbers to Computing with Words - From Manipulation of Measurements to Manipulation of Perceptions Interest in issues relating to consciousness has grown markedly during the last several years. And yet, nobody can claim that consciousness is a well-understood concept that lends itself to precise analysis. It may be argued that, as a concept, consciousness is much too complex to fit into the conceptual structure of existing theories based on Aristotelian logic and probability theory. An approach suggested in this paper links consciousness to perceptions and perceptions to their descriptors in a natural language. In this way, those aspects of consciousness which relate to reasoning and concept formation are linked to what is referred to as the methodology of computing with words (CW). Computing, in its usual sense, is centered on manipulation of numbers and symbols. In contrast, computing with words, or CW for short, is a methodology in which the objects of computation are words and propositions drawn from a natural language (e.g., small, large, far, heavy, not very likely, the price of gas is low and declining, Berkeley is near San Francisco, it is very unlikely that there will be a significant increase in the price of oil in the near future, etc.). Computing with words is inspired by the remarkable human capability to perform a wide variety of physical and mental tasks without any measurements and any computations. Familiar examples of such tasks are parking a car, driving in heavy traffic, playing golf, riding a bicycle, understanding speech, and summarizing a story. Underlying this remarkable capability is the brain's crucial ability to manipulate perceptions--perceptions of distance, size, weight, color, speed, time, direction, force, number, truth, likelihood, and other characteristics of physical and mental objects. Manipulation of perceptions plays a key role in human recognition, decision and execution processes. As a methodology, computing with words provides a foundation for a computational theory of perceptions: a theory which may have an important bearing on how humans make--and machines might make--perception-based rational decisions in an environment of imprecision, uncertainty, and partial truth. A basic difference between perceptions and measurements is that, in general, measurements are crisp, whereas perceptions are fuzzy. One of the fundamental aims of science has been and continues to be that of progressing from perceptions to measurements. Pursuit of this aim has led to brilliant successes. We have sent men to the moon; we can build computers that are capable of performing billions of computations per second; we have constructed telescopes that can explore the far reaches of the universe; and we can date the age of rocks that are millions of years old. But alongside the brilliant successes stand conspicuous underachievements and outright failures. We cannot build robots that can move with the agility of animals or humans; we cannot automate driving in heavy traffic; we cannot translate from one language to another at the level of a human interpreter; we cannot create programs that can summarize non-trivial stories; our ability to model the behavior of economic systems leaves much to be desired; and we cannot build machines that can compete with children in the performance of a wide variety of physical and cognitive tasks. 
It may be argued that underlying the underachievements and failures is the unavailability of a methodology for reasoning and computing with perceptions rather than measurements. An outline of such a methodology--referred to as a computational theory of perceptions--is presented in this paper. The computational theory of perceptions (CTP) is based on the methodology of CW. In CTP, words play the role of labels of perceptions, and, more generally, perceptions are expressed as propositions in a natural language. CW-based techniques are employed to translate propositions expressed in a natural language into what is called the Generalized Constraint Language (GCL). In this language, the meaning of a proposition is expressed as a generalized constraint, X isr R, where X is the constrained variable, R is the constraining relation, and isr is a variable copula in which r is an indexing variable whose value defines the way in which R constrains X. Among the basic types of constraints are possibilistic, veristic, probabilistic, random set, Pawlak set, fuzzy graph, and usuality. The wide variety of constraints in GCL makes GCL a much more expressive language than the language of predicate logic. In CW, the initial and terminal data sets, IDS and TDS, are assumed to consist of propositions expressed in a natural language. These propositions are translated, respectively, into antecedent and consequent constraints. Consequent constraints are derived from antecedent constraints through the use of rules of constraint propagation. The principal constraint propagation rule is the generalized extension principle. (ABSTRACT TRUNCATED)
Linguistic decision analysis: steps for solving decision problems under linguistic information A study on the steps to follow in linguistic decision analysis is presented in a context of multi-criteria/multi-person decision making. Three steps are established for solving a multi-criteria decision making problem under linguistic information: (i) the choice of the linguistic term set with its semantic in order to express the linguistic performance values according to all the criteria, (ii) the choice of the aggregation operator of linguistic information in order to aggregate the linguistic performance values, and (iii) the choice of the best alternatives, which is made up by two phases: (a) the aggregation of linguistic information for obtaining a collective linguistic performance value on the alternatives, and (b) the exploitation of the collective linguistic performance value in order to establish a rank ordering among the alternatives for choosing the best alternatives. Finally, an example is shown.
Applying a direct multi-granularity linguistic and strategy-oriented aggregation approach on the assessment of supply performance Supply performance exhibits continuous behavior over time, covering past, present and future horizons. It therefore carries distinct uncertainty at the level of individual behavior, which is inadequate to assess by quantification alone. This study utilizes linguistic variables instead of numerical variables to offset the inaccuracy of quantification, and employs a linguistic scale fitted to the characteristics of supply behavior to enhance applicability. Furthermore, uniformity is introduced to transform linguistic information from different scales into a common one. Finally, the linguistic ordered weighted averaging operator with maximal entropy is applied directly to aggregate the combination of linguistic information and product strategy, so that the assessment results meet enterprise requirements and emulate human mental decision making in a linguistic manner.
Fuzzy Linguistic PERT A model for Program Evaluation and Review Technique (PERT) under fuzzy linguistic contexts is introduced. In this fuzzy linguistic PERT network model, each activity duration is represented by a fuzzy linguistic description. Aggregation and comparison of the estimated linguistic expectations of activity durations are manipulated by the techniques of computing with words (CW). To provide suitable contexts for this purpose, we first introduce several variations of basic linguistic labels of a linguistic variable, such as weighted linguistic labels, generalized linguistic labels and weighted generalized linguistic labels etc., and then based on the notion of canonical characteristic value (CCV) function of a linguistic variable, we develop some related CW techniques for aggregation and comparison of these linguistic labels. Afterward, using a computing technique of linguistic probability introduced by Zadeh and based on the new developed CW techniques for weighted generalized linguistic labels, we investigate the associated linguistic expectation PERT network of a fuzzy linguistic PERT network. Also, throughout the paper, several examples are used to illustrate related notions and applications
Belief rule-base inference methodology using the evidential reasoning Approach-RIMER In this paper, a generic rule-base inference methodology using the evidential reasoning (RIMER) approach is proposed. Existing knowledge-base structures are first examined, and knowledge representation schemes under uncertainty are then briefly analyzed. Based on this analysis, a new knowledge representation scheme in a rule base is proposed using a belief structure. In this scheme, a rule base is designed with belief degrees embedded in all possible consequents of a rule. Such a rule base is capable of capturing vagueness, incompleteness, and nonlinear causal relationships, while traditional if-then rules can be represented as a special case. Other knowledge representation parameters such as the weights of both attributes and rules are also investigated in the scheme. In an established rule base, an input to an antecedent attribute is transformed into a belief distribution. Subsequently, inference in such a rule base is implemented using the evidential reasoning (ER) approach. The scheme is further extended to inference in hierarchical rule bases. A numerical study is provided to illustrate the potential applications of the proposed methodology.
Artificial Paranoia
A construction of sound semantic linguistic scales using 4-tuple representation of term semantics. Data semantics plays a fundamental role in computer science, in general, and in computing with words, in particular. The semantics of words arises as a sophisticated problem, since words being actually vague linguistic terms are pieces of information characterized by impreciseness, incompleteness, uncertainty and/or vagueness. The qualitative semantics and the quantitative semantics are two aspects of vague linguistic information, which are closely related. However, the qualitative semantics of linguistic terms, and even the qualitative semantics of the symbolic approaches, seem to be not elaborated on directly in the literature. In this study, we propose an interpretation of the inherent order-based semantics of terms through their qualitative semantics modeled by hedge algebra structures. The quantitative semantics of terms are developed based on the quantification of hedge algebras. With this explicit approach, we propose two concepts of assessment scales to address decision problems: linguistic scales used for representing expert linguistic assessments and semantic linguistic scales based on 4-tuple linguistic representation model, which forms a formalized structure useful for computing with words. An example of a simple multi-criteria decision problem is examined by running a comparative study. We also analyze the main advantages of the proposed approach.
Fuzzy spatial relationships for image processing and interpretation: a review In spatial reasoning, relationships between spatial entities play a major role. In image interpretation, computer vision and structural recognition, the management of imperfect information and of imprecision constitutes a key point. This calls for the framework of fuzzy sets, which exhibits nice features to represent spatial imprecision at different levels, imprecision in knowledge and knowledge representation, and which provides powerful tools for fusion, decision-making and reasoning. In this paper, we review the main fuzzy approaches for defining spatial relationships including topological (set relationships, adjacency) and metrical relations (distances, directional relative position).
Guidelines for Constructing Reusable Domain Ontologies The growing interest in ontologies is concomitant with the increasing use of agent systems in user environments. Ontologies have established themselves as schemas for encoding knowledge about a particular domain, which can be interpreted by both humans and agents to accomplish a task in cooperation. However, construction of the domain ontologies is a bottleneck, and planning towards reuse of domain ontologies is essential. Current methodologies concerned with ontology development have not dealt with explicit reuse of domain ontologies. This paper presents guidelines for systematic construction of reusable domain ontologies. A purpose-driven approach has been adopted. The guidelines have been used for constructing ontologies in the Experimental High-Energy Physics domain.
Effective corner-based techniques for variation-aware IC timing verification Traditional integrated circuit timing sign-off consists of verifying a design for a set of carefully chosen combinations of process and operating parameter extremes, referred to as corners. Such corners are usually chosen based on the knowledge of designers and process engineers, and are expected to cover the worst-case fabrication and operating scenarios. With increasingly more detailed attention to variability, the number of potential conditions to examine can be exponentially large, more than is possible to handle with straightforward exhaustive analysis. This paper presents efficient yet exact techniques for computing worst-delay and worst-slack corners of combinational and sequential digital integrated circuits. Results show that the proposed techniques enable efficient and accurate detection of failing conditions while accounting for timing variability due to process variations.
Bacterial Community Reconstruction Using A Single Sequencing Reaction Bacteria are the unseen majority on our planet, with millions of species and comprising most of the living protoplasm. While current methods enable in-depth study of a small number of communities, a simple tool for breadth studies of bacterial population composition in a large number of samples is lacking. We propose a novel approach for reconstruction of the composition of an unknown mixture of bacteria using a single Sanger-sequencing reaction of the mixture. This method is based on compressive sensing theory, which deals with reconstruction of a sparse signal using a small number of measurements. Utilizing the fact that in many cases each bacterial community is comprised of a small subset of the known bacterial species, we show the feasibility of this approach for determining the composition of a bacterial mixture. Using simulations, we show that sequencing a few hundred base-pairs of the 16S rRNA gene sequence may provide enough information for reconstruction of mixtures containing tens of species, out of tens of thousands, even in the presence of realistic measurement noise. Finally, we show initial promising results when applying our method for the reconstruction of a toy experimental mixture with five species. Our approach may have a potential for a practical and efficient way for identifying bacterial species compositions in biological samples.
1.110526
0.056568
0.055347
0.002673
0.001033
0.000328
0.000178
0.000099
0.00002
0.000002
0
0
0
0
A Quality-of-Experience Index for Streaming Video. With the rapid growth of streaming media applications, there has been a strong demand of quality-of-experience (QoE) measurement and QoE-driven video delivery technologies. Most existing methods rely on bitrate and global statistics of stalling events for QoE prediction. This is problematic for two reasons. First, using the same bitrate to encode different video content results in drastically diff...
Analysis and design of the Google congestion control for web real-time communication (WebRTC) Video conferencing applications require low latency and high bandwidth. Standard TCP is not suitable for video conferencing since its reliability and in-order delivery mechanisms induce large latency. Recently, the idea of using the delay gradient to infer congestion has reappeared and is gaining momentum. In this paper we present an algorithm that is based on estimating, through a Kalman filter, the end-to-end one-way delay variation experienced by packets traveling from a sender to a destination. This estimate is compared to an adaptive threshold to dynamically throttle the sending rate. The control algorithm has been implemented over the RTP/RTCP protocol and is currently used in Google Hangouts and in the Chrome WebRTC stack. Experiments have been carried out to evaluate the algorithm performance in the case of variable link capacity, presence of heterogeneous or homogeneous concurrent traffic, and backward path traffic.
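A heavily simplified sketch of the delay-gradient logic is given below: a scalar Kalman filter tracks the one-way delay variation and the filtered estimate is compared against a threshold. The gains, the fixed threshold, and the three-state outcome are placeholders; the deployed algorithm uses an adaptive threshold and a more elaborate rate-control state machine.

```python
# Simplified delay-gradient congestion detector (illustrative constants only).
class DelayGradientController:
    def __init__(self):
        self.m_hat = 0.0          # filtered delay-gradient estimate (ms)
        self.var = 1.0            # estimate variance
        self.q = 1e-3             # process noise
        self.r = 0.1              # measurement noise
        self.threshold = 12.5     # ms; adaptive in the real algorithm

    def update(self, measured_gradient_ms: float) -> str:
        # scalar Kalman predict/update for a constant-state model
        self.var += self.q
        k = self.var / (self.var + self.r)
        self.m_hat += k * (measured_gradient_ms - self.m_hat)
        self.var *= (1 - k)
        if self.m_hat > self.threshold:
            return "decrease"     # over-use detected: throttle the sending rate
        if self.m_hat < -self.threshold:
            return "hold"         # under-use
        return "increase"
```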
C3: Internet-Scale Control Plane for Video Quality Optimization.
SARA: Segment Aware Rate Adaptation Algorithm for Dynamic Adaptive Streaming over HTTP Dynamic adaptive HTTP (DASH) based streaming is steadily becoming the most popular online video streaming technique. DASH streaming provides seamless playback by adapting the video quality to the network conditions during the video playback. A DASH server supports adaptive streaming by hosting multiple representations of the video and each representation is divided into small segments of equal playback duration. At the client end, the video player uses an adaptive bitrate selection (ABR) algorithm to decide the bitrate to be selected for each segment depending on the current network conditions. Currently proposed ABR algorithms ignore the fact that the segment sizes significantly vary for a given video bitrate. Due to this, even though an ABR algorithm is able to measure the network bandwidth, it may fail to predict the time to download the next segment. In this paper, we propose a segment-aware rate adaptation (SARA) algorithm that considers the segment size variation in addition to the estimated path bandwidth and the current buffer occupancy to accurately predict the time required to download the next segment. We also developed an open source Python based emulated DASH video player, which was used to compare the performance of SARA and a basic ABR. Our results show that SARA provides a significant gain over the basic algorithm in the video quality delivered, without noticeably impacting the video switching rates.
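The key prediction step, choosing the highest bitrate whose actual next-segment size can be fetched before the buffer drains, can be sketched as follows. This is only an illustration of the idea, not the full SARA algorithm, which also uses buffer thresholds to switch between different adaptation phases.

```python
# Segment-aware bitrate choice: use the real size of the next segment at each
# bitrate, not just the nominal bitrate, to predict the download time.
def choose_bitrate(segment_sizes_bits, bandwidth_bps, buffer_s):
    """segment_sizes_bits: {bitrate: size in bits of the next segment}."""
    feasible = [br for br, size in segment_sizes_bits.items()
                if size / bandwidth_bps <= buffer_s]   # predicted download time
    return max(feasible) if feasible else min(segment_sizes_bits)

print(choose_bitrate({1_000_000: 4_200_000, 3_000_000: 13_500_000},
                     bandwidth_bps=2_500_000, buffer_s=4.0))
```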
QoE-Based SVC Layer Dropping in LTE Networks Using Content-Aware Layer Priorities The increasing popularity of mobile video streaming applications has led to a high volume of video traffic in mobile networks. As the base station, for instance, the eNB in LTE networks, has limited physical resources, it can be overloaded by this traffic. This problem can be addressed by using Scalable Video Coding (SVC), which allows the eNB to drop layers of the video streams to dynamically adapt the bitrate. The impact of bitrate adaptation on the Quality of Experience (QoE) for the users depends on the content characteristics of videos. As the current mobile network architectures do not support the eNB in obtaining video content information, QoE optimization schemes with explicit signaling of content information have been proposed. These schemes, however, require the eNB or a specific optimization module to process the video content on the fly in order to extract the required information. This increases the computation and signaling overhead significantly, raising the OPEX for mobile operators. To address this issue, in this article, a content-aware (CA) priority marking and layer dropping scheme is proposed. The CA priority indicates a transmission order for the layers of all transmitted videos across all users, resulting from a comparison of their utility versus rate characteristics. The CA priority values can be determined at the P-GW on the fly, allowing mobile operators to control the priority marking process. Alternatively, they can be determined offline at the video servers, avoiding real-time computation in the core network. The eNB can perform content-aware SVC layer dropping using only the priority values. No additional content processing is required. The proposed scheme is lightweight both in terms of architecture and computation. The improvement in QoE is substantial and very close to the performance obtained with the computation and signaling-intensive QoE optimization schemes.
Congestion-aware edge caching for adaptive video streaming in Information-Centric Networks This paper proposes a network-aware resource management scheme that improves the quality of experience (QoE) for adaptive video streaming in CDNs and Information-Centric Networks (ICN) in general, and Dynamic Adaptive Streaming over HTTP (DASH) in particular. By utilizing the DASH manifest, the network (by way of a logically centralized controller) computes the available link resources and schedules the chunk dissemination to edge caches ahead of the end-user's requests. Our approach is optimized for multi-rate DASH videos. We implemented our resource management scheme, and demonstrated that in the scenario when network conditions evolve quickly, our approach can maintain smooth high quality playback. We show on actual video server data and in our own simulation environment that a significant reduction in peak bandwidth of 20% can be achieved using our approach.
Adaptive Bitrate Selection: A Survey. HTTP adaptive streaming (HAS) is the most recent attempt regarding video quality adaptation. It enables cheap and easy to implement streaming technology without the need for a dedicated infrastructure. By using a combination of TCP and HTTP it has the advantage of reusing all the existing technologies designed for ordinary web. Equally important is that HAS traffic passes through firewalls and wor...
HDTV Subjective Quality of H.264 vs. MPEG-2, With and Without Packet Loss The intent of H.264 (MPEG-4 Part 10) was to achieve equivalent quality to previous standards (e.g., MPEG-2) at no more than half the bit-rate. H.264 is commonly felt to have achieved this objective. This document presents results of an HDTV subjective experiment that compared the perceptual quality of H.264 to MPEG-2. The study included both the coding-only impairment case and a coding plus packet loss case, where the packet loss was representative of a well managed network (0.02% random packet loss rate). Subjective testing results partially uphold the commonly held claim that H.264 provides quality similar to MPEG-2 at no more than half the bit rate for the coding-only case. However, the advantage of H.264 diminishes with increasing bit rate and all but disappears when one reaches about 18 Mbps. For the packet loss case, results from the study indicate that H.264 suffers a large decrease in quality whereas MPEG-2 undergoes a much smaller decrease.
A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods. Our taxonomy is designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can easily be extended to include new algorithms. We have also produced several new multi-frame stereo data sets with ground truth and are making both the code and data sets available on the Web. Finally, we include a comparative evaluation of a large set of today's best-performing stereo algorithms.
Which logic is the real fuzzy logic? This paper is a contribution to the discussion of the problem of whether there is a fuzzy logic that can be considered as the real fuzzy logic. We give reasons for taking IMTL, BL, ŁΠ and EvŁ (fuzzy logic with evaluated syntax) as those fuzzy logics that should indeed be taken as the real fuzzy logics.
Graphoid properties of qualitative possibilistic independence relations Independence relations play an important role in uncertain reasoning based on Bayesian networks. In particular, they are useful in decomposing joint distributions into more elementary local ones. Recently, in a possibility theory framework, several qualitative independence relations have been proposed, where uncertainty is encoded by means of a complete pre-order between states of the world. This paper studies the well-known graphoid properties of these qualitative independences. Contrary to the probabilistic independence, several qualitative independence relations are not necessarily symmetric. Therefore, we also analyze the symmetric counterparts of graphoid properties (called reverse graphoid properties).
The enhancing of efficiency of the harmonic balance analysis by adaptation of preconditioner to circuit nonlinearity Krylov subspace techniques in harmonic balance simulations become increasingly ineffective when applied to strongly nonlinear circuits. This limitation is particularly important in the simulation if the circuit has components being operated in a very nonlinear region. Even if the circuit contains only a few very nonlinear components, Krylov methods using standard preconditioners can become ineffective. To overcome this problem, we present two adaptive preconditioners that dynamically exploit the properties of the harmonic balance Jacobian. With these techniques we have been able to retain the advantages of Krylov methods even for strongly nonlinear circuits. Some numerical experiments illustrating the techniques are presented.
Reasoning and Learning in Probabilistic and Possibilistic Networks: An Overview Graphical modelling is a powerful framework for reasoning under uncertainty. We give an overview on the semantical background and relevant properties of probabilistic and possibilistic networks, respectively, and consider knowledge representation and independence as well as evidence propagation and learning such networks from data.
Generalised Interval-Valued Fuzzy Soft Set. We introduce the concept of generalised interval-valued fuzzy soft set and its operations and study some of their properties. We give applications of this theory in solving a decision making problem. We also introduce a similarity measure of two generalised interval-valued fuzzy soft sets and discuss its application in a medical diagnosis problem. Keywords: fuzzy set; soft set; fuzzy soft set; generalised fuzzy soft set; generalised interval-valued fuzzy soft set; interval-valued fuzzy set; interval-valued fuzzy soft set.
1.053
0.025
0.013
0.008515
0.002
0.001
0.0005
0.000043
0
0
0
0
0
0
Uncertain probabilities III: the continuous case We consider probability density functions where some of the parameters are uncertain. We model these uncertainties using fuzzy numbers producing fuzzy probability density functions. In particular, we look at the fuzzy normal, fuzzy uniform, and the fuzzy negative exponential and show how to use them to compute fuzzy probabilities. We also use the fuzzy normal to approximate the fuzzy binomial. Our application is to inventory control (the economic order quantity model) where demand is given by a fuzzy normal probability density.
Uncertain probabilities I: the discrete case We consider discrete (finite) probability distributions where some of the probability values are uncertain. We model these uncertainties using fuzzy numbers. Then, employing restricted fuzzy arithmetic, we derive the basic laws of fuzzy (uncertain) probability theory. Applications are to the binomial probability distribution and queuing theory.
A theory of independent fuzzy probability for system reliability Fuzzy fault trees provide a powerful and computationally efficient technique for developing fuzzy probabilities based on independent inputs. The probability of any event that can be described in terms of a sequence of independent unions, intersections, and complements may be calculated by a fuzzy fault tree. Unfortunately, fuzzy fault trees do not provide a complete theory: many events of substantial practical interest cannot be described only by independent operations. Thus, the standard fuzzy extension (based on fuzzy fault trees) is not complete since not all events are assigned a fuzzy probability. Other complete extensions have been proposed, but these extensions are not consistent with the calculations from fuzzy fault trees. We propose a new extension of crisp probability theory. Our model is based on n independent inputs, each with a fuzzy probability. The elements of our sample space describe exactly which of the n input events did and did not occur. Our extension is complete since a fuzzy probability is assigned to every subset of the sample space. Our extension is also consistent with all calculations that can be arranged as a fault tree. Our approach allows the reliability analyst to develop complete and consistent fuzzy reliability models from existing crisp reliability models. This allows a comprehensive analysis of the system. Computational algorithms are provided both to extend existing models and develop new models. The technique is demonstrated on a reliability model of a three-stage industrial process
Constrained fuzzy arithmetic: Basic questions and some answers The purpose of this paper is to critically examine the use of fuzzy arithmetic in dealing with fuzzy systems. It is argued that the well-known overestimation and other questionable results of standard fuzzy arithmetic have one common cause: constraints regarding linguistic variables involved are not taken into account. A general formulation of constrained fuzzy arithmetic – a nonstandard fuzzy arithmetic that takes into account these constraints – is presented and its basic characteristics are examined. More specific characteristics of constrained fuzzy arithmetic are then investigated for some common types of constraints.
From approximative to descriptive fuzzy classifiers This paper presents an effective and efficient approach for translating fuzzy classification rules that use approximative sets to rules that use descriptive sets and linguistic hedges of predefined meaning. It works by first generating rules that use approximative sets from training data, and then translating the resulting approximative rules into descriptive ones. Hedges that are useful for supporting such translations are provided. The translated rules are functionally equivalent to the original approximative ones, or a close equivalent given search time restrictions, while reflecting their underlying preconceived meaning. Thus, fuzzy, descriptive classifiers can be obtained by taking advantage of any existing approach to approximative modeling, which is generally efficient and accurate, while employing rules that are comprehensible to human users. Experimental results are provided and comparisons to alternative approaches given.
A Note on Fuzzy Sets
A design methodology for fuzzy system interfaces Conceptually, a fuzzy system interacting with a numerical environment has three components: a numeric/linguistic interface, a linguistic processing unit, and a linguistic/numeric interface. At these interfaces, membership functions representing linguistic terms play a top role both for the linguistic meaning provided and for the pre/post information processing introduced to the fuzzy system. Considering these issues, a set of membership function properties is postulated. Furthermore, an expert-free interface design methodology able to meet these properties, and based on the concept of optimal interfaces, is proposed. This concept simply states an equivalence between information format (numeric and linguistic), thereby making the methodology appealing from the applicational point of view. An algorithm is developed, and brief notes on selected applications are outlined stressing relevant issues of the proposed methodology
Universally composable security: a new paradigm for cryptographic protocols We propose a novel paradigm for defining security of cryptographic protocols, called universally composable security. The salient property of universally composable definitions of security is that they guarantee security even when a secure protocol is composed of an arbitrary set of protocols, or more generally when the protocol is used as a component of an arbitrary system. This is an essential property for maintaining security of cryptographic protocols in complex and unpredictable environments such as the Internet. In particular, universally composable definitions guarantee security even when an unbounded number of protocol instances are executed concurrently in an adversarially controlled manner, they guarantee non-malleability with respect to arbitrary protocols, and more. We show how to formulate universally composable definitions of security for practically any cryptographic task. Furthermore, we demonstrate that practically any such definition can be realized using known techniques, as long as only a minority of the participants are corrupted. We then proceed to formulate universally composable definitions of a wide array of cryptographic tasks, including authenticated and secure communication, key-exchange, public-key encryption, signature, commitment, oblivious transfer, zero knowledge and more. We also make initial steps towards studying the realizability of the proposed definitions in various settings.
Computing with words in decision making: foundations, trends and prospects Computing with Words (CW) methodology has been used in several different environments to narrow the differences between human reasoning and computing. As Decision Making is a typical human mental process, it seems natural to apply the CW methodology in order to create and enrich decision models in which the information that is provided and manipulated has a qualitative nature. In this paper we make a review of the developments of CW in decision making. We begin with an overview of the CW methodology and we explore different linguistic computational models that have been applied to the decision making field. Then we present an historical perspective of CW in decision making by examining the pioneer papers in the field along with its most recent applications. Finally, some current trends, open questions and prospects in the topic are pointed out.
Genetic tuning of fuzzy rule deep structures preserving interpretability and its interaction with fuzzy rule set reduction Tuning fuzzy rule-based systems for linguistic fuzzy modeling is an interesting and widely developed task. It involves adjusting some of the components of the knowledge base without completely redefining it. This contribution introduces a genetic tuning process for jointly fitting the fuzzy rule symbolic representations and the meaning of the involved membership functions. To adjust the former component, we propose the use of linguistic hedges to perform slight modifications keeping a good interpretability. To alter the latter component, two different approaches changing their basic parameters and using nonlinear scaling factors are proposed. As the accomplished experimental study shows, the good performance of our proposal mainly lies in the consideration of this tuning approach performed at two different levels of significance. The paper also analyzes the interaction of the proposed tuning method with a fuzzy rule set reduction process. A good interpretability-accuracy tradeoff is obtained combining both processes with a sequential scheme: first reducing the rule set and subsequently tuning the model.
Sparse Reconstruction by Separable Approximation Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic ($\ell_2$) error term added to a sparsity-inducing (usually $\ell_1$) regularizer. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard $\ell_2$-$\ell_1$ case, our framework yields efficient solution techniques for other regularizers, such as an $\ell_\infty$ norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard $\ell_2$-$\ell_1$ problem, as well as being efficient on problems with other separable regularization terms.
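For the standard ℓ2-ℓ1 instance, the separable subproblem reduces to soft-thresholding, and a bare-bones iterative shrinkage loop looks like the sketch below. This is a generic illustration, not the SpaRSA code: the Barzilai-Borwein step-size selection and continuation strategy that make the actual method fast are omitted, and `alpha` must dominate the largest eigenvalue of AᵀA for this plain version to converge.

```python
# Bare-bones iterative shrinkage/thresholding for min 0.5||Ax - y||^2 + lam||x||_1.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ist(A, y, lam, alpha, n_iter=200):
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                      # gradient of the l2 term
        x = soft_threshold(x - grad / alpha, lam / alpha)
    return x
```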
User impatience and network performance In this work, we analyze from passive measurements the correlations between the user-induced interruptions of TCP connections and different end-to-end performance metrics. The aim of this study is to assess the possibility for a network operator to take into account the customers' experience for network monitoring. We first observe that the usual connection-level performance metrics of the interrupted connections are not very different, and sometimes better than those of normal connections. However, the request-level performance metrics show stronger correlations between the interruption rates and the network quality-of-service. Furthermore, we show that the user impatience could also be used to characterize the relative sensitivity of data applications to various network performance metrics.
Fuzzy modeling of system behavior for risk and reliability analysis The main objective of the article is to permit reliability analysts/engineers/managers/practitioners to analyze the failure behavior of a system in a more consistent and logical manner. To this effect, the authors propose a methodological and structured framework, which makes use of both qualitative and quantitative techniques for risk and reliability analysis of the system. The framework has been applied to model and analyze a complex industrial system from a paper mill. In the quantitative framework, after developing the Petri net model of the system, the fuzzy synthesis of failure and repair data (using fuzzy arithmetic operations) has been done. Various system parameters of managerial importance such as repair time, failure rate, mean time between failures, availability, and expected number of failures are computed to quantify the behavior in terms of fuzzy, crisp and defuzzified values. Further, to improve upon the reliability and maintainability characteristics of the system, an in-depth qualitative analysis of the system is carried out using failure mode and effect analysis (FMEA) by listing all possible failure modes, their causes and their effects on system performance. To address the limitations of the traditional FMEA method based on the risk priority number (RPN) score, a risk ranking approach based on fuzzy and grey relational analysis is proposed to prioritize failure causes.
Fuzzy optimization of units of products in the mix-product selection problem using a fuzzy linear programming approach In this paper, the modified S-curve membership function methodology is used in a real-life industrial problem of mix-product selection. This problem occurs in production planning management, where a decision maker plays an important role in making decisions in an uncertain environment. As analysts, we try to find a good enough solution for the decision maker to make a final decision. An industrial application of fuzzy linear programming (FLP) through the S-curve membership function has been investigated using a set of real-life data collected from a chocolate manufacturing company. The problem of fuzzy product-mix selection has been defined. The objective of this paper is to find the optimal units of products with a higher level of satisfaction, with vagueness as a key factor. Since several decisions were to be taken, a table of optimal units of products with respect to vagueness and degree of satisfaction has been defined to identify the solution with a higher level of units of products and a higher degree of satisfaction. The fuzzy outcome shows that higher units of products need not lead to a higher degree of satisfaction. The findings of this work indicate that the optimal decision depends on the vagueness factor in the fuzzy mix-product selection problem. Furthermore, the highest level of units of products is obtained when the vagueness is low.
1.056452
0.041806
0.028729
0.000557
0.000016
0.000003
0
0
0
0
0
0
0
0
Fundamentals Of Clinical Methodology: 2. Etiology The concept of etiology is analyzed and the possibilities and limitations of deterministic, probabilistic, and fuzzy etiology are explored. Different kinds of formal structures for the relation of causation are introduced which enable us to explicate the notion of cause on qualitative, comparative, and quantitative levels. The conceptual framework developed is an approach to a theory of causality that may be useful in etiologic research, in building nosological systems, and in differential diagnosis, therapeutic decision-making, and controlled clinical trials. The bearings of the theory are exemplified by examining the current Chlamydia pneumoniae hypothesis on the incidence of myocardial infarction. (C) 1998 Elsevier Science B.V. All rights reserved.
Fuzzy logic, neural networks and soft computing
Some Properties of Fuzzy Sets of Type 2
The concept of a linguistic variable and its application to approximate reasoning—I By a linguistic variable we mean a variable whose values are words or sentences in a natural or artificial language. For example, Age is a linguistic variable if its values are linguistic rather than numerical, i.e.,young, not young, very young, quite young, old, not very old and not very young, etc., rather than 20, 21, 22, 23, .... In more specific terms, a linguistic variable is characterized by a quintuple (L, T(L), U, G, M) in which L is the name of the variable; T(L) is the term-set of L, that is, the collection of its linguistic values; U is a universe of discourse; G is a syntactic rule which generates the terms in T(L); and M is a semantic rule which associates with each linguistic value X its meaning, M(X), where M(X) denotes a fuzzy subset of U. The meaning of a linguistic value X is characterized by a compatibility function, c: U → [0,1], which associates with each u in U its compatibility with X. Thus, the compatibility of age 27 with young might be 0.7, while that of 35 might be 0.2. The function of the semantic rule is to relate the compatibilities of the so-called primary terms in a composite linguistic value-e.g., young and old in not very young and not very old-to the compatibility of the composite value. To this end, the hedges such as very, quite, extremely, etc., as well as the connectives and and or are treated as nonlinear operators which modify the meaning of their operands in a specified fashion. The concept of a linguistic variable provides a means of approximate characterization of phenomena which are too complex or too ill-defined to be amenable to description in conventional quantitative terms. In particular, treating Truth as a linguistic variable with values such as true, very true, completely true, not very true, untrue, etc., leads to what is called fuzzy logic. By providing a basis for approximate reasoning, that is, a mode of reasoning which is neither exact nor very inexact, such logic may offer a more realistic framework for human reasoning than the traditional two-valued logic. It is shown that probabilities, too, can be treated as linguistic variables with values such as likely, very likely, unlikely, etc. Computation with linguistic probabilities requires the solution of nonlinear programs and leads to results which are imprecise to the same degree as the underlying probabilities. The main applications of the linguistic approach lie in the realm of humanistic systems-especially in the fields of artificial intelligence, linguistics, human decision processes, pattern recognition, psychology, law, medical diagnosis, information retrieval, economics and related areas.
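As a minimal illustration of the compatibility functions and hedges described above, the sketch below defines a membership curve for the primary term young and treats the hedge very as a concentration (squaring) operator. The piecewise-linear shape of the curve is an assumption calibrated only so that it reproduces the two compatibility values quoted in the abstract (0.7 at age 27 and 0.2 at age 35).

```python
def young(age: float) -> float:
    """Illustrative compatibility function c: U -> [0, 1] for the primary term 'young'.
    The piecewise-linear shape is an assumption chosen only so that c(27) = 0.7 and
    c(35) = 0.2, the two values quoted in the abstract."""
    return min(1.0, max(0.0, (38.2 - age) / 16.0))

def very(c: float) -> float:
    """The hedge 'very' modeled as a concentration operator (squaring)."""
    return c ** 2

def not_(c: float) -> float:
    """The connective 'not' modeled as complementation."""
    return 1.0 - c

for age in (20, 27, 35):
    c = young(age)
    # young / very young / not very young
    print(age, round(c, 2), round(very(c), 2), round(not_(very(c)), 2))
```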
MPEG VBR video traffic modeling and classification using fuzzy technique We present an approach for MPEG variable bit rate (VBR) video modeling and classification using fuzzy techniques. We demonstrate that a type-2 fuzzy membership function, i.e., a Gaussian MF with uncertain variance, is most appropriate to model the log-value of I/P/B frame sizes in MPEG VBR video. The fuzzy c-means (FCM) method is used to obtain the mean and standard deviation (std) of I/P/B frame sizes when the frame category is unknown. We propose to use type-2 fuzzy logic classifiers (FLCs) to classify video traffic using compressed data. Five fuzzy classifiers and a Bayesian classifier are designed for video traffic classification, and the fuzzy classifiers are compared against the Bayesian classifier. Simulation results show that a type-2 fuzzy classifier in which the input is modeled as a type-2 fuzzy set and antecedent membership functions are modeled as type-2 fuzzy sets performs the best of the five classifiers when the testing video product is not included in the training products and a steepest descent algorithm is used to tune its parameters.
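A small sketch of the membership model mentioned above: an interval type-2 Gaussian membership function with a fixed mean and an uncertain standard deviation, represented by its lower and upper membership functions. The parameter values are placeholders, not the statistics fitted to I/P/B frame sizes in the paper.

```python
import numpy as np

def it2_gaussian_mf(x, mean=10.0, sigma_lo=0.8, sigma_hi=1.4):
    """Interval type-2 Gaussian membership function with fixed mean and uncertain standard
    deviation sigma in [sigma_lo, sigma_hi]. The footprint of uncertainty is bounded below
    by the narrow-sigma Gaussian and above by the wide-sigma Gaussian. Parameter values
    are placeholders, not quantities taken from the paper."""
    lower = np.exp(-0.5 * ((x - mean) / sigma_lo) ** 2)   # lower membership function
    upper = np.exp(-0.5 * ((x - mean) / sigma_hi) ** 2)   # upper membership function
    return lower, upper

x = np.linspace(6.0, 14.0, 5)
lo, hi = it2_gaussian_mf(x)
print(np.round(lo, 3), np.round(hi, 3))
```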
Applications of type-2 fuzzy logic systems to forecasting of time-series In this paper, we begin with a type-1 fuzzy logic system (FLS), trained with noisy data. We then demonstrate how information about the noise in the training data can be incorporated into a type-2 FLS, which can be used to obtain bounds within which the true (noise-free) output is likely to lie. We do this with the example of a one-step predictor for the Mackey-Glass chaotic time-series [M.C. Mackey, L. Glass, Oscillation and chaos in physiological control systems, Science 197 (1977) 287-280]. We also demonstrate how a type-2 FLS can be used to obtain better predictions than those obtained with a type-1 FLS. (C) 1999 Elsevier Science Inc. All rights reserved.
The concept of a linguistic variable and its application to approximate reasoning-III By a linguistic variable we mean a variable whose values are words or sentences in a natural or artificial language. For example, Age is a linguistic variable if its values are linguistic rather than numerical, i.e., young, not young, very young, quite young, old, not very old and not very young, etc., rather than 20, 21, 22, 23, .... In more specific terms, a linguistic variable is characterized by a quintuple (L, T(L), U, G, M) in which L is the name of the variable; T(L) is the term-set of L, that is, the collection of its linguistic values; U is a universe of discourse; G is a syntactic rule which generates the terms in T(L); and M is a semantic rule which associates with each linguistic value X its meaning, M(X), where M(X) denotes a fuzzy subset of U. The meaning of a linguistic value X is characterized by a compatibility function, c: U → [0,1], which associates with each u in U its compatibility with X. Thus, the compatibility of age 27 with young might be 0.7, while that of 35 might be 0.2. The function of the semantic rule is to relate the compatibilities of the so-called primary terms in a composite linguistic value-e.g., young and old in not very young and not very old-to the compatibility of the composite value. To this end, the hedges such as very, quite, extremely, etc., as well as the connectives and and or are treated as nonlinear operators which modify the meaning of their operands in a specified fashion.
Relationships between entropy and similarity measure of interval-valued intuitionistic fuzzy sets The concept of entropy of interval-valued intuitionistic fuzzy set (IvIFS) is first introduced. The close relationships between entropy and the similarity measure of interval-valued intuitionistic fuzzy sets are discussed in detail. We also obtain some important theorems by which entropy and similarity measure of IvIFSs can be transformed into each other based on their axiomatic definitions. Simultaneously, some formulae to calculate entropy and similarity measure of IvIFSs are put forward. © 2010 Wiley Periodicals, Inc.
On the derivation of memberships for fuzzy sets in expert systems The membership function of a fuzzy set is the cornerstone upon which fuzzy set theory has evolved. The question of where these membership functions come from or how they are derived must be answered. Expert systems commonly deal with fuzzy sets and must use valid membership functions. This paper puts forth a method for constructing a membership function for the fuzzy sets that expert systems deal with. The function may be found by querying the appropriate group and using fuzzy statistics. The concept of a group is defined in this context, as well as a measure of goodness for a membership function. The commonality and differences between membership function for a fuzzy set and probabilistic functions are shown. The systematic methodology presented will facilitate effective use of expert systems.
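In the fuzzy-statistics spirit of the abstract above, a membership grade can be read off as the fraction of polled group members who judge an element to belong to the concept. The sketch below shows only this simple reading; the poll data are invented for illustration, and the paper's goodness measure is not reproduced.

```python
def membership_from_poll(votes):
    """Membership grade of each element = fraction of 'yes' answers from the polled group
    (1 = the expert says the element fits the concept, 0 = it does not)."""
    return {element: sum(answers) / len(answers) for element, answers in votes.items()}

# hypothetical poll for the fuzzy set "tall" over heights in cm
votes = {170: [0, 0, 1, 0], 180: [1, 0, 1, 1], 190: [1, 1, 1, 1]}
print(membership_from_poll(votes))   # {170: 0.25, 180: 0.75, 190: 1.0}
```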
An Approach To Interval-Valued R-Implications And Automorphisms The aim of this work is to introduce an approach for interval-valued R-implications, which satisfy some analogous properties of R-implications. We show that the best interval representation of an R-implication that is obtained from a left continuous t-norm coincides with the interval-valued R-implication obtained from the best interval representation of such t-norm, whenever this is an inclusion monotonic interval function. This provides, under this condition, a nice characterization for the best interval representation of an R-implication, which is also an interval-valued R-implication. We also introduce interval-valued automorphisms as the best interval representations of automorphisms. It is shown that interval automorphisms act on interval R-implications, generating other interval R-implications.
Numerical and symbolic approaches to uncertainty management in AI Dealing with uncertainty is part of most intelligent behaviour and therefore techniques for managing uncertainty are a critical step in producing intelligent behaviour in machines. This paper discusses the concept of uncertainty and approaches that have been devised for its management in AI and expert systems. These are classified as quantitative (numeric) (Bayesian methods, Mycin's Certainty Factor model, the Dempster-Shafer theory of evidence and Fuzzy Set theory) or symbolic techniques (Nonmonotonic/Default Logics, Cohen's theory of Endorsements, and Fox's semantic approach). Each is discussed, illustrated, and assessed in relation to various criteria which illustrate the relative advantages and disadvantages of each technique. The discussion summarizes some of the criteria relevant to selecting the most appropriate uncertainty management technique for a particular application, emphasizes the differing functionality of the approaches, and outlines directions for future research. This includes combining qualitative and quantitative representations of information within the same application to facilitate different kinds of uncertainty management and functionality.
A conceptual framework for fuzzy query processing—A step toward very intelligent database systems This paper is concerned with techniques for fuzzy query processing in a database system. By a fuzzy query we mean a query which uses imprecise or fuzzy predicates (e.g. AGE = “VERY YOUNG”, SALARY = “MORE OR LESS HIGH”, YEAR-OF-EMPLOYMENT = “RECENT”, SALARY ⪢ 20,000, etc.). As a basis for fuzzy query processing, a fuzzy retrieval system based on the theory of fuzzy sets and linguistic variables is introduced. In our system model, the first step in processing fuzzy queries consists of assigning meaning to fuzzy terms (linguistic values), of a term-set, used for the formulation of a query. The meaning of a fuzzy term is defined as a fuzzy set in a universe of discourse which contains the numerical values of a domain of a relation in the system database.
Compressed sensing for efficient random routing in multi-hop wireless sensor networks Compressed sensing (CS) is a novel theory based on the fact that certain signals can be recovered from a relatively small number of non-adaptive linear projections, when the original signals and the compression matrix own certain properties. In virtue of these advantages, compressed sensing, as a promising technique to deal with large amount of data, is attracting ever-increasing interests in the areas of wireless sensor networks where most of the sensing data are the same besides a few deviant ones. However, the applications of traditional CS in such settings are limited by the huge transport cost caused by dense measurement. To solve this problem, we propose several ameliorated random routing methods executed with sparse measurement based CS for efficient data gathering corresponding to different networking topologies in typical wireless sensor networking environment, and analyze the relevant performances comparing with those of the existing data gathering schemes, obtaining the conclusion that the proposed schemes are effective in signal reconstruction and efficient in reducing energy consumption cost by routing. Our proposed schemes are also available in heterogeneous networks, for the data to be dealt with in CS are not necessarily homogeneous.
A Machine Learning Approach to Personal Pronoun Resolution in Turkish.
1.20677
0.009413
0.001966
0.001697
0.000855
0.000287
0.0001
0.000036
0.000016
0.000007
0
0
0
0
Compressed sensing with probabilistic measurements: a group testing solution Detection of defective members of large populations has been widely studied in the statistics community under the name "group testing", a problem which dates back to World War II when it was suggested for syphilis screening. There, the main interest is to identify a small number of infected people among a large population using collective samples. In viral epidemics, one way to acquire collective samples is by sending agents inside the population. While in classical group testing, it is assumed that the sampling procedure is fully known to the reconstruction algorithm, in this work we assume that the decoder possesses only partial knowledge about the sampling process. This assumption is justified by observing the fact that in a viral sickness, there is a chance that an agent remains healthy despite having contact with an infected person. Therefore, the reconstruction method has to cope with two different types of uncertainty; namely, identification of the infected population and the partially unknown sampling procedure. In this work, by using a natural probabilistic model for "viral infections", we design non-adaptive sampling procedures that allow successful identification of the infected population with overwhelming probability 1 - o(1). We propose both probabilistic and explicit design procedures that require a "small" number of agents to single out the infected individuals. More precisely, for a contamination probability p, the number of agents required by the probabilistic and explicit designs for identification of up to k infected members is bounded by m = O(k^2 (log n)/p^2) and m = O(k^2 (log^2 n)/p^2), respectively. In both cases, a simple decoder is able to successfully identify the infected population in time O(mn).
LDPC Codes for Compressed Sensing We present a mathematical connection between channel coding and compressed sensing. In particular, we link, on the one hand, channel coding linear programming decoding (CC-LPD), which is a well-known relaxation of maximum-likelihood channel decoding for binary linear codes, and, on the other hand, compressed sensing linear programming decoding (CS-LPD), also known as basis pursuit, which is a widely used linear programming relaxation for the problem of finding the sparsest solution of an underdetermined system of linear equations. More specifically, we establish a tight connection between CS-LPD based on a zero-one measurement matrix over the reals and CC-LPD of the binary linear channel code that is obtained by viewing this measurement matrix as a binary parity-check matrix. This connection allows the translation of performance guarantees from one setup to the other. The main message of this paper is that parity-check matrices of “good” channel codes can be used as provably “good” measurement matrices under basis pursuit. In particular, we provide the first deterministic construction of compressed sensing measurement matrices with an order-optimal number of rows using high-girth low-density parity-check codes constructed by Gallager.
Compressed Genotyping. Over the past three decades we have steadily increased our knowledge on the genetic basis of many severe disorders. Nevertheless, there are still great challenges in applying this knowledge routinely in the clinic, mainly due to the relatively tedious and expensive process of genotyping. Since the genetic variations that underlie the disorders are relatively rare in the population, they can be thought of as a sparse signal. Using methods and ideas from compressed sensing and group testing, we have developed a cost-effective genotyping protocol to detect carriers for severe genetic disorders. In particular, we have adapted our scheme to a recently developed class of high throughput DNA sequencing technologies. The mathematical framework presented here has some important distinctions from the 'traditional' compressed sensing and group testing frameworks in order to address biological and technical constraints of our setting.
Sparse Event Detection In Wireless Sensor Networks Using Compressive Sensing Compressive sensing is a revolutionary idea proposed recently to achieve much lower sampling rate for sparse signals. For large wireless sensor networks, the events are relatively sparse compared with the number of sources. Because of deployment cost, the number of sensors is limited, and due to energy constraint, not all the sensors are turned on all the time. In this paper, the first contribution is to formulate the problem for sparse event detection in wireless sensor networks as a compressive sensing problem. The number of (wake-up) sensors can be greatly reduced to the similar level of the number of sparse events, which is much smaller than the total number of sources. Second, we suppose the event has the binary nature, and employ the Bayesian detection using this prior information. Finally, we analyze the performance of the compressive sensing algorithms under the Gaussian noise. From the simulation results, we show that the sampling rate can reduce to 25% without sacrificing performance. With further decreasing the sampling rate, the performance is gradually reduced until 10% of sampling rate. Our proposed detection algorithm has much better performance than the L-1-magic algorithm proposed in the literature.
An Interior-Point Method For Large-Scale L(1)-Regularized Least Squares Recently, a lot of attention has been paid to l(1) regularization based methods for sparse signal reconstruction (e.g., basis pursuit denoising and compressed sensing) and feature selection (e.g., the Lasso algorithm) in signal processing, statistics, and related fields. These problems can be cast as l(1)-regularized least-squares programs (LSPs), which can be reformulated as convex quadratic programs, and then solved by several standard methods such as interior-point methods, at least for small and medium size problems. In this paper, we describe a specialized interior-point method for solving large-scale, l(1)-regularized LSPs that uses the preconditioned conjugate gradients algorithm to compute the search direction. The interior-point method can solve large sparse problems, with a million variables and observations, in a few tens of minutes on a PC. It can efficiently solve large dense problems, that arise in sparse signal recovery with orthogonal transforms, by exploiting fast algorithms for these transforms. The method is illustrated on a magnetic resonance imaging data set.
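For reference, the objective treated above is the l1-regularized least-squares program minimize ||Ax - b||_2^2 + lam*||x||_1. The snippet below solves the same objective with a plain proximal-gradient (ISTA) iteration, which is a much simpler baseline than the specialized interior-point method of the paper; it is shown only to make the problem concrete, and the test data are synthetic.

```python
import numpy as np

def ista(A, b, lam, iters=2000):
    """Proximal-gradient (ISTA) baseline for
        minimize ||A x - b||_2^2 + lam * ||x||_1.
    This is NOT the specialized interior-point method of the paper; it is only a compact
    reference solver for the same l1-regularized least-squares objective."""
    L = 2.0 * np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth term's gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = 2.0 * A.T @ (A @ x - b)                # gradient of ||A x - b||^2
        z = x - g / L                              # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-thresholding (prox of the l1 term)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = 3.0
x_hat = ista(A, A @ x_true, lam=0.1)
print(np.round(x_hat[:8], 2))                      # entries 0-4 should be clearly nonzero, the rest near zero
```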
An algorithm for pronominal anaphora resolution This paper presents an algorithm for identifying the noun phrase antecedents of third person pronouns and lexical anaphors (reflexives and reciprocals). The algorithm applies to the syntactic representations generated by McCord's Slot Grammar parser and relies on salience measures derived from syntactic structure and a simple dynamic model of attentional state. Like the parser, the algorithm is implemented in Prolog. The authors have tested it extensively on computer manual texts and conducted a blind test on manual text containing 360 pronoun occurrences. The algorithm successfully identifies the antecedent of the pronoun for 86% of these pronoun occurrences. The relative contributions of the algorithm's components to its overall success rate in this blind test are examined. Experiments were conducted with an enhancement of the algorithm that contributes statistically modelled information concerning semantic and real-world relations to the algorithm's decision procedure. Interestingly, this enhancement only marginally improves the algorithm's performance (by 2%). The algorithm is compared with other approaches to anaphora resolution that have been proposed in the literature. In particular, the search procedure of Hobbs' algorithm was implemented in the Slot Grammar framework and applied to the sentences in the blind test set. The authors' algorithm achieves a higher rate of success (by 4%) than Hobbs' algorithm. The relation of the algorithm to the centering approach is discussed, as well as to models of anaphora resolution that invoke a variety of informational factors in ranking antecedent candidates.
The concept of a linguistic variable and its application to approximate reasoning-III By a linguistic variable we mean a variable whose values are words or sentences in a natural or artificial language. For example, Age is a linguistic variable if its values are linguistic rather than numerical, i.e., young, not young, very young, quite young, old, not very old and not very young, etc., rather than 20, 21, 22, 23, .... In more specific terms, a linguistic variable is characterized by a quintuple (L, T(L), U, G, M) in which L is the name of the variable; T(L) is the term-set of L, that is, the collection of its linguistic values; U is a universe of discourse; G is a syntactic rule which generates the terms in T(L); and M is a semantic rule which associates with each linguistic value X its meaning, M(X), where M(X) denotes a fuzzy subset of U. The meaning of a linguistic value X is characterized by a compatibility function, c: U → [0,1], which associates with each u in U its compatibility with X. Thus, the compatibility of age 27 with young might be 0.7, while that of 35 might be 0.2. The function of the semantic rule is to relate the compatibilities of the so-called primary terms in a composite linguistic value-e.g., young and old in not very young and not very old-to the compatibility of the composite value. To this end, the hedges such as very, quite, extremely, etc., as well as the connectives and and or are treated as nonlinear operators which modify the meaning of their operands in a specified fashion.
MapReduce: simplified data processing on large clusters MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.
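A toy, single-process illustration of the programming model described above: the user supplies a map function and a reduce function, and a runtime groups intermediate pairs by key between the two phases. The word-count example and the tiny in-memory "runtime" are illustrative only; the real system shards map tasks, shuffles data across machines, and handles failures.

```python
from collections import defaultdict

def map_fn(doc_id, text):
    # map: emit (word, 1) for every word in the document
    for word in text.split():
        yield word, 1

def reduce_fn(word, counts):
    # reduce: sum the partial counts for one word
    yield word, sum(counts)

def run_mapreduce(inputs, map_fn, reduce_fn):
    # toy single-process runtime: group intermediate pairs by key (the "shuffle"),
    # then apply the reduce function to each key's list of values
    intermediate = defaultdict(list)
    for key, value in inputs:
        for k, v in map_fn(key, value):
            intermediate[k].append(v)
    results = []
    for k in sorted(intermediate):
        results.extend(reduce_fn(k, intermediate[k]))
    return results

print(run_mapreduce([("d1", "the cat sat"), ("d2", "the cat ran")], map_fn, reduce_fn))
# [('cat', 2), ('ran', 1), ('sat', 1), ('the', 2)]
```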
Numerical Integration using Sparse Grids We present and review algorithms for the numerical integration of multivariate functions defined over d-dimensional cubes using several variants of the sparse grid method first introduced by Smolyak [51]. In this approach, multivariate quadrature formulas are constructed using combinations of tensor products of suited one-dimensional formulas. The computing cost is almost independent of the dimension of the problem if the function under consideration has bounded mixed derivatives. We suggest...
Coding Algorithms for 3DTV—A Survey Research efforts on 3DTV technology have been strengthened worldwide recently, covering the whole media processing chain from capture to display. Different 3DTV systems rely on different 3D scene representations that integrate various types of data. Efficient coding of these data is crucial for the success of 3DTV. Compression of pixel-type data including stereo video, multiview video, and associated depth or disparity maps extends available principles of classical video coding. Powerful algorithms and open international standards for multiview video coding and coding of video plus depth data are available and under development, which will provide the basis for introduction of various 3DTV systems and services in the near future. Compression of 3D mesh models has also reached a high level of maturity. For static geometry, a variety of powerful algorithms are available to efficiently compress vertices and connectivity. Compression of dynamic 3D geometry is currently a more active field of research. Temporal prediction is an important mechanism to remove redundancy from animated 3D mesh sequences. Error resilience is important for transmission of data over error prone channels, and multiple description coding (MDC) is a suitable way to protect data. MDC of still images and 2D video has already been widely studied, whereas multiview video and 3D meshes have been addressed only recently. Intellectual property protection of 3D data by watermarking is a pioneering research area as well. The 3D watermarking methods in the literature are classified into three groups, considering the dimensions of the main components of scene representations and the resulting components after applying the algorithm. In general, 3DTV coding technology is maturing. Systems and services may enter the market in the near future. However, the research area is relatively young compared to coding of other types of media. Therefore, there is still a lot of room for improvement and new development of algorithms.
Robust Regression and Lasso Lasso, or l1 regularized least squares, has been explored extensively for its remarkable sparsity properties. In this paper it is shown that the solution to Lasso, in addition to its sparsity, has robustness properties: it is the solution to a robust optimization problem. This has two important consequences. First, robustness provides a connection of the regularizer to a physical property, namely, protection from noise. This allows a principled selection of the regularizer, and in particular, generalizations of Lasso that also yield convex optimization problems are obtained by considering different uncertainty sets. Second, robustness can itself be used as an avenue for exploring different properties of the solution. In particular, it is shown that robustness of the solution explains why the solution is sparse. The analysis as well as the specific results obtained differ from standard sparsity results, providing different geometric intuition. Furthermore, it is shown that the robust optimization formulation is related to kernel density estimation, and based on this approach, a proof that Lasso is consistent is given, using robustness directly. Finally, a theorem is proved which states that sparsity and algorithmic stability contradict each other, and hence Lasso is not stable.
Opposites and Measures of Extremism in Concepts and Constructs We discuss the distinction between different types of opposites, i.e. negation and antonym, in terms of their representation by fuzzy subsets. The idea of a construct in terms of Kelly's theory of personal construct is discussed. A measure of the extremism of a group of elements with respect to concept and its negation, and with respect to a concept and its antonym is introduced.
New Type-2 Rule Ranking Indices for Designing Parsimonious Interval Type-2 Fuzzy Logic Systems In this paper, we propose two novel indices for type-2 fuzzy rule ranking to identify the most influential fuzzy rules in designing type-2 fuzzy logic systems, and name them as R-values and c-values of fuzzy rules separately. The R-values of type-2 fuzzy rules are obtained by applying QR decomposition in which there is no need to estimate a rank as required in the SVD-QR with column pivoting algorithm. The c-values of type-2 fuzzy rules are suggested to rank rules based on the effects of rule consequents. Experimental results on a signal recovery problem have shown that by using the proposed indices the most influential type-2 fuzzy rules can be effectively selected to construct parsimonious type-2 fuzzy models while the system performances are kept at a satisfied level.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.1
0.05
0.025
0.02
0.001389
0
0
0
0
0
0
0
0
0
Matrix Completion from a Few Entries Let M be an nα × n matrix of rank r ≪ n, and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm that reconstructs M from |E| = O(r n) observed entries with relative root mean square error
A review on spectrum sensing for cognitive radio: challenges and solutions Cognitive radio is widely expected to be the next Big Bang in wireless communications. Spectrum sensing, that is, detecting the presence of the primary users in a licensed spectrum, is a fundamental problem for cognitive radio. As a result, spectrum sensing has been reborn as a very active research area in recent years despite its long history. In this paper, spectrum sensing techniques from the optimal likelihood ratio test to energy detection, matched filtering detection, cyclostationary detection, eigenvalue-based sensing, joint space-time sensing, and robust sensing methods are reviewed. Cooperative spectrum sensing with multiple receivers is also discussed. Special attention is paid to sensing methods that need little prior information on the source signal and the propagation channel. Practical challenges such as noise power uncertainty are discussed and possible solutions are provided. Theoretical analysis on the test statistic distribution and threshold setting is also investigated.
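Energy detection, the simplest of the sensing techniques surveyed above, can be sketched in a few lines. The threshold below comes from a Gaussian approximation of the noise-only test statistic for real-valued samples and a roughly 5% false-alarm target; the noise power is assumed perfectly known, which sidesteps the noise-uncertainty problem the survey emphasizes. The signal model in the demo is invented for illustration.

```python
import numpy as np

def energy_detector(x, noise_power, z=1.645):
    """Toy energy detector on real-valued samples x: declare the band occupied when the
    average received energy exceeds a threshold. The threshold uses a Gaussian
    approximation of the noise-only statistic; z = 1.645 targets roughly a 5% false-alarm
    rate. The noise power is assumed perfectly known."""
    n = len(x)
    stat = float(np.mean(x ** 2))
    thr = noise_power * (1.0 + z * np.sqrt(2.0 / n))   # mean + z * std of the noise-only statistic
    return stat > thr, stat, thr

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 1000)                     # noise-only observation
signal = 0.7 * rng.standard_normal(1000) + noise       # noise plus a primary-user signal
print(energy_detector(noise, 1.0)[0], energy_detector(signal, 1.0)[0])   # typically: False True
```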
Rank-Constrained Solutions to Linear Matrix Equations Using PowerFactorization. Algorithms to construct/recover low-rank matrices satisfying a set of linear equality constraints have important applications in many signal processing contexts. Recently, theoretical guarantees for minimum-rank matrix recovery have been proven for nuclear norm minimization (NNM), which can be solved using standard convex optimization approaches. While nuclear norm minimization is effective, it ca...
An implementable proximal point algorithmic framework for nuclear norm minimization The nuclear norm minimization problem is to find a matrix with the minimum nuclear norm subject to linear and second order cone constraints. Such a problem often arises from the convex relaxation of a rank minimization problem with noisy data, and arises in many fields of engineering and science. In this paper, we study inexact proximal point algorithms in the primal, dual and primal-dual forms for solving the nuclear norm minimization with linear equality and second order cone constraints. We design efficient implementations of these algorithms and present comprehensive convergence results. In particular, we investigate the performance of our proposed algorithms in which the inner sub-problems are approximately solved by the gradient projection method or the accelerated proximal gradient method. Our numerical results for solving randomly generated matrix completion problems and real matrix completion problems show that our algorithms perform favorably in comparison to several recently proposed state-of-the-art algorithms. Interestingly, our proposed algorithms are connected with other algorithms that have been studied in the literature.
Fixed point and Bregman iterative methods for matrix rank minimization The linearly constrained matrix rank minimization problem is widely applicable in many fields such as control, signal processing and system identification. The tightest convex relaxation of this problem is the linearly constrained nuclear norm minimization. Although the latter can be cast as a semidefinite programming problem, such an approach is computationally expensive to solve when the matrices are large. In this paper, we propose fixed point and Bregman iterative algorithms for solving the nuclear norm minimization problem and prove convergence of the first of these algorithms. By using a homotopy approach together with an approximate singular value decomposition procedure, we get a very fast, robust and powerful algorithm, which we call FPCA (Fixed Point Continuation with Approximate SVD), that can solve very large matrix rank minimization problems (the code can be downloaded from http://www.columbia.edu/~sm2756/FPCA.htm for non-commercial use). Our numerical results on randomly generated and real matrix completion problems demonstrate that this algorithm is much faster and provides much better recoverability than semidefinite programming solvers such as SDPT3. For example, our algorithm can recover 1000 × 1000 matrices of rank 50 with a relative error of 10^-5 in about 3 min by sampling only 20% of the elements. We know of no other method that achieves as good recoverability. Numerical experiments on online recommendation, DNA microarray data set and image inpainting problems demonstrate the effectiveness of our algorithms.
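A minimal sketch of the fixed-point idea for nuclear norm minimization on a matrix completion problem: alternate a gradient step on the data-fit term with singular value shrinkage. This is only the basic iteration; the paper's FPCA additionally uses continuation on the regularization parameter and an approximate SVD, neither of which is shown here, and the step size, regularization weight, and test instance below are illustrative assumptions.

```python
import numpy as np

def svt(Y, tau):
    """Singular value shrinkage: the proximal operator of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def fixed_point_completion(M_obs, mask, mu=0.05, tau=1.0, iters=500):
    """Basic fixed-point (proximal gradient) iteration for
        minimize 0.5 * ||P_Omega(X) - P_Omega(M)||_F^2 + mu * ||X||_*
    where P_Omega keeps only the observed entries (mask == 1). The full FPCA algorithm
    of the paper adds continuation on mu and an approximate SVD, omitted here."""
    X = np.zeros_like(M_obs, dtype=float)
    for _ in range(iters):
        G = mask * (X - M_obs)            # gradient of the data-fit term on observed entries
        X = svt(X - tau * G, tau * mu)    # shrink the singular values
    return X

rng = np.random.default_rng(0)
L_, R_ = rng.standard_normal((30, 2)), rng.standard_normal((2, 30))
M = L_ @ R_                               # rank-2 ground truth
mask = (rng.random(M.shape) < 0.5).astype(float)
X = fixed_point_completion(mask * M, mask)
print(round(np.linalg.norm(X - M) / np.linalg.norm(M), 3))   # relative error; should be well below 1
```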
Nuclear norm minimization for the planted clique and biclique problems We consider the problems of finding a maximum clique in a graph and finding a maximum-edge biclique in a bipartite graph. Both problems are NP-hard. We write both problems as matrix-rank minimization and then relax them using the nuclear norm. This technique, which may be regarded as a generalization of compressive sensing, has recently been shown to be an effective way to solve rank optimization problems. In the special case that the input graph has a planted clique or biclique (i.e., a single large clique or biclique plus diversionary edges), our algorithm successfully provides an exact solution to the original instance. For each problem, we provide two analyses of when our algorithm succeeds. In the first analysis, the diversionary edges are placed by an adversary. In the second, they are placed at random. In the case of random edges for the planted clique problem, we obtain the same bound as Alon, Krivelevich and Sudakov as well as Feige and Krauthgamer, but we use different techniques.
Model Reduction and Simulation of Nonlinear Circuits via Tensor Decomposition Model order reduction of nonlinear circuits (especially highly nonlinear circuits) has always been a theoretically and numerically challenging task. In this paper we utilize tensors (namely, a higher order generalization of matrices) to develop a tensor-based nonlinear model order reduction (TNMOR) algorithm for the efficient simulation of nonlinear circuits. Unlike existing nonlinear model order reduction methods, in TNMOR high-order nonlinearities are captured using tensors, followed by decomposition and reduction to a compact tensor-based reduced-order model. Therefore, TNMOR completely avoids the dense reduced-order system matrices, which in turn allows faster simulation and a smaller memory requirement if relatively low-rank approximations of these tensors exist. Numerical experiments on transient and periodic steady-state analyses confirm the superior accuracy and efficiency of TNMOR, particularly in highly nonlinear scenarios.
Tensor Decompositions and Applications This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or $N$-way array. Decompositions of higher-order tensors (i.e., $N$-way arrays with $N \geq 3$) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.
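As a concrete anchor for the CP model mentioned above, the snippet below rebuilds a third-order tensor from its CP factor matrices as a sum of rank-one outer products. The factor matrices and the assumed rank are arbitrary examples, not data from the survey.

```python
import numpy as np

def cp_reconstruct(A, B, C):
    """Rebuild a third-order tensor from CP factor matrices A (I x R), B (J x R), C (K x R):
    the (i, j, k) entry is sum_r A[i, r] * B[j, r] * C[k, r], i.e. a sum of R rank-one
    outer products."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(0)
R = 3                                          # assumed CP rank of the toy example
A, B, C = (rng.standard_normal((d, R)) for d in (4, 5, 6))
T = cp_reconstruct(A, B, C)
print(T.shape)                                 # (4, 5, 6)
```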
Remembrance of Transistors Past: Compact Model Parameter Extraction Using Bayesian Inference and Incomplete New Measurements In this paper, we propose a novel MOSFET parameter extraction method to enable early technology evaluation. The distinguishing feature of the proposed method is that it enables the extraction of an entire set of MOSFET model parameters using limited and incomplete IV measurements from on-chip monitor circuits. An important step in this method is the use of maximum-a-posteriori estimation where past measurements of transistors from various technologies are used to learn a prior distribution and its uncertainty matrix for the parameters of the target technology. The framework then utilizes Bayesian inference to facilitate extraction using a very small set of additional measurements. The proposed method is validated using various past technologies and post-silicon measurements for a commercial 28-nm process. The proposed extraction could also be used to characterize the statistical variations of MOSFETs with the significant benefit that some constraints required by the backward propagation of variance (BPV) method are relaxed.
A lower estimate for entropy numbers The behaviour of the entropy numbers e_k(id: l_p^n → l_q^n), 0 < p < q ⩽ ∞, is well known (up to multiplicative constants independent of n and k), except in the quasi-Banach case 0 < p < 1 for “medium size” k, i.e., when log n ⩽ k ⩽ n, where only an upper estimate is available so far. We close this gap by proving the lower estimate e_k(id: l_p^n → l_q^n) ⩾ c (log(n/k+1)/k)^{1/p−1/q} for all 0 < p < q ⩽ ∞ and log n ⩽ k ⩽ n, with some constant c > 0 depending only on p.
An adaptive high-dimensional stochastic model representation technique for the solution of stochastic partial differential equations A computational methodology is developed to address the solution of high-dimensional stochastic problems. It utilizes high-dimensional model representation (HDMR) technique in the stochastic space to represent the model output as a finite hierarchical correlated function expansion in terms of the stochastic inputs starting from lower-order to higher-order component functions. HDMR is efficient at capturing the high-dimensional input-output relationship such that the behavior for many physical systems can be modeled to good accuracy only by the first few lower-order terms. An adaptive version of HDMR is also developed to automatically detect the important dimensions and construct higher-order terms using only the important dimensions. The newly developed adaptive sparse grid collocation (ASGC) method is incorporated into HDMR to solve the resulting sub-problems. By integrating HDMR and ASGC, it is computationally possible to construct a low-dimensional stochastic reduced-order model of the high-dimensional stochastic problem and easily perform various statistic analysis on the output. Several numerical examples involving elementary mathematical functions and fluid mechanics problems are considered to illustrate the proposed method. The cases examined show that the method provides accurate results for stochastic dimensionality as high as 500 even with large-input variability. The efficiency of the proposed method is examined by comparing with Monte Carlo (MC) simulation.
A new version of 2-tuple fuzzy linguistic representation model for computing with words In this paper, we provide a new (proportional) 2-tuple fuzzy linguistic representation model for computing with words (CW), which is based on the concept of "symbolic proportion." This concept motivates us to represent the linguistic information by means of 2-tuples, which are composed by two proportional linguistic terms. For clarity and generality, we first study proportional 2-tuples under ordinal contexts. Then, under linguistic contexts and based on canonical characteristic values (CCVs) of linguistic labels, we define many aggregation operators to handle proportional 2-tuple linguistic information in a computational stage for CW without any loss of information. Our approach for this proportional 2-tuple fuzzy linguistic representation model deals with linguistic labels, which do not have to be symmetrically distributed around a medium label and without the traditional requirement of having "equal distance" between them. Moreover, this new model not only provides a space to allow a "continuous" interpolation of a sequence of ordered linguistic labels, but also provides an opportunity to describe the initial linguistic information by members of a "continuous" linguistic scale domain which does not necessarily require the ordered linguistic terms of a linguistic variable being equidistant. Meanwhile, under the assumption of equally informative (which is defined by a condition based on the concept of CCV), we show that our model reduces to Herrera and Martínez's (translational) 2-tuple fuzzy linguistic representation model.
Perfect Baer Subplane Partitions and Three-Dimensional Flag-Transitive Planes The classification of perfect Baer subplane partitions of PG(2, q^2) is equivalent to the classification of 3-dimensional flag-transitive planes whose translation complements contain a linear cyclic group acting regularly on the line at infinity. Since all known flag-transitive planes admit a translation complement containing a linear cyclic subgroup which either acts regularly on the points of the line at infinity or has two orbits of equal size on these points, such a classification would be a significant step towards the classification of all 3-dimensional flag-transitive planes. Using linearized polynomials, a parametric enumeration of all perfect Baer subplane partitions for odd q is described. Moreover, a cyclotomic conjecture is given, verified by computer for odd prime powers q < 200, whose truth would imply that all perfect Baer subplane partitions arise from a construction of Kantor and hence the corresponding flag-transitive planes are all known.
A Machine Learning Approach to Personal Pronoun Resolution in Turkish.
1.021977
0.024242
0.018182
0.009567
0.00347
0.0008
0.000295
0.000115
0.000036
0.000008
0
0
0
0
Advanced partitioning techniques for massively distributed computation An increasing number of companies rely on distributed data storage and processing over large clusters of commodity machines for critical business decisions. Although plain MapReduce systems provide several benefits, they carry certain limitations that impact developer productivity and optimization opportunities. Higher level programming languages plus conceptual data models have recently emerged to address such limitations. These languages offer a single machine programming abstraction and are able to perform sophisticated query optimization and apply efficient execution strategies. In massively distributed computation, data shuffling is typically the most expensive operation and can lead to serious performance bottlenecks if not done properly. An important optimization opportunity in this environment is that of judicious placement of repartitioning operators and choice of alternative implementations. In this paper we discuss advanced partitioning strategies, their implementation, and how they are integrated in the Microsoft Scope system. We show experimentally that our approach significantly improves performance for a large class of real-world jobs.
IBM infosphere streams for scalable, real-time, intelligent transportation services With the widespread adoption of location tracking technologies like GPS, the domain of intelligent transportation services has seen growing interest in the last few years. Services in this domain make use of real-time location-based data from a variety of sources, combine this data with static location-based data such as maps and points of interest databases, and provide useful information to end-users. Some of the major challenges in this domain include i) scalability, in terms of processing large volumes of real-time and static data; ii) extensibility, in terms of being able to add new kinds of analyses on the data rapidly, and iii) user interaction, in terms of being able to support different kinds of one-time and continuous queries from the end-user. In this paper, we demonstrate the use of IBM InfoSphere Streams, a scalable stream processing platform, for tackling these challenges. We describe a prototype system that generates dynamic, multi-faceted views of transportation information for the city of Stockholm, using real vehicle GPS and road-network data. The system also continuously derives current traffic statistics, and provides useful value-added information such as shortest-time routes from real-time observed and inferred traffic conditions. Our performance experiments illustrate the scalability of the system. For instance, our system can process over 120000 incoming GPS points per second, combine it with a map containing over 600,000 links, continuously generate different kinds of traffic statistics and answer user queries.
Adaptive Stream Processing using Dynamic Batch Sizing The need for real-time processing of "big data" has led to the development of frameworks for distributed stream processing in clusters. It is important for such frameworks to be robust against variable operating conditions such as server failures, changes in data ingestion rates, and workload characteristics. To provide fault tolerance and efficient stream processing at scale, recent stream processing frameworks have proposed to treat streaming workloads as a series of batch jobs on small batches of streaming data. However, the robustness of such frameworks against variable operating conditions has not been explored. In this paper, we explore the effects of the batch size on the performance of streaming workloads. The throughput and end-to-end latency of the system can have complicated relationships with batch sizes, data ingestion rates, variations in available resources, workload characteristics, etc. We propose a simple yet robust control algorithm that automatically adapts the batch size as the situation necessitates. We show through extensive experiments that it can ensure system stability and low latency for a wide range of workloads, despite large variations in data rates and operating conditions.
Storm@twitter This paper describes the use of Storm at Twitter. Storm is a real-time fault-tolerant and distributed stream data processing system. Storm is currently being used to run various critical computations in Twitter at scale, and in real-time. This paper describes the architecture of Storm and its methods for distributed scale-out and fault-tolerance. This paper also describes how queries (aka. topologies) are executed in Storm, and presents some operational stories based on running Storm at Twitter. We also present results from an empirical evaluation demonstrating the resilience of Storm in dealing with machine failures. Storm is under active development at Twitter and we also present some potential directions for future work.
Fault injection-based assessment of partial fault tolerance in stream processing applications This paper describes an experimental methodology used to evaluate the effectiveness of partial fault tolerance (PFT) techniques in data stream processing applications. Without a clear understanding of the impact of faults on the quality of the application output, applying PFT techniques in practice is not viable. We assess the impact of PFT by injecting faults into a synthetic financial engineering application running on top of IBM's stream processing middleware, System S. The application output quality degradation is evaluated via an application-specific output score function. In addition, we propose four metrics that are aimed at assessing the impact of faults in different stream operators of the application flow graph with respect to predictability and availability. These metrics help the developer to decide where in the application he should place redundant resources. We show that PFT is indeed viable, which opens the way for considerably reducing the resource consumption when compared to fully consistent replicas.
A latency and fault-tolerance optimizer for online parallel query plans We address the problem of making online, parallel query plans fault-tolerant: i.e., provide intra-query fault-tolerance without blocking. We develop an approach that not only achieves this goal but does so through the use of different fault-tolerance techniques at different operators within a query plan. Enabling each operator to use a different fault-tolerance strategy leads to a space of fault-tolerance plans amenable to cost-based optimization. We develop FTOpt, a cost-based fault-tolerance optimizer that automatically selects the best strategy for each operator in a query plan in a manner that minimizes the expected processing time with failures for the entire query. We implement our approach in a prototype parallel query-processing engine. Our experiments demonstrate that (1) there is no single best fault-tolerance strategy for all query plans, (2) often hybrid strategies that mix-and-match recovery techniques outperform any uniform strategy, and (3) our optimizer correctly identifies winning fault-tolerance configurations.
Statistical timing based on incomplete probabilistic descriptions of parameter uncertainty Existing approaches to timing analysis under uncertainty are based on restrictive assumptions. Statistical STA techniques assume that the full probabilistic distribution of parameter uncertainty is available; in reality, the complete probabilistic description often cannot be obtained. In this paper, a new paradigm for parameter uncertainty description is proposed as a way to consistently and rigorously handle partially available descriptions of parameter uncertainty. The paradigm is based on a theory of interval probabilistic models that permit handling uncertainty that is described in a distribution-free mode - just via the range, the mean, and the variance. This permits effectively handling multiple real-life challenges, including imprecise and limited information about the distributions of process parameters, parameters coming from different populations, and the sources of uncertainty that are too difficult to handle via full probabilistic measures (e.g. on-chip supply voltage variation). Specifically, analytical techniques for bounding the distributions of probabilistic interval variables are proposed. Besides, a provably correct strategy for fast Monte Carlo simulation based on probabilistic interval variables is introduced. A path-based timing algorithm implementing the novel modeling paradigm, as well as handling the traditional variability descriptions, has been developed. The results indicate the proposed algorithm can improve the upper bound of the 90(th)-percentile circuit delay, on average, by 5.3% across the ISCAS'85 benchmark circuits, compared to the worst-case timing estimates that use only the interval information of the partially specified parameters.
A compressed sensing approach for biological microscopic image processing In fluorescence microscopy the noise level and the photobleaching are cross-dependent problems since reducing exposure time to reduce photobleaching degrades image quality while increasing noise level. These two problems cannot be solved independently as a post-processing task, hence the most important contribution in this work is to a-priori denoise and reduce photobleaching simultaneously by using the Compressed Sensing framework (CS). In this paper, we propose a CS-based denoising framework, based on statistical properties of the CS optimality, noise reconstruction characteristics and signal modeling applied to microscopy images with low signal-to-noise ratio (SNR). Our approach has several advantages over traditional denoising methods, since it can under-sample, recover and denoise images simultaneously. We demonstrate with simulated and practical experiments on fluorescence image data that thanks to CS denoising we can obtain images with similar or increased SNR while still being able to reduce exposure times.
Managing incomplete preference relations in decision making: A review and future trends.
A framework for understanding human factors in web-based electronic commerce The World Wide Web and email are used increasingly for purchasing and selling products. The use of the internet for these functions represents a significant departure from the standard range of information retrieval and communication tasks for which it has most often been used. Electronic commerce should not be assumed to be information retrieval, it is a separate task-domain, and the software systems that support it should be designed from the perspective of its goals and constraints. At present there are many different approaches to the problem of how to support seller and buyer goals using the internet. They range from standard, hierarchically arranged, hyperlink pages to “electronic sales assistants”, and from text-based pages to 3D virtual environments. In this paper, we briefly introduce the electronic commerce task from the perspective of the buyer, and then review and analyse the technologies. A framework is then proposed to describe the design dimensions of electronic commerce. We illustrate how this framework may be used to generate additional, hypothetical technologies that may be worth further exploration.
A fast approach for overcomplete sparse decomposition based on smoothed l0 norm In this paper, a fast algorithm for overcomplete sparse decomposition, called SL0, is proposed. The algorithm is essentially a method for obtaining sparse solutions of underdetermined systems of linear equations, and its applications include under-determined sparse component analysis (SCA), atomic decomposition on overcomplete dictionaries, compressed sensing, and decoding real field codes. Contrary to previous methods, which usually solve this problem by minimizing the l1 norm using linear programming (LP) techniques, our algorithm tries to directly minimize the l0 norm. It is experimentally shown that the proposed algorithm is about two to three orders of magnitude faster than the state-of-the-art interior-point LP solvers, while providing the same (or better) accuracy.
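A compact sketch of the smoothed-l0 idea described above: replace the l0 norm with a Gaussian-shaped smooth surrogate, take a few gradient steps on it, project back onto the constraint set {x : Ax = b}, and gradually shrink the smoothing parameter. The parameter defaults and the synthetic test problem below are illustrative, not the tuned settings of the paper.

```python
import numpy as np

def sl0(A, b, sigma_min=1e-3, sigma_decrease=0.7, mu=2.0, inner_iters=3):
    """Sketch of the smoothed-l0 idea: approximate the l0 norm by a Gaussian-shaped smooth
    surrogate, push entries toward zero along its gradient, and project back onto
    {x : A x = b}, while gradually shrinking the smoothing width sigma."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                        # minimum-l2-norm feasible starting point
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            delta = x * np.exp(-x ** 2 / (2.0 * sigma ** 2))   # (scaled) gradient of the surrogate
            x = x - mu * delta                                 # shrink the small entries
            x = x - A_pinv @ (A @ x - b)                       # project back onto A x = b
        sigma *= sigma_decrease
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
x_hat = sl0(A, A @ x_true)
print(np.count_nonzero(np.abs(x_hat) > 0.1))       # ideally 5, the size of the true support
```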
Accurate and efficient gate-level parametric yield estimation considering correlated variations in leakage power and performance Increasing levels of process variation in current technologies have a major impact on power and performance, and result in parametric yield loss. In this work we develop an efficient gate-level approach to accurately estimate the parametric yield defined by leakage power and delay constraints, by finding the joint probability distribution function (jpdf) for delay and leakage power. We consider inter-die variations as well as intra-die variations with correlated and random components. The correlation between power and performance arise due to their dependence on common process parameters and is shown to have a significant impact on yield in high-frequency bins. We also propose a method to estimate parametric yield given the power/delay jpdf that is much faster than numerical integration with good accuracy. The proposed approach is implemented and compared with Monte Carlo simulations and shows high accuracy, with the yield estimates achieving an average error of 2%.
Interactive group decision-making using a fuzzy linguistic approach for evaluating the flexibility in a supply chain ► This study builds a group decision-making structure model of flexibility in supply chain management development. ► This study presents a framework for evaluating supply chain flexibility. ► This study proposes an algorithm for determining the degree of supply chain flexibility using a new fuzzy linguistic approach. ► This fuzzy linguistic approach has the advantage of preserving information without loss.
Performance and Quality Evaluation of a Personalized Route Planning System Advanced personalization of database applications is a big challenge, in particular for distributed mobile environments. We present several new results from a prototype of a route planning system. We demonstrate how to combine qualitative and quantitative preferences gained from situational aspects and from personal user preferences. For performance studies we analyze the runtime efficiency of the SR-Combine algorithm used to evaluate top-k queries. By determining the cost-ratio of random to sorted accesses SR-Combine can automatically tune its performance within the given system architecture. Top-k queries are generated by mapping linguistic variables to numerical weightings. Moreover, we analyze the quality of the query results by several test series, systematically varying the mappings of the linguistic variables. We report interesting insights into this rather under-researched important topic. More investigations, incorporating also cognitive issues, need to be conducted in the future.
1.205319
0.205319
0.205319
0.105159
0.069581
0.035939
0
0
0
0
0
0
0
0
Modeling Fuzzy DEA with Type-2 Fuzzy Variable Coefficients Data envelopment analysis (DEA) is an effective method for measuring the relative efficiency of a set of homogeneous decision-making units (DMUs). However, the data in traditional DEA model are limited to crisp inputs and outputs, which cannot be precisely obtained in many production processes or social activities. This paper attempts to extend the traditional DEA model and establishes a DEA model with type-2 (T2) fuzzy inputs and outputs. To establish this model, we first propose a reduction method for T2 fuzzy variables based on the expected value of fuzzy variable. After that, we establish a DEA model with the obtained fuzzy variables. In some special cases such as the inputs and outputs are independent T2 triangular fuzzy variables, we provide a method to turn the original DEA model to its equivalent one. At last, we provide a numerical example to illustrate the efficiency of the proposed DEA model.
Type-2 Fuzzy Soft Sets and Their Applications in Decision Making. Molodtsov introduced the theory of soft sets, which can be used as a general mathematical tool for dealing with uncertainty. This paper aims to introduce the concept of the type-2 fuzzy soft set by integrating the type-2 fuzzy set theory and the soft set theory. Some operations on the type-2 fuzzy soft sets are given. Furthermore, we investigate the decision making based on type-2 fuzzy soft sets. By means of level soft sets, we propose an adjustable approach to type-2 fuzzy-soft-set based decision making and give some illustrative examples. Moreover, we also introduce the weighted type-2 fuzzy soft set and examine its application to decision making.
Multi-Criteria And Multi-Stage Facility Location Selection Under Interval Type-2 Fuzzy Environment: A Case Study For A Cement Factory The study proposes a comprehensive and systematic approach for the multi-criteria and multi-stage facility location selection problem. To handle the high degree of uncertainty in the evaluation and selection processes, the problem is solved by using a multi-criteria decision making technique with interval Type-2 fuzzy sets. The study contributes to the facility location selection literature by introducing the application of the fuzzy TOPSIS method with interval Type-2 fuzzy sets. Finally, the suggested approach is applied to a real life region and site selection problem of a cement factory.
An interval type-2 fuzzy extension of the TOPSIS method using alpha cuts The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is currently probably one of the most popular methods for Multiple Criteria Decision Making (MCDM). The method was primarily developed for dealing with real-valued data. Nevertheless, in practice it is often hard to present precise ratings of alternatives with respect to local criteria, and as a result these ratings are presented as fuzzy values. Many recent papers have been devoted to the fuzzy extension of the TOPSIS method, but only a few works provided type-2 fuzzy extensions, whereas such extensions seem to be very useful for the solution of many real-world problems, e.g., the Multiple Criteria Group Decision Making problem. Since the proposed type-2 fuzzy extensions of the TOPSIS method have some limitations and drawbacks, in this paper we propose an interval type-2 fuzzy extension of the TOPSIS method realized with the use of the α-cut representation of the interval type-2 fuzzy values (IT2FVs). This extension is free of the limitations of the known methods. The proposed method is realized for the cases of perfectly normal and normal IT2FVs. Illustrative examples are presented to show the features of the proposed method.
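To make the ranking steps behind TOPSIS concrete, here is a minimal type-1 (crisp) TOPSIS sketch; the interval type-2 extension described above would replace the crisp ratings with IT2FVs and α-cut arithmetic. The decision matrix, weights, and benefit/cost flags below are made-up illustrations, not data from the paper.

```python
import numpy as np

def topsis(ratings, weights, benefit):
    """Crisp TOPSIS: ratings (alternatives x criteria), weights summing to 1,
    benefit[j] True if larger is better for criterion j."""
    R = ratings / np.linalg.norm(ratings, axis=0)      # vector-normalize each criterion
    V = R * weights                                    # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                     # closeness coefficient, higher is better

ratings = np.array([[7., 9., 250.],
                    [8., 6., 180.],
                    [6., 8., 300.]])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, True, False])               # third criterion is a cost
print(topsis(ratings, weights, benefit))
```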
Strategic Decision Selection Using Hesitant Fuzzy Topsis And Interval Type-2 Fuzzy Ahp: A Case Study Strategic decisions such as mergers, acquisitions and joint ventures have a strong effect on firm performance. In order to be successful in highly competitive environments firms have to make right and on time strategic decisions. However, the nature of making the right strategic decision is complex and unstructured since there are many factors affecting such decisions. Moreover these factors are usually hard and vague to evaluate numerically. This study tries to develop a multicriteria decision-making model which considers both the complexity and vagueness of strategic decisions. The weights of the factors are determined by interval type-2 Fuzzy Analytic Hierarchy Process (AHP) and then the best strategy is selected by Hesitant Fuzzy TOPSIS using the determined weights. An application to a multinational consumer electronics company is presented.
Technology evaluation through the use of interval type-2 fuzzy sets and systems Even though fuzzy logic is one of the most common methodologies for matching different kind of data sources, there is no study which uses this methodology for matching publication and patent data within a technology evaluation framework according to the authors' best knowledge. In order to fill this gap and to demonstrate the usefulness of fuzzy logic in technology evaluation, this study proposes a novel technology evaluation framework based on an advanced/improved version of fuzzy logic, namely; interval type-2 fuzzy sets and systems (IT2FSSs). This framework uses patent data obtained from the European Patent Office (EPO) and publication data obtained from Web of Science/Knowledge (WoS/K) to evaluate technology groups with respect to their trendiness. Since it has been decided to target technology groups, patent and publication data sources are matched through the use IT2FSSs. The proposed framework enables us to make a strategic evaluation which directs considerations to use-inspired basic researches, hence achieving science-based technological improvements which are more beneficial for society. A European Classification System (ECLA) class - H01-Basic Electric Elements - is evaluated by means of the proposed framework in order to demonstrate how it works. The influence of the use of IT2FSSs is investigated by comparison with the results of its type-1 counterpart. This method shows that the use of type-2 fuzzy sets, i.e. handling more uncertainty, improves technology evaluation outcomes.
The extended QUALIFLEX method for multiple criteria decision analysis based on interval type-2 fuzzy sets and applications to medical decision making. QUALIFLEX, a generalization of Jacquet-Lagreze's permutation method, is a useful outranking method in decision analysis because of its flexibility with respect to cardinal and ordinal information. This paper develops an extended QUALIFLEX method for handling multiple criteria decision-making problems in the context of interval type-2 fuzzy sets. Interval type-2 fuzzy sets contain membership values that are crisp intervals, which are the most widely used of the higher order fuzzy sets because of their relative simplicity. Using the linguistic rating system converted into interval type-2 trapezoidal fuzzy numbers, the extended QUALIFLEX method investigates all possible permutations of the alternatives with respect to the level of concordance of the complete preference order. Based on a signed distance-based approach, this paper proposes the concordance/discordance index, the weighted concordance/discordance index, and the comprehensive concordance/discordance index as evaluative criteria of the chosen hypothesis for ranking the alternatives. The feasibility and applicability of the proposed methods are illustrated by a medical decision-making problem concerning acute inflammatory demyelinating disease, and a comparative analysis with another outranking approach is conducted to validate the effectiveness of the proposed methodology. (C) 2012 Elsevier B.V. All rights reserved.
Fuzzy logic in control systems: fuzzy logic controller. I.
A machine learning approach to coreference resolution of noun phrases In this paper, we present a learning approach to coreference resolution of noun phrases in unrestricted text. The approach learns from a small, annotated corpus and the task includes resolving not just a certain type of noun phrase (e.g., pronouns) but rather general noun phrases. It also does not restrict the entity types of the noun phrases; that is, coreference is assigned whether they are of "organization," "person," or other types. We evaluate our approach on common data sets (namely, the MUC-6 and MUC-7 coreference corpora) and obtain encouraging results, indicating that on the general noun phrase coreference task, the learning approach holds promise and achieves accuracy comparable to that of nonlearning approaches. Our system is the first learning-based system that offers performance comparable to that of state-of-the-art nonlearning systems on these data sets.
Statistical leakage estimation based on sequential addition of cell leakage currents This paper presents a novel method for full-chip statistical leakage estimation that considers the impact of process variation. The proposed method considers the correlations among leakage currents in a chip and the state dependence of the leakage current of a cell for an accurate analysis. For an efficient addition of the cell leakage currents, we propose the virtual-cell approximation (VCA), which sums cell leakage currents sequentially by approximating their sum as the leakage current of a single virtual cell while preserving the correlations among leakage currents. By the use of the VCA, the proposed method efficiently calculates a full-chip leakage current. Experimental results using ISCAS benchmarks at various process variation levels showed that the proposed method provides an accurate result by demonstrating average leakage mean and standard deviation errors of 3.12% and 2.22%, respectively, when compared with the results of a Monte Carlo (MC) simulation-based leakage estimation. In efficiency, the proposed method also demonstrated to be 5000 times faster than MC simulation-based leakage estimations and 9000 times faster than the Wilkinson's method-based leakage estimation.
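For context on the Wilkinson's-method baseline mentioned above (not the paper's virtual-cell approximation), a common way to sum lognormal cell leakage currents is to match the first two moments of the sum with a single lognormal. The sketch below assumes independent cells and made-up leakage parameters.

```python
import numpy as np

def wilkinson_sum(mu, sigma):
    """Approximate the sum of independent lognormal cell leakages exp(N(mu_i, sigma_i^2))
    by a single lognormal, matching the first two moments (Wilkinson-style)."""
    mean_i = np.exp(mu + 0.5 * sigma**2)
    var_i  = (np.exp(sigma**2) - 1.0) * np.exp(2 * mu + sigma**2)
    m, v = mean_i.sum(), var_i.sum()
    s2 = np.log(1.0 + v / m**2)          # variance of the equivalent normal exponent
    return np.log(m) - 0.5 * s2, np.sqrt(s2)

# 1000 cells with illustrative log-leakage parameters
rng = np.random.default_rng(1)
mu = rng.normal(-2.0, 0.1, size=1000)
sigma = np.full(1000, 0.3)
print(wilkinson_sum(mu, sigma))
```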
A note on compressed sensing and the complexity of matrix multiplication We consider the conjectured O(N^(2+ε)) time complexity of multiplying any two N×N matrices A and B. Our main result is a deterministic Compressed Sensing (CS) algorithm that both rapidly and accurately computes A·B provided that the resulting matrix product is sparse/compressible. As a consequence of our main result we increase the class of matrices A, for any given N×N matrix B, which allows the exact computation of A·B to be carried out using the conjectured O(N^(2+ε)) operations. Additionally, in the process of developing our matrix multiplication procedure, we present a modified version of Indyk's recently proposed extractor-based CS algorithm [P. Indyk, Explicit constructions for compressed sensing of sparse signals, in: SODA, 2008] which is resilient to noise.
An approach based on Takagi-Sugeno Fuzzy Inference System applied to the operation planning of hydrothermal systems The operation planning in hydrothermal systems with great hydraulic participation, as it is the case of Brazilian system, seeks to determine an operation policy to specify how hydroelectric plants should be operated, in order to use the hydroelectric resources economically and reliably. This paper presents an application of Takagi-Sugeno Fuzzy Inference Systems to obtain an operation policy (PBFIS Policy Based on Fuzzy Inference Systems) that follows the principles of the optimized operation of reservoirs for electric power generation. PBFIS is obtained through the application of an optimization algorithm for the operation of hydroelectric plants. From this optimization the relationships between the stored energy of the system and the volume of the reservoir of each plant are extracted. These relationships are represented in the consequent parameters of the fuzzy linguistic rules. Thus, PBFIS is used to estimate the operative volume of each hydroelectric plant, based on the value of the energy stored in the system. In order to verify the effectiveness of PBFIS, a computer simulation model of the operation of hydroelectric plants was used so as to compare it with the operation policy in parallel; with the operation policy based on functional approximations; and also with the result obtained through the application of the optimization of individualized plants' operation. With the proposed methodology, we try to demonstrate the viability of PBFIS' obtainment and application, and with the obtained results, we intend to illustrate the effectiveness and the gains which came from it.
Fuzzy Bayesian system reliability assessment based on prior two-parameter exponential distribution under different loss functions The fuzzy Bayesian system reliability assessment based on prior two-parameter exponential distribution under squared error symmetric loss function and precautionary asymmetric loss function is proposed in this paper. In order to apply the Bayesian approach, the fuzzy parameters are assumed as fuzzy random variables with fuzzy prior distributions. Because the goal of the paper is to obtain fuzzy Bayes point estimators of system reliability assessment, prior distributions of location-scale family has been changed to scale family with change variable. On the other hand, also the computational procedures to evaluate the membership degree of any given Bayes point estimate of system reliability have been provided. In order to achieve this purpose, we transform the original problem into a non-linear programming problem. This non-linear programming problem is then divided into four sub-problems for the purpose of simplifying computation. Finally, the sub-problems can be solved by using any commercial optimizers, e.g. GAMS or LINGO. Copyright © 2010 John Wiley & Sons, Ltd.
Fuzzy OWA model for information security risk management One of the methods for information security risk assessment is the substantiated choice and realization of countermeasures against threats. A situational fuzzy OWA model of a multicriteria decision making problem concerning the choice of countermeasures for reducing information security risks is proposed. The proposed model makes it possible to modify the associated weights of criteria based on the information entropy with respect to the aggregation situation. The advantage of the model is the continuous improvement of the weights of the criteria and the aggregation of experts’ opinions depending on the parameter characterizing the aggregation situation.
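As a reference point for the aggregation step discussed above, a plain ordered weighted averaging (OWA) operator is shown below; the entropy-based adaptation of the associated weights described in the abstract is not reproduced, and the scores and weights are illustrative.

```python
import numpy as np

def owa(scores, weights):
    """OWA: sort the criterion scores in descending order and take the weighted sum
    with position-dependent (not criterion-dependent) weights."""
    return np.sort(scores)[::-1] @ weights

risk_scores = np.array([0.8, 0.4, 0.6])     # expert ratings of one countermeasure (made up)
weights = np.array([0.5, 0.3, 0.2])         # associated weights, summing to 1
print(owa(risk_scores, weights))
```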
1.111111
0.066667
0.066667
0.066667
0.033333
0.016667
0.002778
0
0
0
0
0
0
0
Sparse Recovery From Combined Fusion Frame Measurements Sparse representations have emerged as a powerful tool in signal and information processing, culminated by the success of new acquisition and processing techniques such as Compressed Sensing (CS). Fusion frames are very rich new signal representation methods that use collections of subspaces instead of vectors to represent signals. This work combines these exciting fields to introduce a new sparsity model for fusion frames. Signals that are sparse under the new model can be compressively sampled and uniquely reconstructed in ways similar to sparse signals using standard CS. The combination provides a promising new set of mathematical tools and signal models useful in a variety of applications. With the new model, a sparse signal has energy in very few of the subspaces of the fusion frame, although it does not need to be sparse within each of the subspaces it occupies. This sparsity model is captured using a mixed l1/l2 norm for fusion frames. A signal sparse in a fusion frame can be sampled using very few random projections and exactly reconstructed using a convex optimization that minimizes this mixed l1/l2 norm. The provided sampling conditions generalize coherence and RIP conditions used in standard CS theory. It is demonstrated that they are sufficient to guarantee sparse recovery of any signal sparse in our model. Moreover, an average case analysis is provided using a probability model on the sparse signal that shows that under very mild conditions the probability of recovery failure decays exponentially with increasing dimension of the subspaces.
Sparsity in time-frequency representations We consider signals and operators in finite dimension which have sparse time-frequency representations. As main result we show that an S-sparse Gabor representation in ℂ^n with respect to a random unimodular window can be recovered by Basis Pursuit with high probability provided that S ≤ Cn/log(n). Our results are applicable to the channel estimation problem in wireless communications and they establish the usefulness of a class of measurement matrices for compressive sensing.
Block-Sparse Signals: Uncertainty Relations And Efficient Recovery We consider efficient methods for the recovery of block-sparse signals-i.e., sparse signals that have nonzero entries occurring in clusters-from an underdetermined system of linear equations. An uncertainty relation for block-sparse signals is derived, based on a block-coherence measure, which we introduce. We then show that a block-version of the orthogonal matching pursuit algorithm recovers block k-sparse signals in no more than k steps if the block-coherence is sufficiently small. The same condition on block-coherence is shown to guarantee successful recovery through a mixed l2/l1-optimization approach. This complements previous recovery results for the block-sparse case which relied on small block-restricted isometry constants. The significance of the results presented in this paper lies in the fact that making explicit use of block-sparsity can provably yield better reconstruction properties than treating the signal as being sparse in the conventional sense, thereby ignoring the additional structure in the problem.
Block-sparsity: Coherence and efficient recovery We consider compressed sensing of block-sparse signals, i.e., sparse signals that have nonzero coefficients occurring in clusters. Based on an uncertainty relation for block-sparse signals, we define a block-coherence measure and show that a block-version of the orthogonal matching pursuit algorithm recovers block k-sparse signals in no more than k steps if the block-coherence is sufficiently small. The same condition on block-sparsity is shown to guarantee successful recovery through a mixed ℓ2/ℓ1 optimization approach. The significance of the results lies in the fact that making explicit use of block-sparsity can yield better reconstruction properties than treating the signal as being sparse in the conventional sense, thereby ignoring the additional structure in the problem.
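A minimal sketch of the block orthogonal matching pursuit recipe summarized in the two abstracts above, assuming equal-sized blocks and a generic Gaussian dictionary; parameter choices and the toy signal are illustrative.

```python
import numpy as np

def block_omp(A, y, block_size, k):
    """Greedy block-OMP sketch: pick the block whose columns correlate most with
    the residual, then refit least squares on all blocks selected so far."""
    n_blocks = A.shape[1] // block_size
    blocks = [np.arange(b * block_size, (b + 1) * block_size) for b in range(n_blocks)]
    chosen, residual = [], y.copy()
    for _ in range(k):
        corr = [-1.0 if b in chosen else np.linalg.norm(A[:, blk].T @ residual)
                for b, blk in enumerate(blocks)]
        chosen.append(int(np.argmax(corr)))
        idx = np.concatenate([blocks[b] for b in chosen])
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[10:15], x_true[40:45] = 1.0, -2.0          # two active blocks of size 5
x_hat = block_omp(A, A @ x_true, block_size=5, k=2)
print(np.round(x_hat[[10, 40]], 2))
```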
Average case analysis of multichannel sparse recovery using convex relaxation This paper considers recovery of jointly sparse multichannel signals from incomplete measurements. Several approaches have been developed to recover the unknown sparse vectors from the given observations, including thresholding, simultaneous orthogonal matching pursuit (SOMP), and convex relaxation based on a mixed matrix norm. Typically, worst case analysis is carried out in order to analyze conditions under which the algorithms are able to recover any jointly sparse set of vectors. However, such an approach is not able to provide insights into why joint sparse recovery is superior to applying standard sparse reconstruction methods to each channel individually. Previous work considered an average case analysis of thresholding and SOMP by imposing a probability model on the measured signals. Here, the main focus is on analysis of convex relaxation techniques. In particular, the mixed l2,1 approach to multichannel recovery is investigated. Under a very mild condition on the sparsity and on the dictionary characteristics, measured for example by the coherence, it is shown that the probability of recovery failure decays exponentially in the number of channels. This demonstrates that most of the time, multichannel sparse recovery is indeed superior to single channel methods. The probability bounds are valid and meaningful even for a small number of signals. Using the tools developed to analyze the convex relaxation technique, also previous bounds for thresholding and SOMP recovery are tightened.
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies? Suppose we are given a vector f in a class F ⊆ ℝ^N, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|_(n) ≤ R·n^(-1/p), where R > 0 and p > 0. Suppose that we take measurements y_k = ⟨f, X_k⟩, k = 1, ..., K, where the X_k are N-dimensional Gaussian vectors with independent standard normal entries. Then for each f obeying the decay estimate above for some 0 < p < 1 and with overwhelming probability, our reconstruction f#, defined as the solution to the constraints y_k = ⟨f#, X_k⟩ with minimal ℓ1 norm, obeys ‖f − f#‖_ℓ2 ≤ C_p·R·(K/log N)^(-r), r = 1/p − 1/2. There is a sense in which this result is optimal; it is generally impossible to obtain a higher accuracy from any set of K measurements whatsoever. The methodology extends to various other random measurement ensembles; for example, we show that similar results hold if one observes a few randomly sampled Fourier coefficients of f. In fact, the results are quite general and require only two hypotheses on the measurement ensemble which are detailed
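The "simple linear program" referred to above is ℓ1 minimization subject to the linear measurement constraints. A minimal sketch, assuming Gaussian measurements and casting the problem into standard LP form for scipy's HiGHS solver; the problem sizes and sparse test signal are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(X, y):
    """min ||f||_1 subject to X f = y, written as an LP over (f, t) with |f_i| <= t_i."""
    K, N = X.shape
    c = np.concatenate([np.zeros(N), np.ones(N)])                 # minimize sum of t
    A_ub = np.block([[np.eye(N), -np.eye(N)],                     #  f - t <= 0
                     [-np.eye(N), -np.eye(N)]])                   # -f - t <= 0
    A_eq = np.hstack([X, np.zeros((K, N))])
    bounds = [(None, None)] * N + [(0, None)] * N
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N),
                  A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:N]

rng = np.random.default_rng(3)
N, K = 64, 24
f_true = np.zeros(N)
f_true[[5, 20, 50]] = [1.0, -2.0, 0.5]                            # 3-sparse signal
X = rng.standard_normal((K, N))                                   # Gaussian measurements
print(np.round(basis_pursuit(X, X @ f_true), 2)[[5, 20, 50]])
```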
Fuzzy Sets
Construction of Interval-Valued Fuzzy Relations With Application to the Generation of Fuzzy Edge Images In this paper, we present a new construction method for interval-valued fuzzy relations (interval-valued fuzzy images) from fuzzy relations (fuzzy images) by vicinity. This construction method is based on the concepts of triangular norm ($t$-norm) and triangular conorm ($t$-conorm). We analyze the effect of using different $t$-norms and $t$-conorms. Furthermore, we examine the influence of different sizes of the submatrix around each element of a fuzzy relation on the interval-valued fuzzy relation. Finally, we apply our construction method to image processing, and we compare the results of our approach with those obtained by means of other, i.e., fuzzy and nonfuzzy, techniques.
Karhunen-Loève approximation of random fields by generalized fast multipole methods KL approximation of a possibly instationary random field a(ω, x) ∈ L2(Ω, dP; L∞(D)) subject to prescribed mean field Ea(x) = ∫Ω a(ω, x) dP(ω) and covariance Va(x, x') = ∫Ω (a(ω, x) - Ea(x))(a(ω, x') - Ea(x')) dP(ω) in a polyhedral domain D ⊂ ℝ^d is analyzed. We show how for stationary covariances Va(x, x') = ga(|x - x'|) with ga(z) analytic outside of z = 0, an M-term approximate KL-expansion aM(ω, x) of a(ω, x) can be computed in log-linear complexity. The approach applies in arbitrary domains D and for nonseparable covariances Ca. It involves Galerkin approximation of the KL eigenvalue problem by discontinuous finite elements of degree p ≥ 0 on a quasiuniform, possibly unstructured mesh of width h in D, plus a generalized fast multipole accelerated Krylov-Eigensolver. The approximate KL-expansion aM(ω, x) of a(ω, x) has accuracy O(exp(-b M^(1/d))) if ga is analytic at z = 0 and accuracy O(M^(-k/d)) if ga is C^k at zero. It is obtained in O(M N (log N)^b) operations where N = O(h^(-d)).
Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this preexisting concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to minimizing the nuclear norm and illustrate our results with numerical examples.
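A core computational primitive behind nuclear norm minimization is soft-thresholding of singular values (the proximal operator of the nuclear norm). The sketch below shows only that primitive on a toy low-rank-plus-noise matrix; it is not the full recovery algorithm analyzed in the paper, and the threshold and matrix sizes are arbitrary.

```python
import numpy as np

def svt(M, tau):
    """Proximal operator of tau * ||.||_*: shrink the singular values of M toward zero."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# shrinking a noisy rank-2 matrix suppresses the small, noise-driven singular values
rng = np.random.default_rng(4)
L = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))   # rank-2 ground truth
noisy = L + 0.05 * rng.standard_normal((30, 20))
print(np.linalg.matrix_rank(svt(noisy, tau=1.0)))
```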
Is Gauss Quadrature Better than Clenshaw-Curtis? We compare the convergence behavior of Gauss quadrature with that of its younger brother, Clenshaw-Curtis. Seven-line MATLAB codes are presented that implement both methods, and experiments show that the supposed factor-of-2 advantage of Gauss quadrature is rarely realized. Theorems are given to explain this effect. First, following O'Hara and Smith in the 1960s, the phenomenon is explained as a consequence of aliasing of coefficients in Chebyshev expansions. Then another explanation is offered based on the interpretation of a quadrature formula as a rational approximation of $\log((z+1)/(z-1))$ in the complex plane. Gauss quadrature corresponds to Padé approximation at $z=\infty$. Clenshaw-Curtis quadrature corresponds to an approximation whose order of accuracy at $z=\infty$ is only half as high, but which is nevertheless equally accurate near $[-1,1]$.
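A small numerical comparison in the spirit of the abstract above, using numpy's Gauss-Legendre nodes and the exact integral of a Chebyshev interpolant as a stand-in for Clenshaw-Curtis quadrature; the test integrand and degrees are arbitrary choices.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x: np.exp(-x**2)                       # smooth test integrand on [-1, 1]
exact = 1.4936482656248540                        # reference value of its integral

for n in (4, 8, 16):
    # Gauss-Legendre rule with n nodes
    x, w = np.polynomial.legendre.leggauss(n)
    gauss = w @ f(x)
    # Clenshaw-Curtis-style estimate: integrate the degree-n Chebyshev interpolant
    p = C.Chebyshev.interpolate(f, n)
    antideriv = p.integ()
    cc = antideriv(1.0) - antideriv(-1.0)
    print(n, abs(gauss - exact), abs(cc - exact))
```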
Practical RDF schema reasoning with annotated semantic web data Semantic Web data with annotations is becoming available, being YAGO knowledge base a prominent example. In this paper we present an approach to perform the closure of large RDF Schema annotated semantic web data using standard database technology. In particular, we exploit several alternatives to address the problem of computing transitive closure with real fuzzy semantic data extracted from YAGO in the PostgreSQL database management system. We benchmark the several alternatives and compare to classical RDF Schema reasoning, providing the first implementation of annotated RDF schema in persistent storage.
New Type-2 Rule Ranking Indices for Designing Parsimonious Interval Type-2 Fuzzy Logic Systems In this paper, we propose two novel indices for type-2 fuzzy rule ranking to identify the most influential fuzzy rules in designing type-2 fuzzy logic systems, and name them as R-values and c-values of fuzzy rules separately. The R-values of type-2 fuzzy rules are obtained by applying QR decomposition in which there is no need to estimate a rank as required in the SVD-QR with column pivoting algorithm. The c-values of type-2 fuzzy rules are suggested to rank rules based on the effects of rule consequents. Experimental results on a signal recovery problem have shown that by using the proposed indices the most influential type-2 fuzzy rules can be effectively selected to construct parsimonious type-2 fuzzy models while the system performances are kept at a satisfied level.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.1
0.025
0.009091
0.007692
0.005
0.00025
0
0
0
0
0
0
0
0
Resequencing considerations in parallel downloads Several recent studies have proposed methods to accelerate the receipt of a file by downloading its parts from different servers in parallel. This paper formulates models for an approach based on receiving only one copy of each of the data packets in a file, while different packets may be obtained from different sources. This approach guarantees faster downloads with lower network use. However, out-of-order arrivals at the receiving side are unavoidable. We present methods to keep out-of-order arrivals low to ensure a more regulated flow of packets to the application. Recent papers indicate that out-of-order arrivals have many unfavorable consequences. A good indicator of the severity of out-of-order arrival is the resequencing-buffer occupancy. The paper focuses on the analysis of the resequencing-buffer occupancy distribution and on the analysis of the methods used to reduce the occupancy of the buffer.
Scalability and accuracy in a large-scale network emulator This paper presents ModelNet, a scalable Internet emulation environment that enables researchers to deploy unmodified software prototypes in a configurable Internet-like environment and subject them to faults and varying network conditions. Edge nodes running user-specified OS and application software are configured to route their packets through a set of ModelNet core nodes, which cooperate to subject the traffic to the bandwidth, congestion constraints, latency, and loss profile of a target network topology.This paper describes and evaluates the ModelNet architecture and its implementation, including novel techniques to balance emulation accuracy against scalability. The current ModelNet prototype is able to accurately subject thousands of instances of a distrbuted application to Internet-like conditions with gigabits of bisection bandwidth. Experiments with several large-scale distributed services demonstrate the generality and effectiveness of the infrastructure.
Traffic data repository at the WIDE project It becomes increasingly important for both network researchers and operators to know the trend of network traffic and to find anomaly in their network traffic. This paper describes an on-going effort within the WIDE project to collect a set of free tools to build a traffic data repository containing detailed information of our backbone traffic. Traffic traces are collected by tcpdump and, after removing privacy information, the traces are made open to the public. We review the issues on user privacy, and then, the tools used to build the WIDE traffic repository. We will report the current status and findings in the early stage of our IPv6 deployment.
Traffic Monitoring and Analysis, Second International Workshop, TMA 2010, Zurich, Switzerland, April 7, 2010, Proceedings
ENDE: An End-to-end Network Delay Emulator Tool for Multimedia Protocol Development Multimedia applications and protocols are constantly being developed to run over the Internet. A new protocol or application after being developed has to be tested on the real Internet or simulated on a testbed for debugging and performance evaluation. In this paper, we present a novel tool, ENDE, that can emulate end-to-end delays between two hosts without requiring access to the second host. The tool enables the user to test new multimedia protocols realistically on a single machine. In a delay-observing mode, ENDE can generate accurate traces of one-way delays between two hosts on the network. In a delay-impacting mode, ENDE can be used to simulate the functioning of a protocol or an application as if it were running on the network. We will show that ENDE allows accurate estimation of one-way transit times and hence can be used even when the forward and reverse paths are asymmetric between the two hosts. Experimental results are also presented to show that ENDE is fairly accurate in the delay-impacting mode.
QoE-based packet dropper controllers for multimedia streaming in WiMAX networks. The proliferation of broadband wireless facilities, together with the demand for multimedia applications, are creating a wireless multimedia era. In this scenario, the key requirement is the delivery of multimedia content with Quality of Service (QoS) and Quality of Experience (QoE) support for thousands of users (and access networks) in broadband in the wireless systems of the next generation. This paper sets out new QoE-aware packet controller mechanisms to keep video streaming applications at an acceptable level of quality in Worldwide Interoperability for Microwave Access (WiMAX) networks. In periods of congestion, intelligent packet dropper mechanisms for IEEE 802.16 systems are triggered to drop packets in accordance with their impact on user perception, intra-frame dependence, Group of Pictures (GoP) and available wireless resources in service classes. The simulation results show that the proposed solutions reduce the impact of multimedia flows on the user's experience and optimize wireless network resources in periods of congestion. The benefits of the proposed schemes were evaluated in a simulated WiMAX QoS/QoE environment, by using the following well-known QoE metrics: Peak Signal-to-Noise Ratio (PSNR), Video Quality Metric (VQM), Structural Similarity Index (SSIM) and Mean Opinion Score (MOS).
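Of the objective quality metrics listed above, PSNR is the simplest to reproduce. A minimal sketch for 8-bit luma frames follows; the frame contents and noise level are made up for illustration.

```python
import numpy as np

def psnr(reference, degraded, peak=255.0):
    """Peak signal-to-noise ratio between two 8-bit frames, in dB."""
    mse = np.mean((reference.astype(np.float64) - degraded.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

rng = np.random.default_rng(5)
frame = rng.integers(0, 256, size=(144, 176), dtype=np.uint8)          # QCIF-sized toy frame
impaired = np.clip(frame + rng.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
print(round(psnr(frame, impaired), 2))
```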
Visibility Of Individual Packet Losses In MPEG-2 Video The ability of a human to visually detect whether a packet has been lost during the transport of compressed video depends heavily on the location of the packet loss and the content of the video. In this paper, we explore when humans can visually detect the error caused by individual packet losses. Using the results of a subjective test based on 1080 packet losses in 72 minutes of video, we design a classifier that uses objective factors extracted from the video to predict the visibility of each error. Our classifier achieves over 93% accuracy.
Adaptation strategies for streaming SVC video This paper aims to determine the best rate adaptation strategy to maximize the received video quality when streaming SVC video over the Internet. Different bandwidth estimation techniques are implemented for different transport protocols, such as using the TFRC rate when available or calculating the packet transmission rate otherwise. It is observed that controlling the rate of packets dispatched to the transport queue to match the video extraction rate resulted in oscillatory behavior in DCCP CCID3, decreasing the received video quality. Experimental results show that video should be sent at the maximum available network rate rather than at the extraction rate, provided that receiver buffer does not overflow. When the network is over-provisioned, the packet dispatch rate may also be limited with the maximum extractable video rate, to decrease the retransmission traffic without affecting the received video quality.
Techniques for measuring quality of experience Quality of Experience (QoE) relates to how users perceive the quality of an application. To capture such a subjective measure, either by subjective tests or via objective tools, is an art on its own. Given the importance of measuring users’ satisfaction to service providers, research on QoE took flight in recent years. In this paper we present an overview of various techniques for measuring QoE, thereby mostly focusing on freely available tools and methodologies.
Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ ℂ^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes f(t) = Σ_{τ∈T} f(τ)δ(t − τ) obeying |T| ≤ C_M·(log N)^(-1)·|Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^(-M)), f can be reconstructed exactly as the solution to the ℓ1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T|·log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 − O(N^(-M)) would in general require a number of frequency samples at least proportional to |T|·log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f.
Block-sparse signals: uncertainty relations and efficient recovery We consider efficient methods for the recovery of block-sparse signals--i.e., sparse signals that have nonzero entries occurring in clusters--from an underdetermined system of linear equations. An uncertainty relation for block-sparse signals is derived, based on a block-coherence measure, which we introduce. We then show that a block-version of the orthogonal matching pursuit algorithm recovers block k-sparse signals in no more than k steps if the block-coherence is sufficiently small. The same condition on block-coherence is shown to guarantee successful recovery through a mixed l2/l1-optimization approach. This complements previous recovery results for the block-sparse case which relied on small block-restricted isometry constants. The significance of the results presented in this paper lies in the fact that making explicit use of block-sparsity can provably yield better reconstruction properties than treating the signal as being sparse in the conventional sense, thereby ignoring the additional structure in the problem.
The collapsing method of defuzzification for discretised interval type-2 fuzzy sets This paper proposes a new approach for defuzzification of interval type-2 fuzzy sets. The collapsing method converts an interval type-2 fuzzy set into a type-1 representative embedded set (RES), whose defuzzified values closely approximates that of the type-2 set. As a type-1 set, the RES can then be defuzzified straightforwardly. The novel representative embedded set approximation (RESA), to which the method is inextricably linked, is expounded, stated and proved within this paper. It is presented in two forms: Simple RESA: this approximation deals with the most simple interval FOU, in which a vertical slice is discretised into 2 points. Interval RESA: this approximation concerns the case in which a vertical slice is discretised into 2 or more points. The collapsing method (simple RESA version) was tested for accuracy and speed, with excellent results on both criteria. The collapsing method proved more accurate than the Karnik-Mendel iterative procedure (KMIP) for an asymmetric test set. For both a symmetric and an asymmetric test set, the collapsing method outperformed the KMIP in relation to speed.
The n-dimensional fuzzy sets and Zadeh fuzzy sets based on the finite valued fuzzy sets The connections among the n-dimensional fuzzy set, Zadeh fuzzy set and the finite-valued fuzzy set are established in this paper. The n-dimensional fuzzy set, a special L-fuzzy set, is first defined. It is pointed out that the n-dimensional fuzzy set is a generalization of the Zadeh fuzzy set, the interval-valued fuzzy set, the intuitionistic fuzzy set, the interval-valued intuitionistic fuzzy set and the three dimensional fuzzy set. Then, the definitions of cut set on n-dimensional fuzzy set and n-dimensional vector level cut set of Zadeh fuzzy set are presented. The cut set of the n-dimensional fuzzy set and n-dimensional vector level set of the Zadeh fuzzy set are both defined as n+1-valued fuzzy sets. It is shown that a cut set defined in this way has the same properties as a normal cut set of the Zadeh fuzzy set. Finally, by the use of these cut sets, decomposition and representation theorems of the n-dimensional fuzzy set and new decomposition and representation theorems of the Zadeh fuzzy set are constructed.
An Interval-Valued Intuitionistic Fuzzy Rough Set Model Given a widespread interest in rough sets as being applied to various tasks of data analysis it is not surprising at all that we have witnessed a wave of further generalizations and algorithmic enhancements of this original concept. This paper proposes an interval-valued intuitionistic fuzzy rough model by means of integrating the classical Pawlak rough set theory with the interval-valued intuitionistic fuzzy set theory. Firstly, some concepts and properties of interval-valued intuitionistic fuzzy set and interval-valued intuitionistic fuzzy relation are introduced. Secondly, a pair of lower and upper interval-valued intuitionistic fuzzy rough approximation operators induced from an interval-valued intuitionistic fuzzy relation is defined, and some properties of approximation operators are investigated in detail. Furthermore, by introducing cut sets of interval-valued intuitionistic fuzzy sets, classical representations of interval-valued intuitionistic fuzzy rough approximation operators are presented. Finally, the connections between special interval-valued intuitionistic fuzzy relations and interval-valued intuitionistic fuzzy rough approximation operators are constructed, and the relationships of this model and the others rough set models are also examined.
1.117442
0.13888
0.13888
0.13888
0.074123
0.002844
0.000284
0.000089
0.000025
0
0
0
0
0
Open-LTE: An Open LTE simulator for mobile video streaming Simulation is the optimal means to evaluate the booming research on how to enhance the end-to-end service reliability of mobile video streaming over LTE networks. However, to the best of our knowledge, all existing LTE network simulators provide simulations of relatively closed virtual networks, in which only meaningless tracing data can be simulated being delivered. Research on mobile video streaming has not yet been fully supported. In light of this, herein, the open LTE simulator Open-LTE is made to provide the simulation of a virtual LTE network with the ability to connect actual hosts over real wired links in real time. The transport and application layer related logics of video streaming can be deployed on remote hosts and will no longer be limited by the simulator framework. Open-LTE is thus compatible with experimental studies on most aspects of mobile video streaming. Open-LTE is simple to use by providing a centralized configuration file to set up the LTE channel fading scenarios and interconnect real traffic with the virtual LTE network. We demonstrate our work with QoE experiments on a live video streaming application.
Modeling and optimization of wireless local area network As wireless local area network technology is gaining popularity, performance analysis and optimization of it becomes more important. However, as compared to wired LAN, wireless channel is error-prone. Most of the existing work on the performance analysis of IEEE 802.11 distributed coordination function (DCF) assumes saturated traffic and ideal channel condition. In this paper, modeling of DCF is analyzed under a general traffic load and variable channel condition. A more realistic and comprehensive model is proposed to optimize the performance of DCF in both ideal and error-prone channels, and for both the basic scheme of DCF and DCF with four-way handshaking. Many factors, such as the number of contending nodes, the traffic load, contention window, packet overhead and channel condition, that affect the throughput and the delay of a wireless network have been incorporated. It is shown that under error-prone environment, a trade-off exists between the desire to reduce the ratio of overhead in the data packet by adopting a larger packet size, and the need to reduce the packet error rate by using a smaller packet length. Based on our analytical model, both the optimal packet size and the optimal minimum contention window are determined under various traffic loads and channel conditions. It is also observed that, in error-prone environments, optimal packet size has more significant improvement on the performance than optimal contention window. Our analytical model is validated via simulations using ns-2.
Simulating the Long Term Evolution (LTE) Downlink Physical Layer In this paper we present a comprehensive analysis of Long Term Evolution Advanced (LTE) downlink (DL) physical layer performance using a Multiple Input Multiple Output (MIMO) channel based on standard parameters. The work consists firstly in modeling the LTE physical downlink shared channel (PDSCH). The developed model is based on independent functional blocks in order to facilitate reproduction of the results of the signal processing techniques used in LTE and particularly to evaluate the physical layer downlink components. Thereafter, it was integrated into the simulator basic structure with an AWGN channel, including evaluation of the use of diversity and spatial multiplexing transmissions on downlink connections, and a multipath fading channel model. The simulation examples are illustrated with different digital modulations and MIMO schemes. BER and throughput results with the impact of multipath on transmission channel quality are also considered. These results show that the model implemented in Matlab faithfully reproduces the advantages introduced in the LTE system.
Performance comparison of a custom emulation-based test environment against a real-world LTE testbed Notwithstanding the value of Long Term Evolution (LTE) towards an improved user experience in next-generation networks, its associated high complexity is known to place computational and time burdens on testing tasks involving real-world platforms. Simulation is currently the tool most widely used to tackle this issue. LENA, for instance, is an open source simulator based on ns-3 that allows the design, evaluation, and validation of LTE networks. Despite modeling the main LTE elements and interfaces, one limitation of LENA is that it does not support the use of external traffic entities in conjunction with the simulation. In this paper, we describe how the ns-3 LENA LTE framework can be customized for use in an emulation-based test environment that allows a wider variety of real-world applications to be run over the simulated links. To validate our emulation results, we use as benchmark a testbed that differs from the aforementioned test environment in that the ns-3 server running the simulated network is replaced with a network made up of real-world platforms. Initial validation results, based on limited tests using an industry-standard VoIP test tool and the iperf throughput tool, demonstrate that ns-3 LTE models can deliver voice quality and latency as good as an experimental testbed using actual LTE equipment over a range of signal-to-noise ratios. Similar conclusions are also drawn for throughput, thus confirming the suitability of our emulation approach as a viable means to predict performance in real LTE networks. The good agreement of our experimental results is possible not only because the same functionality is implemented in both experiments but also due to the use of the same traffic generation tools in the simulated and real-world LTE networks, which is not possible in standard LENA simulation.
An open source product-oriented LTE network simulator based on ns-3 In this paper we present a new simulation module for ns-3 aimed at the simulation of LTE networks. This module has been designed with a product-oriented perspective in order to allow LTE equipment manufacturers to test RRM/SON algorithms in a simulation environment before they are deployed in the field. First, we describe the design of our simulation module, highlighting its novel aspects. Subsequently, we discuss the testing methodology that we adopted to validate its output. Finally, we present some experimental result to assess its performance in terms of execution time and memory usage.
Anaphora for everyone: pronominal anaphora resolution without a parser We present an algorithm for anaphora resolution which is a modified and extended version of that developed by (Lappin and Leass, 1994). In contrast to that work, our algorithm does not require in-depth, full, syntactic parsing of text. Instead, with minimal compromise in output quality, the modifications enable the resolution process to work from the output of a part of speech tagger, enriched only with annotations of grammatical function of lexical items in the input text stream. Evaluation of the results of our implementation demonstrates that accurate anaphora resolution can be realized within natural language processing frameworks which do not---or cannot---employ robust and reliable parsing components.
Counter braids: a novel counter architecture for per-flow measurement Fine-grained network measurement requires routers and switches to update large arrays of counters at very high link speed (e.g. 40 Gbps). A naive algorithm needs an infeasible amount of SRAM to store both the counters and a flow-to-counter association rule, so that arriving packets can update corresponding counters at link speed. This has made accurate per-flow measurement complex and expensive, and motivated approximate methods that detect and measure only the large flows. This paper revisits the problem of accurate per-flow measurement. We present a counter architecture, called Counter Braids, inspired by sparse random graph codes. In a nutshell, Counter Braids "compresses while counting". It solves the central problems (counter space and flow-to-counter association) of per-flow measurement by "braiding" a hierarchy of counters with random graphs. Braiding results in drastic space reduction by sharing counters among flows; and using random graphs generated on-the-fly with hash functions avoids the storage of flow-to-counter association. The Counter Braids architecture is optimal (albeit with a complex decoder) as it achieves the maximum compression rate asymptotically. For implementation, we present a low-complexity message passing decoding algorithm, which can recover flow sizes with essentially zero error. Evaluation on Internet traces demonstrates that almost all flow sizes are recovered exactly with only a few bits of counter space per flow.
MapReduce: simplified data processing on large clusters MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.
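A toy, single-process imitation of the map and reduce functions described above (word count). A real MapReduce runtime would shard the inputs, run these functions on many machines, and perform the shuffle over the network; the documents below are placeholders.

```python
from collections import defaultdict

def map_phase(document):
    # map: emit a (word, 1) pair for every word in the input split
    for word in document.split():
        yield word.lower(), 1

def reduce_phase(key, values):
    # reduce: sum all counts emitted for the same word
    return key, sum(values)

documents = ["the quick brown fox", "the lazy dog", "the fox"]
shuffled = defaultdict(list)                       # group intermediate pairs by key
for doc in documents:
    for key, value in map_phase(doc):
        shuffled[key].append(value)
print(dict(reduce_phase(k, v) for k, v in shuffled.items()))
```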
Galerkin Finite Element Approximations of Stochastic Elliptic Partial Differential Equations We describe and analyze two numerical methods for a linear elliptic problem with stochastic coefficients and homogeneous Dirichlet boundary conditions. Here the aim of the computations is to approximate statistical moments of the solution, and, in particular, we give a priori error estimates for the computation of the expected value of the solution. The first method generates independent identically distributed approximations of the solution by sampling the coefficients of the equation and using a standard Galerkin finite element variational formulation. The Monte Carlo method then uses these approximations to compute corresponding sample averages. The second method is based on a finite dimensional approximation of the stochastic coefficients, turning the original stochastic problem into a deterministic parametric elliptic problem. A Galerkin finite element method, of either the h- or p-version, then approximates the corresponding deterministic solution, yielding approximations of the desired statistics. We present a priori error estimates and include a comparison of the computational work required by each numerical approximation to achieve a given accuracy. This comparison suggests intuitive conditions for an optimal selection of the numerical approximation.
Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this preexisting concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to minimizing the nuclear norm and illustrate our results with numerical examples.
Sparse Reconstruction by Separable Approximation Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularizer. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard ℓ2-ℓ1 case, our framework yields efficient solution techniques for other regularizers, such as an ℓ∞ norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard ℓ2-ℓ1 problem, as well as being efficient on problems with other separable regularization terms.
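For the standard ℓ2-ℓ1 case discussed above, each iteration reduces to a gradient step on the quadratic term followed by soft-thresholding (the prox of the ℓ1 regularizer). The sketch below is a plain ISTA loop with a fixed step size, not the SpaRSA scheme itself; the sizes and regularization weight are illustrative.

```python
import numpy as np

def ista(A, y, lam, n_iters=300):
    """Minimize 0.5 * ||A x - y||_2^2 + lam * ||x||_1 by proximal gradient steps."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L, with L the gradient's Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)                    # gradient of the smooth quadratic term
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-threshold (prox of l1)
    return x

rng = np.random.default_rng(6)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[7, 33, 71]] = [2.0, -1.5, 1.0]
print(np.round(ista(A, A @ x_true, lam=0.1), 2)[[7, 33, 71]])
```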
Real-Time Convex Optimization in Signal Processing This article shows the potential for convex optimization methods to be much more widely used in signal processing. In particular, automatic code generation makes it easier to create convex optimization solvers that are made much faster by being designed for a specific problem family. The disciplined convex programming framework that has been shown useful in transforming problems to a standard form...
Selecting the advanced manufacturing technology using fuzzy multiple attributes group decision making with multiple fuzzy information Selection of advanced manufacturing technology in manufacturing system management is critical to determining manufacturing system competitiveness. This research develops a fuzzy multiple attribute decision-making approach, applied in a group decision-making setting, to improve the advanced manufacturing technology selection process. Since numerous attributes must be considered in evaluating manufacturing technology suitability, and most information available at this stage is subjective, imprecise and vague, fuzzy set theory provides a mathematical framework for modeling this imprecision and vagueness. In the proposed approach, a new fusion method for fuzzy information is developed to manage information assessed on different linguistic scales (multi-granularity linguistic term sets) and numerical scales. The flexible manufacturing system adopted in the Taiwanese bicycle industry is employed in this study to demonstrate the computational process of the proposed method. Finally, a sensitivity analysis is performed to examine the robustness of the solution.
Sparse Matrix Recovery from Random Samples via 2D Orthogonal Matching Pursuit Since its emergence, compressive sensing (CS) has attracted many researchers' attention. In the CS, recovery algorithms play an important role. Basis pursuit (BP) and matching pursuit (MP) are two major classes of CS recovery algorithms. However, both BP and MP are originally designed for one-dimensional (1D) sparse signal recovery, while many practical signals are two-dimensional (2D), e.g. image, video, etc. To recover 2D sparse signals effectively, this paper develops the 2D orthogonal MP (2D-OMP) algorithm, which shares the advantages of low complexity and good performance. The 2D-OMP algorithm can be widely used in those scenarios involving 2D sparse signal processing, e.g. image/video compression, compressive imaging, etc.
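For reference, a sketch of the 1D OMP loop that 2D-OMP generalizes (the 2D variant applies the same greedy selection with a pair of dictionaries acting on the rows and columns of the signal; that extension is not shown here). Dictionary size, sparsity and seed are hypothetical.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms of A that explain y."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))   # best-correlated atom
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # re-fit on the support
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
m, n, k = 50, 200, 5
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)               # unit-norm columns
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

x_hat = omp(A, y, k)
print("max abs error:", np.abs(x_hat - x_true).max())
```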
1.1
0.1
0.1
0.1
0.05
0
0
0
0
0
0
0
0
0
Compressive wireless sensing Compressive sampling is an emerging theory that is based on the fact that a relatively small number of random projections of a signal can contain most of its salient information. In this paper, we introduce the concept of Compressive Wireless Sensing for sensor networks in which a fusion center retrieves signal field information from an ensemble of spatially distributed sensor nodes. Energy and bandwidth are scarce resources in sensor networks and the relevant metrics of interest in our context are 1) the latency involved in information retrieval; and 2) the associated power-distortion trade-off. It is generally recognized that given sufficient prior knowledge about the sensed data (e.g., statistical characterization, homogeneity, etc.), there exist schemes that have very favorable power-distortion-latency trade-offs. We propose a distributed matched source-channel communication scheme, based in part on recent results in compressive sampling theory, for estimation of sensed data at the fusion center and analyze, as a function of the number of sensor nodes, the trade-offs between power, distortion and latency. Compressive wireless sensing is a universal scheme in the sense that it requires no prior knowledge about the sensed data. This universality, however, comes at the cost of optimality (in terms of a less favorable power-distortion-latency trade-off) and we quantify this cost relative to the case when sufficient prior information about the sensed data is assumed.
RIDA: a robust information-driven data compression architecture for irregular wireless sensor networks In this paper, we propose and evaluate RIDA, a novel information-driven architecture for distributed data compression in a sensor network, allowing it to conserve energy and bandwidth and potentially enabling high-rate data sampling. The key idea is to determine the data correlation among a group of sensors based on the value of the data itself to significantly improve compression. Hence, this approach moves beyond traditional data compression schemes which rely only on spatial and temporal data correlation. A logical mapping, which assigns indices to nodes based on the data content, enables simple implementation, on nodes, of data transformation without any other information. The logical mapping approach also adapts particularly well to irregular sensor network topologies. We evaluate our architecture with both Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) on publicly available real-world data sets. Our experiments on both simulation and real data show that 30% of energy and 80-95% of the bandwidth can be saved for typical multi-hop data networks. Moreover, the original data can be retrieved after decompression with a low error of about 3%. Furthermore, we also propose a mechanism to detect and classify missing or faulty nodes, showing accuracy and recall of 95% when half of the nodes in the network are missing or faulty.
Practical data compression in wireless sensor networks: A survey Power consumption is a critical problem affecting the lifetime of wireless sensor networks. A number of techniques have been proposed to solve this issue, such as energy-efficient medium access control or routing protocols. Among those proposed techniques, the data compression scheme is one that can be used to reduce transmitted data over wireless channels. This technique leads to a reduction in the required inter-node communication, which is the main power consumer in wireless sensor networks. In this article, a comprehensive review of existing data compression approaches in wireless sensor networks is provided. First, suitable sets of criteria are defined to classify existing techniques as well as to determine what practical data compression in wireless sensor networks should be. Next, the details of each classified compression category are described. Finally, their performance, open issues, limitations and suitable applications are analyzed and compared based on the criteria of practical data compression in wireless sensor networks.
Multiresolution Spatial and Temporal Coding in a Wireless Sensor Network for Long-Term Monitoring Applications In many WSN (wireless sensor network) applications, such as [1], [2], [3], the targets are to provide long-term monitoring of environments. In such applications, energy is a primary concern because sensor nodes have to regularly report data to the sink and need to continuously work for a very long time so that users may periodically request a rough overview of the monitored environment. On the other hand, users may occasionally query more in-depth data of certain areas to analyze abnormal events. These requirements motivate us to propose a multiresolution compression and query (MRCQ) framework to support in-network data compression and data storage in WSNs from both space and time domains. Our MRCQ framework can organize sensor nodes hierarchically and establish multiresolution summaries of sensing data inside the network, through spatial and temporal compressions. In the space domain, only lower resolution summaries are sent to the sink; the other higher resolution summaries are stored in the network and can be obtained via queries. In the time domain, historical data stored in sensor nodes exhibit a finer resolution for more recent data, and a coarser resolution for older data. Our methods consider the hardware limitations of sensor nodes. So, the result is expected to save sensors' energy significantly, and thus, can support long-term monitoring WSN applications. A prototyping system is developed to verify its feasibility. Simulation results also show the efficiency of MRCQ compared to existing work.
Reduced complexity angle-Doppler-range estimation for MIMO radar that employs compressive sensing The authors recently proposed a MIMO radar system that is implemented by a small wireless network. By applying compressive sensing (CS) at the receive nodes, the MIMO radar super-resolution can be achieved with far fewer observations than conventional approaches. This previous work considered the estimation of direction of arrival and Doppler. Since the targets are sparse in the angle-velocity space, target information can be extracted by solving an ℓ1 minimization problem. In this paper, the range information is exploited by introducing step frequency to MIMO radar with CS. The proposed approach is able to achieve high range resolution and also improve the ambiguous velocity. However, joint angle-Doppler-range estimation requires discretization of the angle-Doppler-range space which causes a sharp rise in the computational burden of the ℓ1 minimization problem. To maintain an acceptable complexity, a technique is proposed to successively estimate angle, Doppler and range in a decoupled fashion. The proposed approach can significantly reduce the complexity without sacrificing performance.
Compressed Sensing for Networked Data Imagine a system with thousands or millions of independent components, all capable of generating and communicating data. A man-made system of this complexity was unthinkable a few decades ago, but today it is a reality - computers, cell phones, sensors, and actuators are all linked to the Internet, and every wired or wireless device is capable of generating and disseminating prodigious volumes of data. This system is not a single centrally-controlled device, rather it is an ever-growing patchwork of autonomous systems and components, perhaps more organic in nature than any human artifact that has come before. And we struggle to manage and understand this creation, which in many ways has taken on a life of its own. Indeed, several international conferences are dedicated to the scientific study of emergent Internet phenomena. This article considers a particularly salient aspect of this struggle that revolves around large- scale distributed sources of data and their storage, transmission, and retrieval. The task of transmitting information from one point to another is a common and well-understood exercise. But the problem of efficiently transmitting or sharing information from and among a vast number of distributed nodes remains a great challenge, primarily because we do not yet have well developed theories and tools for distributed signal processing, communications, and information theory in large-scale networked systems. The problem is illustrated by a simple example. Consider a network of n nodes, each having a piece of information or data xj, j = 1,...,n. These data could be files to be shared, or simply scalar values corresponding to node attributes or sensor measurements. Let us assume that each xj is a scalar quantity for the sake of this illustration. Collectively these data x = (x1,...,xn)T, arranged in a vector, are called networked data to emphasize both the distributed nature of the data and the fact that they may be shared over the underlying communications infrastructure of the network. The networked data vector may be very large; n may be a thousand or a million or more.
An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems. We propose a new fast algorithm for solving one of the standard approaches to ill-posed linear inverse problems (IPLIP), where a (possibly nonsmooth) regularizer is minimized under the constraint that the solution explains the observations sufficiently well. Although the regularizer and constraint are usually convex, several particular features of these problems (huge dimensionality, nonsmoothness) preclude the use of off-the-shelf optimization tools and have stimulated a considerable amount of research. In this paper, we propose a new efficient algorithm to handle one class of constrained problems (often known as basis pursuit denoising) tailored to image recovery applications. The proposed algorithm, which belongs to the family of augmented Lagrangian methods, can be used to deal with a variety of imaging IPLIP, including deconvolution and reconstruction from compressive observations (such as MRI), using either total-variation or wavelet-based (or, more generally, frame-based) regularization. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence sufficient conditions are known; we show that these conditions are satisfied by the proposed algorithm. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is a strong contender for the state-of-the-art.
Sampling Moments and Reconstructing Signals of Finite Rate of Innovation: Shannon Meets Strang–Fix Consider the problem of sampling signals which are not bandlimited, but still have a finite number of degrees of freedom per unit of time, such as, for example, nonuniform splines or piecewise polynomials, and call the number of degrees of freedom per unit of time the rate of innovation. Classical sampling theory does not enable a perfect reconstruction of such signals since they are not bandlimited. Recently, it was shown that, by using an adequate sampling kernel and a sampling rate greater than or equal to the rate of innovation, it is possible to reconstruct such signals uniquely. These sampling schemes, however, use kernels with infinite support, and this leads to complex and potentially unstable reconstruction algorithms. In this paper, we show that many signals with a finite rate of innovation can be sampled and perfectly reconstructed using physically realizable kernels of compact support and a local reconstruction algorithm. The class of kernels that we can use is very rich and includes functions satisfying Strang-Fix conditions, exponential splines and functions with rational Fourier transform. This last class of kernels is quite general and includes, for instance, any linear electric circuit. We, thus, show with an example how to estimate a signal of finite rate of innovation at the output of an RC circuit. The case of noisy measurements is also analyzed, and we present a novel algorithm that reduces the effect of noise by oversampling.
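The reconstruction step behind FRI sampling is an annihilating-filter (Prony-type) procedure; the sketch below recovers K Diracs of a τ-periodic stream from 2K+1 of its Fourier coefficients in the noiseless case. The compact-support kernels and the denoising-by-oversampling algorithm discussed in the paper are outside this sketch, and the signal parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
tau, K = 1.0, 3                                   # period and number of Diracs
t_true = np.sort(rng.uniform(0, tau, K))
a_true = rng.uniform(0.5, 2.0, K)

# 2K+1 Fourier coefficients X[m], m = -K..K, of the periodic Dirac stream
m = np.arange(-K, K + 1)
X = (a_true * np.exp(-2j * np.pi * np.outer(m, t_true) / tau)).sum(axis=1) / tau

# Annihilating filter h (length K+1): sum_l h[l] X[m-l] = 0 for every valid m
A = np.array([X[i - np.arange(K + 1)] for i in range(K, 2 * K + 1)])
_, _, Vh = np.linalg.svd(A)
h = Vh[-1].conj()                                 # null vector of the Toeplitz system

# Roots of the filter give u_k = exp(-2j*pi*t_k/tau), hence the locations
u = np.roots(h)
t_est = np.sort(np.mod(-np.angle(u) * tau / (2 * np.pi), tau))

# Amplitudes from a Vandermonde least-squares fit to the Fourier coefficients
V = np.exp(-2j * np.pi * np.outer(m, t_est) / tau) / tau
a_est = np.real(np.linalg.lstsq(V, X, rcond=None)[0])

print("locations :", np.round(t_true, 4), "->", np.round(t_est, 4))
print("amplitudes:", np.round(a_true, 4), "->", np.round(a_est, 4))
```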
Mixed-signal parallel compressed sensing and reception for cognitive radio A parallel structure to do spectrum sensing in cognitive radio (CR) at sub-Nyquist rate is proposed. The structure is based on compressed sensing (CS) that exploits the sparsity of frequency utilization. Specifically, the received analog signal is segmented or time-windowed and CS is applied to each segment independently using an analog implementation of the inner product, then all the samples are processed together to reconstruct the signal. Applying the CS framework to the analog signal directly relaxes the requirements in wideband RF receiver front-ends. Moreover, the parallel structure provides a design flexibility and scalability on the sensing rate and system complexity. This paper also provides a joint reconstruction algorithm that optimally detects the information symbols from the sub-Nyquist analog projection coefficients. Simulations showing the efficiency of the proposed approach are also presented.
Compressed sensing performance bounds under Poisson noise This paper describes performance bounds for compressed sensing (CS) where the underlying sparse or compressible (sparsely approximable) signal is a vector of nonnegative intensities whose measurements are corrupted by Poisson noise. In this setting, standard CS techniques cannot be applied directly for several reasons. First, the usual signal-independent and/or bounded noise models do not apply to Poisson noise, which is nonadditive and signal-dependent. Second, the CS matrices typically considered are not feasible in real optical systems because they do not adhere to important constraints, such as nonnegativity and photon flux preservation. Third, the typical l2 - l1 minimization leads to overfitting in the high-intensity regions and oversmoothing in the low-intensity areas. In this paper, we describe how a feasible positivity-and flux-preserving sensing matrix can be constructed, and then analyze the performance of a CS reconstruction approach for Poisson data that minimizes an objective function consisting of a negative Poisson log likelihood term and a penalty term which measures signal sparsity. We show that, as the overall intensity of the underlying signal increases, an upper bound on the reconstruction error decays at an appropriate rate (depending on the compressibility of the signal), but that for a fixed signal intensity, the error bound actually grows with the number of measurements or sensors. This surprising fact is both proved theoretically and justified based on physical intuition.
Practical Implementation of Stochastic Parameterized Model Order Reduction via Hermite Polynomial Chaos This paper describes the stochastic model order reduction algorithm via stochastic Hermite polynomials from the practical implementation perspective. Comparing with existing work on stochastic interconnect analysis and parameterized model order reduction, we generalized the input variation representation using polynomial chaos (PC) to allow for accurate modeling of non-Gaussian input variations. We also explore the implicit system representation using sub-matrices and improved the efficiency for solving the linear equations utilizing block matrix structure of the augmented system. Experiments show that our algorithm matches with Monte Carlo methods very well while keeping the algorithm effective. And the PC representation of non-Gaussian variables gains more accuracy than Taylor representation used in previous work (Wang et al., 2004).
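The input-representation idea mentioned above (polynomial chaos for a non-Gaussian variable) can be shown in a few lines: project a lognormal input onto probabilists' Hermite polynomials with Gauss quadrature. Only this representation step is illustrated, not the model order reduction itself, and the lognormal input is an assumption made for the example.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

# Hermite polynomial chaos of a lognormal input Y = exp(xi), xi ~ N(0,1):
#   Y = sum_k c_k He_k(xi),  c_k = E[Y He_k(xi)] / k!   (since E[He_k(xi)^2] = k!)
order = 6
nodes, weights = He.hermegauss(40)            # quadrature rule for weight exp(-x^2/2)
weights = weights / np.sqrt(2 * np.pi)        # renormalize to the standard normal density

c = np.array([
    np.sum(weights * np.exp(nodes) * He.hermeval(nodes, np.eye(order + 1)[k])) / factorial(k)
    for k in range(order + 1)
])
exact = np.array([np.exp(0.5) / factorial(k) for k in range(order + 1)])
print("PC coefficients :", np.round(c, 4))
print("closed form     :", np.round(exact, 4))
```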
Fuzzy logic in control systems: fuzzy logic controller. I.
Fuzzy sets and their applications to artificial intelligence
Granular Association Rules for Multiple Taxonomies: A Mass Assignment Approach The use of hierarchical taxonomies to organise information (or sets of objects) is a common approach for the semantic web and elsewhere, and is based on progressively finer granulations of objects. In many cases, seemingly crisp granulation disguises the fact that categories are based on loosely defined concepts that are better modelled by allowing graded membership. A related problem arises when different taxonomies are used, with different structures, as the integration process may also lead to fuzzy categories. Care is needed when information systems use fuzzy sets to model graded membership in categories - the fuzzy sets are not disjunctive possibility distributions, but must be interpreted conjunctively. We clarify this distinction and show how an extended mass assignment framework can be used to extract relations between fuzzy categories. These relations are association rules and are useful when integrating multiple information sources categorised according to different hierarchies. Our association rules do not suffer from problems associated with use of fuzzy cardinalities. Experimental results on discovering association rules in film databases and terrorism incident databases are demonstrated.
1.009108
0.012568
0.009877
0.007917
0.007407
0.002966
0.00127
0.000454
0.000115
0.000006
0
0
0
0
Residual Minimizing Model Interpolation for Parameterized Nonlinear Dynamical Systems. We present a method for approximating the solution of a parameterized, nonlinear dynamical system using an affine combination of solutions computed at other points in the input parameter space. The coefficients of the affine combination are computed with a nonlinear least squares procedure that minimizes the residual of the governing equations. The approximation properties of this residual minimizing scheme are comparable to existing reduced basis and POD-Galerkin model reduction methods, but its implementation requires only independent evaluations of the nonlinear forcing function. It is particularly appropriate when one wishes to approximate the states at a few points in time without time marching from the initial conditions. We prove some interesting characteristics of the scheme, including an interpolatory property, and we present heuristics for mitigating the effects of the ill-conditioning and reducing the overall cost of the method. We apply the method to representative numerical examples from kinetics (a three-state system with one parameter controlling the stiffness) and conductive heat transfer (a nonlinear parabolic PDE with a random field model for the thermal conductivity).
Model Reduction With MapReduce-enabled Tall and Skinny Singular Value Decomposition. We present a method for computing reduced-order models of parameterized partial differential equation solutions. The key analytical tool is the singular value expansion of the parameterized solution, which we approximate with a singular value decomposition of a parameter snapshot matrix. To evaluate the reduced-order model at a new parameter, we interpolate a subset of the right singular vectors to generate the reduced-order model's coefficients. We employ a novel method to select this subset that uses the parameter gradient of the right singular vectors to split the terms in the expansion, yielding a mean prediction and a prediction covariance, similar to a Gaussian process approximation. The covariance serves as a confidence measure for the reduced-order model. We demonstrate the efficacy of the reduced-order model using a parameter study of heat transfer in random media. The high-fidelity simulations produce more than 4TB of data; we compute the singular value decomposition and evaluate the reduced-order model using scalable MapReduce/Hadoop implementations. We compare the accuracy of our method with a scalar response surface on a set of temperature profile measurements and find that our model better captures sharp, local features in the parameter space.
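A toy version of the snapshot-SVD reduced-order model: build a snapshot matrix over training parameters, truncate its SVD, and interpolate the retained right singular vectors at a new parameter. The gradient-based term splitting (mean plus covariance) and the MapReduce/TSQR implementation from the paper are omitted, and the parameterized "solution" below is a cheap stand-in function rather than a PDE solve.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)                       # spatial grid
p_train = np.linspace(0.5, 2.0, 15)              # training parameter values

def solve(p):
    """Hypothetical parameterized solution standing in for an expensive simulation."""
    return np.exp(-p * x) * np.sin(2 * np.pi * x) + 0.1 * p * x

S = np.column_stack([solve(p) for p in p_train])     # snapshot matrix (space x parameters)
U, s, Vt = np.linalg.svd(S, full_matrices=False)
r = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.9999) + 1   # truncation rank

def rom(p_new):
    # interpolate each retained right singular vector to the new parameter value
    v_new = np.array([np.interp(p_new, p_train, Vt[i]) for i in range(r)])
    return U[:, :r] @ (s[:r] * v_new)

p_test = 1.234
err = np.linalg.norm(rom(p_test) - solve(p_test)) / np.linalg.norm(solve(p_test))
print(f"rank {r} ROM relative error at p = {p_test}: {err:.2e}")
```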
Interpolatory Projection Methods for Parameterized Model Reduction We provide a unifying projection-based framework for structure-preserving interpolatory model reduction of parameterized linear dynamical systems, i.e., systems having a structured dependence on parameters that we wish to retain in the reduced-order model. The parameter dependence may be linear or nonlinear and is retained in the reduced-order model. Moreover, we are able to give conditions under which the gradient and Hessian of the system response with respect to the system parameters is matched in the reduced-order model. We provide a systematic approach built on established interpolatory $\mathcal{H}_2$ optimal model reduction methods that will produce parameterized reduced-order models having high fidelity throughout a parameter range of interest. For single input/single output systems with parameters in the input/output maps, we provide reduced-order models that are optimal with respect to an $\mathcal{H}_2\otimes\mathcal{L}_2$ joint error measure. The capabilities of these approaches are illustrated by several numerical examples from technical applications.
Parameter and State Model Reduction for Large-Scale Statistical Inverse Problems A greedy algorithm for the construction of a reduced model with reduction in both parameter and state is developed for an efficient solution of statistical inverse problems governed by partial differential equations with distributed parameters. Large-scale models are too costly to evaluate repeatedly, as is required in the statistical setting. Furthermore, these models often have high-dimensional parametric input spaces, which compounds the difficulty of effectively exploring the uncertainty space. We simultaneously address both challenges by constructing a projection-based reduced model that accepts low-dimensional parameter inputs and whose model evaluations are inexpensive. The associated parameter and state bases are obtained through a greedy procedure that targets the governing equations, model outputs, and prior information. The methodology and results are presented for groundwater inverse problems in one and two dimensions.
Tensor-Train Decomposition A simple nonrecursive form of the tensor decomposition in $d$ dimensions is presented. It does not inherently suffer from the curse of dimensionality, it has asymptotically the same number of parameters as the canonical decomposition, but it is stable and its computation is based on low-rank approximation of auxiliary unfolding matrices. The new form gives a clear and convenient way to implement all basic operations efficiently. A fast rounding procedure is presented, as well as basic linear algebra operations. Examples showing the benefits of the decomposition are given, and the efficiency is demonstrated by the computation of the smallest eigenvalue of a 19-dimensional operator.
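A compact TT-SVD sketch of the decomposition described above: sequential truncated SVDs of unfoldings produce the train of 3-way cores, and contracting the cores recovers the tensor. The test tensor is a hypothetical separable function chosen so that the TT ranks stay small; the rounding procedure and the linear-algebra operations in TT format are not shown.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """TT-SVD: sequential truncated SVDs of unfoldings yield the train of 3-way cores."""
    shape = tensor.shape
    d = len(shape)
    cores, r_prev = [], 1
    C = tensor.reshape(shape[0], -1)
    for k in range(d - 1):
        C = C.reshape(r_prev * shape[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))   # rank of this unfolding (up to eps)
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        C = s[:r, None] * Vt[:r]                  # carry the remainder to the next step
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

# a low-TT-rank test tensor: T[i, j, k] = sin(i) + cos(j) * k  (separable structure)
i, j, k = np.meshgrid(np.arange(8), np.arange(9), np.arange(10), indexing="ij")
T = np.sin(i) + np.cos(j) * k

cores = tt_svd(T)
print("TT ranks:", [c.shape[0] for c in cores[1:]])
print("reconstruction error:", np.linalg.norm(tt_reconstruct(cores) - T) / np.linalg.norm(T))
```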
A polynomial chaos approach to stochastic variational inequalities. We consider stochastic elliptic variational inequalities of the second kind involving a bilinear form with stochastic diffusion coefficient. We prove existence and uniqueness of weak solutions, propose a stochastic Galerkin approximation of an equivalent parametric reformulation, and show equivalence to a related collocation method. Numerical experiments illustrate the efficiency of our approach and suggest similar error estimates as for linear elliptic problems.
Multilinear Analysis of Image Ensembles: TensorFaces Natural images are the composite consequence of multiple factors related to scene structure, illumination, and imaging. Multilinear algebra, the algebra of higher-order tensors, offers a potent mathematical framework for analyzing the multifactor structure of image ensembles and for addressing the difficult problem of disentangling the constituent factors or modes. Our multilinear modeling technique employs a tensor extension of the conventional matrix singular value decomposition (SVD), known as the N-mode SVD. As a concrete example, we consider the multilinear analysis of ensembles of facial images that combine several modes, including different facial geometries (people), expressions, head poses, and lighting conditions. Our resulting "TensorFaces" representation has several advantages over conventional eigenfaces. More generally, multilinear analysis shows promise as a unifying framework for a variety of computer vision problems.
A multiparameter moment-matching model-reduction approach for generating geometrically parameterized interconnect performance models In this paper, we describe an approach for generating accurate geometrically parameterized integrated circuit interconnect models that are efficient enough for use in interconnect synthesis. The model-generation approach presented is automatic, and is based on a multiparameter moment matching model-reduction algorithm. A moment-matching theorem proof for the algorithm is derived, as well as a complexity analysis for the model-order growth. The effectiveness of the technique is tested using a capacitance extraction example, where the plate spacing is considered as the geometric parameter, and a multiline bus example, where both wire spacing and wire width are considered as geometric parameters. Experimental results demonstrate that the generated models accurately predict capacitance values for the capacitor example, and both delay and cross-talk effects over a reasonably wide range of spacing and width variation for the multiline bus example.
A least-squares approximation of partial differential equations with high-dimensional random inputs Uncertainty quantification schemes based on stochastic Galerkin projections, with global or local basis functions, and also stochastic collocation methods in their conventional form, suffer from the so-called curse of dimensionality: the associated computational cost grows exponentially as a function of the number of random variables defining the underlying probability space of the problem. In this paper, to overcome the curse of dimensionality, a low-rank separated approximation of the solution of a stochastic partial differential equation (SPDE) with high-dimensional random input data is obtained using an alternating least-squares (ALS) scheme. It will be shown that, in theory, the computational cost of the proposed algorithm grows linearly with respect to the dimension of the underlying probability space of the system. For the case of an elliptic SPDE, an a priori error analysis of the algorithm is derived. Finally, different aspects of the proposed methodology are explored through its application to some numerical experiments.
A capacitance solver for incremental variation-aware extraction Lithographic limitations and manufacturing uncertainties are resulting in fabricated shapes on wafer that are topologically equivalent, but geometrically different from the corresponding drawn shapes. While first-order sensitivity information can measure the change in pattern parasitics when the shape variations are small, there is still a need for a high-order algorithm that can extract parasitic variations incrementally in the presence of a large number of simultaneous shape variations. This paper proposes such an algorithm based on the well-known method of floating random walk (FRW). Specifically, we formalize the notion of random path sharing between several conductors undergoing shape perturbations and use it as a basis of a fast capacitance sensitivity extraction algorithm and a fast incremental variational capacitance extraction algorithm. The efficiency of these algorithms is further improved with a novel FRW method for dealing with layered media. Our numerical examples show a 10X speed up with respect to the boundary-element method adjoint or finite-difference sensitivity extraction, and more than 560X speed up with respect to a non-incremental FRW method for a high-order variational extraction.
Learning with dynamic group sparsity This paper investigates a new learning formulation called dynamic group sparsity. It is a natural extension of the standard sparsity concept in compressive sensing, and is motivated by the observation that in some practical sparse data the nonzero coefficients are often not random but tend to be clustered. Intuitively, better results can be achieved in these cases by reasonably utilizing both clustering and sparsity priors. Motivated by this idea, we have developed a new greedy sparse recovery algorithm, which prunes data residues in the iterative process according to both sparsity and group clustering priors rather than only sparsity as in previous methods. The proposed algorithm can recover stably sparse data with clustering trends using far fewer measurements and computations than current state-of-the-art algorithms with provable guarantees. Moreover, our algorithm can adaptively learn the dynamic group structure and the sparsity number if they are not available in the practical applications. We have applied the algorithm to sparse recovery and background subtraction in videos. Numerous experiments with improved performance over previous methods further validate our theoretical proofs and the effectiveness of the proposed algorithm.
Hesitant fuzzy entropy and cross-entropy and their use in multiattribute decision-making We introduce the concepts of entropy and cross-entropy for hesitant fuzzy information, and discuss their desirable properties. Several measure formulas are further developed, and the relationships among the proposed entropy, cross-entropy, and similarity measures are analyzed, from which we can find that three measures are interchangeable under certain conditions. Then we develop two multiattribute decision-making methods in which the attribute values are given in the form of hesitant fuzzy sets reflecting humans' hesitant thinking comprehensively. In one method, the weight vector is determined by the hesitant fuzzy entropy measure, and the optimal alternative is obtained by comparing the hesitant fuzzy cross-entropies between the alternatives and the ideal solutions; in another method, the weight vector is derived from the maximizing deviation method and the optimal alternative is obtained by using the TOPSIS method. An actual example is provided to compare our methods with the existing ones.
A continuous fuzzy Petri net tool for intelligent process monitoring and control This paper outlines the structure and implementational features of a new methodology for intelligent process control called continuous fuzzy Petri net (CFPN). This methodology integrates Petri nets, fuzzy logic, and real-time expert systems to produce a powerful tool for real-time process control. This tool has been applied to the monitoring of an oil refinery processing unit. The developed system can relieve the operator from monitoring sensor data information and allow him to concentrate on the higher level interpretation of the occurrence of events. Based on an object-oriented programming style, the paper discusses how the CFPN is constructed to provide a software support environment to aid process control engineering activities such as: system modeling, operational analysis, process monitoring, and control.
Efficient Decision-Making Scheme Based on LIOWAD. A new decision-making method, the linguistic induced ordered weighted averaging distance (LIOWAD) operator, is presented by using induced aggregation operators and linguistic information in the Hamming distance. This aggregation operator provides a parameterized family of linguistic aggregation operators that includes the maximum distance, the minimum distance, the linguistic normalized Hamming distance, the linguistic weighted Hamming distance and the linguistic ordered weighted averaging distance, among others. Special attention is given to the analysis of different particular types of LIOWAD operators. The paper ends with an application of the new approach to a decision-making problem about the selection of investments under a linguistic environment.
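A purely numeric sketch of the distance-aggregation family behind LIOWAD: an ordered weighted averaging distance whose special weight vectors recover the maximum, minimum and normalized Hamming distances named above. The induced ordering and the linguistic labels of the full operator are omitted, and the investment ratings and weights are hypothetical.

```python
import numpy as np

def owad(x, y, w):
    """Ordered weighted averaging distance: the weights act on the individual
    distances |x_i - y_i| sorted in descending order, not on fixed positions."""
    d = np.sort(np.abs(np.asarray(x, float) - np.asarray(y, float)))[::-1]
    return float(np.dot(w, d))

# hypothetical ratings of one investment vs. an ideal alternative on 4 criteria
x_ideal = np.array([9.0, 8.0, 9.0, 7.0])
x_alt   = np.array([7.0, 8.5, 6.0, 7.5])
w       = np.array([0.1, 0.2, 0.3, 0.4])          # OWA weight vector (sums to 1)

print("max distance      :", owad(x_ideal, x_alt, [1, 0, 0, 0]))
print("min distance      :", owad(x_ideal, x_alt, [0, 0, 0, 1]))
print("normalized Hamming:", owad(x_ideal, x_alt, [0.25] * 4))
print("OWAD              :", owad(x_ideal, x_alt, w))
```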
1.069965
0.034537
0.023765
0.014261
0.006378
0.001111
0.000432
0.000146
0.00005
0.000004
0
0
0
0
Depth Map Coding With Distortion Estimation Of Rendered View New data formats that include both video and the corresponding depth maps, such as multiview plus depth (MVD), enable new video applications in which intermediate video views (virtual views) can be generated using the transmitted/stored video views (reference views) and the corresponding depth maps as inputs. We propose a depth map coding method based on a new distortion measurement by deriving relationships between distortions in coded depth map and rendered view. In our experiments we use a codec based on H.264/AVC tools, where the rate-distortion (RD) optimization for depth encoding makes use of the new distortion metric. Our experimental results show the efficiency of the proposed method, with coding gains of up to 1.6 dB in interpolated frame quality as compared to encoding the depth maps using the same coding tools but applying RD optimization based on conventional distortion metrics.
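How such a distortion estimate enters rate-distortion optimization can be illustrated with a toy Lagrangian mode decision for one depth block; every number below is hypothetical, and the per-mode estimate of rendered-view distortion is simply assumed to be available from some model.

```python
# Toy RD mode decision for one depth block: pick the mode minimizing J = D_rendered + lambda*R,
# where D_rendered is the estimated distortion of the rendered (synthesized) view caused by
# the coded depth error, not the depth-map distortion itself. All numbers are hypothetical.
candidates = [
    # (mode,        rate_bits, depth_MSE, estimated_rendered_view_MSE)
    ("SKIP",               8,      40.0,      9.5),
    ("INTER_16x16",       96,      12.0,      3.1),
    ("INTRA_4x4",        240,       4.0,      2.6),
]
lam = 0.035                                  # Lagrange multiplier (QP-dependent in practice)

best = min(candidates, key=lambda c: c[3] + lam * c[1])
for mode, rate, d_depth, d_view in candidates:
    print(f"{mode:12s} J = {d_view + lam * rate:7.2f}")
print("selected:", best[0])
```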
Shape-adaptive wavelet encoding of depth maps We present a novel depth-map codec aimed at free-viewpoint 3DTV. The proposed codec relies on a shape-adaptive wavelet transform and an explicit representation of the locations of major depth edges. Unlike classical wavelet transforms, the shape-adaptive transform generates small wavelet coefficients along depth edges, which greatly reduces the data entropy. The wavelet transform is implemented by shape-adaptive lifting, which enables fast computations and perfect reconstruction. We also develop a novel rate-constrained edge detection algorithm, which integrates the idea of significance bitplanes into the Canny edge detector. Along with a simple chain code, it provides an efficient way to extract and encode edges. Experimental results on synthetic and real data confirm the effectiveness of the proposed algorithm, with PSNR gains of 5 dB and more over the Middlebury dataset.
Depth Reconstruction Filter and Down/Up Sampling for Depth Coding in 3-D Video A depth image represents three-dimensional (3-D) scene information and is commonly used for depth image-based rendering (DIBR) to support 3-D video and free-viewpoint video applications. The virtual view is generally rendered by the DIBR technique and its quality depends highly on the quality of depth image. Thus, efficient depth coding is crucial to realize the 3-D video system. In this letter, w...
H.264-Based depth map sequence coding using motion information of corresponding texture video Three-dimensional television systems using depth-image-based rendering techniques are attractive in recent years. In those systems, a monoscopic two-dimensional texture video and its associated depth map sequence are transmitted. In order to utilize transmission bandwidth and storage space efficiently, the depth map sequence should be compressed as well as the texture video. Among previous works for depth map sequence coding, H.264 has shown the best performance; however, it has some disadvantages of requiring long encoding time and high encoder cost. In this paper, we propose a new coding structure for depth map coding with H.264 so as to reduce encoding time significantly while maintaining high compression efficiency. Instead of estimating motion vectors directly in the depth map, we generate candidate motion modes by exploiting motion information of the corresponding texture video. Experimental results show that the proposed algorithm reduces the complexity to 60% of the previous scheme that encodes two sequences separately and coding performance is also improved up to 1dB at low bit rates.
3d Video Coding Using The Synthesized View Distortion Change In 3D video, texture and supplementary depth data are coded to enable the interpolation of a required number of synthesized views for multi-view displays in the range of the original camera views. The coding of the depth data can be improved by analyzing the distortion of synthesized video views instead of the depth map distortion. Therefore, this paper introduces a new distortion metric for 3D video coding, which relates changes in the depth map directly to changes of the overall synthesized view distortion. It is shown how the new metric can be integrated into the rate-distortion optimization (RDO) process of an encoder, that is based on high-efficiency video coding technology. An evaluation of the modified encoder is conducted using different view synthesis algorithms and shows about 50% rate savings for the depth data or 0.6 dB PSNR gains for the synthesized view.
The emerging MVC standard for 3D video services Multiview video has gained a wide interest recently. The huge amount of data needed to be processed by multiview applications is a heavy burden for both transmission and decoding. The joint video team has recently devoted part of its effort to extend the widely deployed H.264/AVC standard to handle multiview video coding (MVC). The MVC extension of H.264/AVC includes a number of new techniques for improved coding efficiency, reduced decoding complexity, and new functionalities for multiview operations. MVC takes advantage of some of the interfaces and transport mechanisms introduced for the scalable video coding (SVC) extension of H.264/AVC, but the system level integration of MVC is conceptually more challenging as the decoder output may contain more than one view and can consist of any combination of the views with any temporal level. The generation of all the output views also requires careful consideration and control of the available decoder resources. In this paper, multiview applications and solutions to support generic multiview as well as 3D services are introduced. The proposed solutions, which have been adopted to the draft MVC specification, cover a wide range of requirements for 3D video related to interface, transport of the MVC bitstreams, and MVC decoder resource management. The features that have been introduced in MVC to support these solutions include marking of reference pictures, supporting for efficient view switching, structuring of the bitstream, signalling of view scalability supplemental enhancement information (SEI) and parallel decoding SEI.
Advanced residual prediction enhancement for 3D-HEVC Advanced residual prediction (ARP) is an efficient coding tool in 3D extension of HEVC (3D-HEVC) by exploiting the residual correlation between views. In the early version of ARP, when current prediction unit (PU) has a temporal reference picture, a reference block is firstly identified by a disparity vector and the residual predictor is then produced by aligning the motion information associated with current PU at the current view for motion compensation in the reference view. However, ARP is not allowed when current PU is predicted from an inter-view reference picture. Furthermore, the motion alignment during the ARP is done in a PU level, thus may not be good enough. In this paper, an enhanced ARP scheme is proposed to first extend ARP to the prediction of inter-view residual and then extending the motion alignment to a block level (which can be lower than PU). The proposed method has been partially adopted by 3D-HEVC. Experimental results demonstrate that the proposed scheme achieves 1.3 ~ 4.2% BD rate reduction for non-base views when compared to 3D-HEVC anchor with ARP enabled.
The Impact of Network and Protocol Heterogeneity on Real-Time Application QoS We evaluate the impact of network and protocol heterogeneity on real-time application performance. We focus on the supportive roles of TCP and UDP, also in the context of network stability and fairness. We reach several conclusions on the specific impact of wireless links, MPEG traffic friendliness, and TCP version efficiency. Beyond that, we also reach an unexpected result: UDP traffic is occasionally worse than TCP traffic when the right performance metric is used.
Fading correlation and its effect on the capacity of multielement antenna systems We investigate the effects of fading correlations in multielement antenna (MEA) communication systems. Pioneering studies showed that if the fades connecting pairs of transmit and receive antenna elements are independently, identically distributed, MEAs offer a large increase in capacity compared to single-antenna systems. An MEA system can be described in terms of spatial eigenmodes, which are single-input single-output subchannels. The channel capacity of an MEA is the sum of capacities of these subchannels. We show that the fading correlation affects the MEA capacity by modifying the distributions of the gains of these subchannels. The fading correlation depends on the physical parameters of MEA and the scatterer characteristics. In this paper, to characterize the fading correlation, we employ an abstract model, which is appropriate for modeling narrow-band Rayleigh fading in fixed wireless systems
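A Monte Carlo sketch of the effect described above: ergodic capacity of an MEA with equal per-antenna power under an exponential correlation model (a common stand-in for the paper's abstract correlation model), where the capacity is accumulated over the spatial eigenmodes. Array size, SNR and correlation values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
nt = nr = 4
snr = 10 ** (10 / 10)                                 # 10 dB transmit SNR

def exp_corr(n, rho):
    """Exponential correlation model: R[i, j] = rho**|i - j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :]).astype(float)

def ergodic_capacity(rho_t, rho_r, trials=5000):
    Lt = np.linalg.cholesky(exp_corr(nt, rho_t))      # R_t = Lt Lt^H
    Lr = np.linalg.cholesky(exp_corr(nr, rho_r))      # R_r = Lr Lr^H
    caps = np.empty(trials)
    for i in range(trials):
        G = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        H = Lr @ G @ Lt.conj().T                      # correlated Rayleigh channel
        ev = np.linalg.eigvalsh(H @ H.conj().T)       # gains of the spatial eigenmodes
        caps[i] = np.sum(np.log2(1.0 + (snr / nt) * np.maximum(ev, 0.0)))
    return caps.mean()

for rho in (0.0, 0.5, 0.9):
    print(f"rho = {rho}: ergodic capacity ~ {ergodic_capacity(rho, rho):.2f} bit/s/Hz")
```

Increasing rho shrinks the weaker eigenmode gains, which is exactly the capacity-reduction mechanism the abstract describes.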
On the power of adaption Optimal error bounds for adaptive and nonadaptive numerical methods are compared. Since the class of adaptive methods is much larger, a well-chosen adaptive method might seem to be better than any nonadaptive method. Nevertheless there are several results saying that under natural assumptions adaptive methods are not better than nonadaptive ones. There are also other results, however, saying that adaptive methods can be significantly better than nonadaptive ones as well as bounds on how much better they can be. It turns out that the answer to the “adaption problem” depends very much on what is known a priori about the problem in question; even a seemingly small change of the assumptions can lead to a different answer.
Statistical Timing Analysis: From Basic Principles to State of the Art Static-timing analysis (STA) has been one of the most pervasive and successful analysis engines in the design of digital circuits for the last 20 years. However, in recent years, the increased loss of predictability in semiconductor devices has raised concern over the ability of STA to effectively model statistical variations. This has resulted in extensive research in the so-called statistical STA (SSTA), which marks a significant departure from the traditional STA framework. In this paper, we review the recent developments in SSTA. We first discuss its underlying models and assumptions, then survey the major approaches, and close by discussing its remaining key challenges.
A gate delay model focusing on current fluctuation over wide-range of process and environmental variability This paper proposes a gate delay model that is suitable for timing analysis considering wide-range process and environmental variability. The proposed model focuses on current variation and its impact on delay is considered by replacing output load. The proposed model is applicable for large variability with current model constructed by DC analysis whose cost is small. The proposed model can also be used both in statistical static timing analysis and in conventional corner-based static timing analysis. Experimental results in a 90nm technology show that the gate delays of inverter, NAND and NOR are accurately estimated under gate length, threshold voltage, supply voltage and temperature fluctuation. We also verify that the proposed model can cope with slow input transition and RC output load. We demonstrate applicability to multiple-stage path delay and flip-flop delay, and show an application of sensitivity calculation for statistical timing analysis.
Proof of a conjecture of Metsch In this paper we prove a conjecture of Metsch about the maximum number of lines intersecting a point set in PG(2,q), presented at the conference "Combinatorics 2002". As a consequence, we give a short proof of the famous Jamison, Brouwer and Schrijver bound on the size of the smallest affine blocking set in AG(2,q).
Heart rate and blood pressure estimation from compressively sensed photoplethysmograph. In this paper we consider the problem of low power SpO2 sensors for acquiring Photoplethysmograph (PPG) signals. Most of the power in SpO2 sensors goes to lighting red and infra-red LEDs. We use compressive sensing to lower the amount of time LEDs are lit, thereby reducing the signal acquisition power. We observe power savings by a factor that is comparable to the sampling rate. At the receiver, we reconstruct the signal with sufficient integrity for a given task. Here we consider the tasks of heart rate (HR) and blood pressure (BP) estimation. For BP estimation we use ECG signals along with the reconstructed PPG waveform. We show that the reconstruction quality can be improved at the cost of increasing compressed sensing bandwidth and receiver complexity for a given task. We present HR and BP estimation results using the MIMIC database.
1.030063
0.028906
0.021681
0.014495
0.006525
0.002824
0.000216
0.000045
0.000003
0
0
0
0
0
Generalized theory of uncertainty (GTU)-principal concepts and ideas Uncertainty is an attribute of information. The path-breaking work of Shannon has led to a universal acceptance of the thesis that information is statistical in nature. Concomitantly, existing theories of uncertainty are based on probability theory. The generalized theory of uncertainty (GTU) departs from existing theories in essential ways. First, the thesis that information is statistical in nature is replaced by a much more general thesis that information is a generalized constraint, with statistical uncertainty being a special, albeit important case. Equating information to a generalized constraint is the fundamental thesis of GTU. Second, bivalence is abandoned throughout GTU, and the foundation of GTU is shifted from bivalent logic to fuzzy logic. As a consequence, in GTU everything is or is allowed to be a matter of degree or, equivalently, fuzzy. Concomitantly, all variables are, or are allowed to be granular, with a granule being a clump of values drawn together by a generalized constraint. And third, one of the principal objectives of GTU is achievement of NL-capability, that is, the capability to operate on information described in natural language. NL-capability has high importance because much of human knowledge, including knowledge about probabilities, is described in natural language. NL-capability is the focus of attention in the present paper. The centerpiece of GTU is the concept of a generalized constraint. The concept of a generalized constraint is motivated by the fact that most real-world constraints are elastic rather than rigid, and have a complex structure even when simple in appearance. The paper concludes with examples of computation with uncertain information described in natural language.
A fuzzy-based methodology for the analysis of diabetic neuropathy A new model for the fuzzy-based analysis of diabetic neuropathy is illustrated, whose pathogenesis so far is not well known. The underlying algebraic structure is a commutative l-monoid, whose support is a set of classifications based on the concept of linguistic variable introduced by Zadeh. The analysis is carried out by means of patient's anagraphical and clinical data, e.g. age, sex, duration of the disease, insulinic needs, severity of diabetes, possible presence of complications. The results obtained by us are identical with medical diagnoses. Moreover, analyzing suitable relevance factors one gets reasonable information about the etiology of the disease, our results agree with most credited clinical hypotheses.
Handling partial truth on type-2 similarity-based reasoning Representation and manipulation of the vague concepts of partially true knowledge in the development of machine intelligence is a wide and challenging field of study. How to extract approximate facts from vague and partially true statements has drawn significant attention from researchers in fuzzy information processing, and handling the uncertainty arising from such incomplete information is a necessity in its own right. This study theoretically examines a formal method for representing and manipulating partially true knowledge. The method is based on a similarity measure for type-2 fuzzy sets, which can directly handle rule uncertainties that type-1 fuzzy sets cannot. The proposed type-2 similarity-based reasoning method is theoretically defined and discussed herein, and the reasoning results are used to demonstrate its usefulness in comparison with ordinary (type-1) fuzzy sets.
Fuzzy time series prediction method based on fuzzy recurrent neural network One of the frequently used forecasting methods is the time series analysis. Time series analysis is based on the idea that past data can be used to predict the future data. Past data may contain imprecise and incomplete information coming from rapidly changing environment. Also the decisions made by the experts are subjective and rest on their individual competence. Therefore, it is more appropriate for the data to be presented by fuzzy numbers instead of crisp numbers. A weakness of traditional crisp time series forecasting methods is that they process only measurement based numerical information and cannot deal with the perception-based historical data represented by fuzzy numbers. Application of a fuzzy time series whose values are linguistic values, can overcome the mentioned weakness of traditional forecasting methods. In this paper we propose a fuzzy recurrent neural network (FRNN) based fuzzy time series forecasting method using genetic algorithm. The effectiveness of the proposed fuzzy time series forecasting method is tested on benchmark examples.
Fuzzy Regression Analysis by Support Vector Learning Approach Support vector machines (SVMs) have been very successful in pattern classification and function approximation problems for crisp data. In this paper, we incorporate the concept of fuzzy set theory into the support vector regression machine. The parameters to be estimated in the SVM regression, such as the components within the weight vector and the bias term, are set to be the fuzzy numbers. This integration preserves the benefits of SVM regression model and fuzzy regression model and has been attempted to treat fuzzy nonlinear regression analysis. In contrast to previous fuzzy nonlinear regression models, the proposed algorithm is a model-free method in the sense that we do not have to assume the underlying model function. By using different kernel functions, we can construct different learning machines with arbitrary types of nonlinear regression functions. Moreover, the proposed method can achieve automatic accuracy control in the fuzzy regression analysis task. The upper bound on number of errors is controlled by the user-predefined parameters. Experimental results are then presented that indicate the performance of the proposed approach.
Two-sample hypothesis tests of means of a fuzzy random variable In this paper we will consider some two-sample hypothesis tests for means concerning a fuzzy random variable in two populations. For this purpose, we will make use of a generalized metric for fuzzy numbers, and we will develop an exact study for the case of normal fuzzy random variables and an asymptotic study for the case of simple general fuzzy random variables.
Uncertainty bounds and their use in the design of interval type-2 fuzzy logic systems We derive inner- and outer-bound sets for the type-reduced set of an interval type-2 fuzzy logic system (FLS), based on a new mathematical interpretation of the Karnik-Mendel iterative procedure for computing the type-reduced set. The bound sets can not only provide estimates about the uncertainty contained in the output of an interval type-2 FLS, but can also be used to design an interval type-2 FLS. We demonstrate, by means of a simulation experiment, that the resulting system can operate without type-reduction and can achieve similar performance to one that uses type-reduction. Therefore, our new design method, based on the bound sets, can relieve the computation burden of an interval type-2 FLS during its operation, which makes an interval type-2 FLS useful for real-time applications.
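The bounds in the paper concern the type-reduced interval [y_l, y_r] produced by the Karnik-Mendel procedure. The sketch below obtains that interval by exhaustively searching the switch point, which gives the same result as the iterative procedure and is perfectly adequate for small rule bases; the rule consequents and interval firing strengths are hypothetical.

```python
import numpy as np

def km_type_reduce(y, f_lower, f_upper):
    """Type-reduced interval [y_l, y_r] of an interval type-2 FLS by switch-point search."""
    order = np.argsort(y)
    y, f_lower, f_upper = y[order], f_lower[order], f_upper[order]
    n = y.size
    y_l, y_r = np.inf, -np.inf
    for k in range(n + 1):
        # y_l: upper firing strengths for the k smallest consequents, lower for the rest
        w_l = np.concatenate([f_upper[:k], f_lower[k:]])
        # y_r: lower firing strengths for the k smallest consequents, upper for the rest
        w_r = np.concatenate([f_lower[:k], f_upper[k:]])
        if w_l.sum() > 0:
            y_l = min(y_l, (w_l @ y) / w_l.sum())
        if w_r.sum() > 0:
            y_r = max(y_r, (w_r @ y) / w_r.sum())
    return y_l, y_r

# hypothetical rule consequents and interval firing strengths
y = np.array([1.0, 3.0, 5.0, 7.0])
f_lower = np.array([0.2, 0.3, 0.1, 0.4])
f_upper = np.array([0.6, 0.7, 0.5, 0.8])
y_l, y_r = km_type_reduce(y, f_lower, f_upper)
print("type-reduced set:", (y_l, y_r), "defuzzified output:", 0.5 * (y_l + y_r))
```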
Fuzzy Grey GM(1,1) Model Under Fuzzy System The grey GM(1,1) forecasting model is a short-term forecasting method that has been successfully applied to management and engineering problems with as few as four data points. However, when a new system is constructed, the system is uncertain and variable, so the collected data are usually of fuzzy type and cannot be used directly in the grey GM(1,1) forecasting model. To cope with this problem, the fuzzy system derived from the collected data is incorporated through a fuzzy grey controlled variable to derive a fuzzy grey GM(1,1) model that forecasts extrapolative values under the fuzzy system. Finally, an example is described for illustration.
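For orientation, a crisp GM(1,1) forecasting sketch (the paper's contribution, the fuzzy extension with a fuzzy grey controlled variable, is not reproduced here): accumulate the series, fit the grey parameters by least squares, and invert the accumulation to forecast. The short data series is hypothetical.

```python
import numpy as np

def gm11_forecast(x0, n_ahead=3):
    """Crisp GM(1,1): fit dx1/dt + a*x1 = b on the accumulated series and extrapolate."""
    x0 = np.asarray(x0, float)
    n = len(x0)
    x1 = np.cumsum(x0)                               # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                    # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]      # grey development / control coefficients
    k = np.arange(n + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])   # inverse AGO

data = [52.3, 55.1, 58.4, 61.2, 64.9]                # hypothetical short series
pred = gm11_forecast(data)
print("fitted  :", np.round(pred[:len(data)], 2))
print("forecast:", np.round(pred[len(data):], 2))
```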
A hybrid recommender system for the selective dissemination of research resources in a Technology Transfer Office Recommender systems could be used to help users in their access processes to relevant information. Hybrid recommender systems represent a promising solution for multiple applications. In this paper we propose a hybrid fuzzy linguistic recommender system to help the Technology Transfer Office staff in the dissemination of research resources interesting for the users. The system recommends users both specialized and complementary research resources and additionally, it discovers potential collaboration possibilities in order to form multidisciplinary working groups. Thus, this system becomes an application that can be used to help the Technology Transfer Office staff to selectively disseminate the research knowledge and to increase its information discovering properties and personalization capacities in an academic environment.
Designing Type-1 and Type-2 Fuzzy Logic Controllers via Fuzzy Lyapunov Synthesis for nonsmooth mechanical systems In this paper, Fuzzy Lyapunov Synthesis is extended to the design of Type-1 and Type-2 Fuzzy Logic Controllers for nonsmooth mechanical systems. The output regulation problem for a servomechanism with nonlinear backlash is proposed as a case of study. The problem at hand is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of the nonminimum phase properties of the system. Performance issues of the Type-1 and Type-2 Fuzzy Logic Regulators that were designed are illustrated in experimental studies.
Control of a nonlinear continuous bioreactor with bifurcation by a type-2 fuzzy logic controller The object of this paper is the application of a type-2 fuzzy logic controller to a nonlinear system that presents bifurcations. A bifurcation can cause instability in the system or can create new working conditions which, although stable, are unacceptable. The only practical solution for efficient control is the use of high-performance controllers that take into account the uncertainties of the process. A type-2 fuzzy logic controller is tested by simulation on a nonlinear bioreactor system that is characterized by a transcritical bifurcation. Simulation results show the validity of the proposed controller in preventing the system from reaching bifurcation and unstable or undesirable stable conditions.
More on maximal intersecting families of finite sets New upper bounds for the size of minimal maximal k-cliques are obtained. We show (i) m(k) ⩽ k^5 for all k; (ii) m(k) ⩽ (3/4)k^2 + (3/2)k − 1 if k is a prime power.
A hierarchical fuzzy system with high input dimensions for forecasting foreign exchange rates Fuzzy systems suffer from the curse of dimensionality as the number of rules increases exponentially with the number of input dimensions. Although several methods have been proposed for eliminating the combinatorial rule explosion, none of them is fully satisfactory, as there are so far no known fuzzy systems that can handle a large number of inputs. In this paper, we describe a method for building fuzzy systems with high input dimensions based on the hierarchical architecture and the MacVicar-Whelan meta-rules. The proposed method is fully automated since a complete fuzzy system is generated from sample input-output data using an Evolutionary Algorithm. We tested the method by building fuzzy systems for two different applications, namely the forecasting of the Mexican and Argentine peso exchange rates. In both cases, our approach was successful as both fuzzy systems performed very well.
A robust periodic arnoldi shooting algorithm for efficient analysis of large-scale RF/MM ICs The verification of large radio-frequency/millimeter-wave (RF/MM) integrated circuits (ICs) has regained attention for high-performance designs beyond 90nm and 60GHz. The traditional time-domain verification by standard Krylov-subspace based shooting method might not be able to deal with newly increased verification complexity. The numerical algorithms with small computational cost yet superior convergence are highly desired to extend designers' creativity to probe those extremely challenging designs of RF/MM ICs. This paper presents a new shooting algorithm for periodic RF/MM-IC systems. Utilizing a periodic structure of the state matrix, a periodic Arnoldi shooting algorithm is developed to exploit the structured Krylov-subspace. This leads to an improved efficiency and convergence. Results from several industrial examples show that the proposed periodic Arnoldi shooting method, called PAS, is 1000 times faster than the direct-LU and the explicit GMRES methods. Moreover, when compared to the existing industrial standard, a matrix-free GMRES with non-structured Krylov-subspace, the new PAS method reduces iteration number and runtime by 3 times with the same accuracy.
1.014927
0.026051
0.015
0.012593
0.007296
0.002265
0.000335
0.000061
0.000019
0.000006
0
0
0
0
Linguistic Hedges in an Intuitionistic Fuzzy Setting Slowly but surely, intuitionistic fuzzy sets are giving away their secrets. By tracing them back to the underlying algebraic structure that they are defined on (a complete lattice), they can be embedded in the well-known class of L-fuzzy sets, whose formal treatment allows the definition and study of order-theoretic concepts such as triangular norms and conorms, negators and implicators, as well as the development of more complex operations such as direct and superdirect image, etc. In this paper we use the latter for the representation of linguistic hedges. We study their behaviour w.r.t. hesitation, and we examine how in this framework modification of an intuitionistic fuzzy set can be constructed from separate modification of its membership and non-membership function.
A new method for multiattribute decision making using interval-valued intuitionistic fuzzy values.
Some Properties of Fuzzy Sets of Type 2
Numerical and symbolic approaches to uncertainty management in AI Dealing with uncertainty is part of most intelligent behaviour and therefore techniques for managing uncertainty are a critical step in producing intelligent behaviour in machines. This paper discusses the concept of uncertainty and approaches that have been devised for its management in AI and expert systems. These are classified as quantitative (numeric) (Bayesian methods, Mycin's Certainty Factor model, the Dempster-Shafer theory of evidence and Fuzzy Set theory) or symbolic techniques (Nonmonotonic/Default Logics, Cohen's theory of Endorsements, and Fox's semantic approach). Each is discussed, illustrated, and assessed in relation to various criteria which illustrate the relative advantages and disadvantages of each technique. The discussion summarizes some of the criteria relevant to selecting the most appropriate uncertainty management technique for a particular application, emphasizes the differing functionality of the approaches, and outlines directions for future research. This includes combining qualitative and quantitative representations of information within the same application to facilitate different kinds of uncertainty management and functionality.
Interval-valued Fuzzy Sets, Possibility Theory and Imprecise Probability Interval-valued fuzzy sets were proposed thirty years ago as a natural extension of fuzzy sets. Many variants of these mathematical objects exist, under various names. One popular variant proposed by Atanassov starts by the specification of membership and non-membership functions. This paper focuses on interpretations of such extensions of fuzzy sets, whereby the two membership functions that define them can be justified in the scope of some information representation paradigm. It particularly focuses on a recent proposal by Neumaier, who proposes to use interval-valued fuzzy sets under the name "clouds", as an efficient method to represent a family of probabilities. We show the connection between clouds, interval-valued fuzzy sets and possibility theory.
Implication in intuitionistic fuzzy and interval-valued fuzzy set theory: construction, classification, application With the demand for knowledge-handling systems capable of dealing with and distinguishing between various facets of imprecision ever increasing, a clear and formal characterization of the mathematical models implementing such services is quintessential. In this paper, this task is undertaken simultaneously for the definition of implication within two settings: first, within intuitionistic fuzzy set theory and secondly, within interval-valued fuzzy set theory. By tracing these models back to the underlying lattice that they are defined on, on one hand we keep up with an important tradition of using algebraic structures for developing logical calculi (e.g. residuated lattices and MV algebras), and on the other hand we are able to expose in a clear manner the two models’ formal equivalence. This equivalence, all too often neglected in the literature, we exploit to construct operators extending the notions of classical and fuzzy implication on these structures; to initiate a meaningful classification framework for the resulting operators, based on logical and extra-logical criteria imposed on them; and finally, to re(de)fine the intuitive ideas giving rise to both approaches as models of imprecision and apply them in a practical context.
Level sets and the extension principle for interval valued fuzzy sets and its application to uncertainty measures We describe the representation of a fuzzy subset in terms of its crisp level sets. We then generalize these level sets to the case of interval valued fuzzy sets and provide for a representation of an interval valued fuzzy set in terms of crisp level sets. We note that in this representation while the level sets are crisp the memberships are still intervals. Once having this representation we turn to its role in the extension principle and particularly to the extension of measures of uncertainty of interval valued fuzzy sets. Two types of extension of uncertainty measures are investigated. The first, based on the level set representation, leads to extensions whose values for the measure of uncertainty are themselves fuzzy sets. The second, based on the use of integrals, results in extensions whose value for the uncertainty of an interval valued fuzzy sets is an interval.
Pattern recognition using type-II fuzzy sets Type II fuzzy sets are a generalization of the ordinary fuzzy sets in which the membership value for each member of the set is itself a fuzzy set in [0, 1]. We introduce a similarity measure for measuring the similarity, or compatibility, between two type-II fuzzy sets. With this new similarity measure we show that type-II fuzzy sets provide us with a natural language for formulating classification problems in pattern recognition.
New Type-2 Rule Ranking Indices for Designing Parsimonious Interval Type-2 Fuzzy Logic Systems In this paper, we propose two novel indices for type-2 fuzzy rule ranking to identify the most influential fuzzy rules in designing type-2 fuzzy logic systems, and name them the R-values and c-values of fuzzy rules, respectively. The R-values of type-2 fuzzy rules are obtained by applying QR decomposition, in which there is no need to estimate a rank as required in the SVD-QR with column pivoting algorithm. The c-values of type-2 fuzzy rules rank rules based on the effects of rule consequents. Experimental results on a signal recovery problem show that, by using the proposed indices, the most influential type-2 fuzzy rules can be effectively selected to construct parsimonious type-2 fuzzy models while system performance is kept at a satisfactory level.
The impact of fuzziness in social choice paradoxes Since Arrow's main theorem showed the impossibility of a rational procedure in group decision making, many variations in restrictions and objectives have been introduced in order to find out the limits of such a negative result. But so far all those results are often presented as a proof of the great expected difficulties we always shall find pursuing a joint group decision from different individual opinions, if we pursue rational and ethical procedures. In this paper we shall review some of the alternative approaches fuzzy sets theory allows, showing among other things that the main assumption of Arrow's model, not being made explicit in his famous theorem, was its underlying binary logic (a crisp definition is implied in preferences, consistency, liberty, equality, consensus and every concept or piece of information). Moreover, we shall also point out that focusing the problem on the choice issue can be also misleading, at least when dealing with human behaviour.
Singularity detection and processing with wavelets The mathematical characterization of singularities with Lipschitz exponents is reviewed. Theorems that estimate local Lipschitz exponents of functions from the evolution across scales of their wavelet transform are reviewed. It is then proven that the local maxima of the wavelet transform modulus detect the locations of irregular structures and provide numerical procedures to compute their Lipschitz exponents. The wavelet transform of singularities with fast oscillations has a particular behavior that is studied separately. The local frequency of such oscillations is measured from the wavelet transform modulus maxima. It has been shown numerically that one- and two-dimensional signals can be reconstructed, with a good approximation, from the local maxima of their wavelet transform modulus. As an application, an algorithm is developed that removes white noise from signals by analyzing the evolution of the wavelet transform maxima across scales. In two dimensions, the wavelet transform maxima indicate the location of edges in images.
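As a rough illustration of the estimation idea (not the reconstruction or denoising algorithms of the paper), the sketch below reads a Lipschitz exponent off the decay of the wavelet-transform modulus maxima across scales. It assumes the PyWavelets package (pywt); the offset between the measured slope and the exponent depends on the wavelet normalization convention, so the result is only indicative.

```python
import numpy as np
import pywt

# signal with a single cusp |t - 0.5|**0.6, i.e. Lipschitz exponent ~0.6 at t = 0.5
t = np.linspace(0.0, 1.0, 1024)
x = np.abs(t - 0.5) ** 0.6

scales = np.arange(2, 64)
coefs, _ = pywt.cwt(x, scales, 'gaus1')        # continuous wavelet transform
window = np.abs(t - 0.5) < 0.1                 # look only near the singularity
maxima = np.abs(coefs[:, window]).max(axis=1)  # modulus maximum per scale

# For an isolated singularity the maxima decay like scale**(alpha + c),
# where c depends on the normalization used by the transform.
slope, _ = np.polyfit(np.log(scales), np.log(maxima), 1)
print("log-log slope (tracks the Lipschitz exponent up to a constant):", slope)
```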
A precorrected-FFT method for electrostatic analysis of complicated 3-D structures In this paper we present a new algorithm for accelerating the potential calculation which occurs in the inner loop of iterative algorithms for solving electromagnetic boundary integral equations. Such integral equations arise, for example, in the extraction of coupling capacitances in three-dimensional (3-D) geometries. We present extensive experimental comparisons with the capacitance extraction code FASTCAP and demonstrate that, for a wide variety of geometries commonly encountered in integrated circuit packaging, on-chip interconnect and micro-electro-mechanical systems, the new “precorrected-FFT” algorithm is superior to the fast multipole algorithm used in FASTCAP in terms of execution time and memory use. At engineering accuracies, in terms of a speed-memory product, the new algorithm can be superior to the fast multipole based schemes by more than an order of magnitude
Statistical Analysis and Process Variation-Aware Routing and Skew Assignment for FPGAs With constant scaling of process technologies, chip design is becoming increasingly difficult due to process variations. The FPGA community has only recently started focusing on the effects of variations. In this work we present a statistical analysis to compare the effects of variations on designs mapped to FPGAs and ASICs. We also present CAD and architecture techniques to mitigate the impact of variations. First we present a variation-aware router that optimizes statistical criticality. We then propose a modification to the clock network to deliver programmable skews to different flip-flops. Finally, we combine the two techniques and the result is a 9x reduction in yield loss that translates to a 12% improvement in timing yield. When the desired timing yield is set to 99%, our combined statistical routing and skew assignment technique results in a delay improvement of about 10% over a purely deterministic approach.
L1 Projections with Box Constraints We study the L1 minimization problem with additional box constraints. We motivate the problem with two different views of optimality considerations. We look into imposing such constraints in projected gradient techniques and propose a worst case linear time algorithm to perform such projections. We demonstrate the merits and effectiveness of our algorithms on synthetic as well as real experiments.
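The abstract does not spell out its worst-case linear-time, box-constrained projection, so the sketch below shows only the classic sort-based Euclidean projection onto the plain L1 ball, to make concrete what an "L1 projection" computes inside a projected-gradient loop. It is not the paper's algorithm, and the test vector is illustrative.

```python
import numpy as np

def project_l1_ball(v, z=1.0):
    """Euclidean projection of v onto {w : ||w||_1 <= z} (sort-based, O(n log n))."""
    if np.abs(v).sum() <= z:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                 # sorted magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - z) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (css[rho] - z) / (rho + 1.0)         # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

w = project_l1_ball(np.array([0.8, -0.4, 0.3]), z=1.0)
print(w, np.abs(w).sum())
```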
1.201991
0.001892
0.001698
0.000879
0.000638
0.000319
0.000189
0.000072
0.000012
0.000002
0
0
0
0
RMIT3DV: Pre-announcement of a creative commons uncompressed HD 3D video database There has been much recent interest, both from industry and research communities, in 3D video technologies and processing techniques. However, with the standardisation of 3D video coding well underway and researchers studying 3D multimedia delivery and users' quality of multimedia experience in 3D video environments, there exist few publicly available databases of 3D video content. Further, there are even fewer sources of uncompressed 3D video content for flexible use in a number of research studies and applications. This paper thus presents a preliminary version of RMIT3DV: an uncompressed HD 3D video database currently composed of 31 video sequences that encompass a range of environments, lighting conditions, textures, motion, etc. The database was natively filmed on a professional HD 3D camera, and this paper describes the 3D film production workflow in addition to the database distribution and potential future applications of the content. The database is freely available online via the creative commons license, and researchers are encouraged to contribute 3D content to grow the resource for the (HD) 3D video research community.
Audiovisual Quality Components. The perceived quality of an audiovisual sequence is heavily influenced by both the quality of the audio and the quality of the video. The question then arises as to the relative importance of each factor and whether a regression model predicting audiovisual quality can be devised that is generally applicable.
A New EDI-based Deinterlacing Algorithm In this paper, we propose a new deinterlacing algorithm using edge direction field, edge parity, and motion expansion scheme. The algorithm consists of an EDI (edge dependent interpolation)-based intra-field deinterlacing and inter-field deinterlacing that uses block-based motion detection. Most of the EDI algorithms use pixel-by-pixel or block-by-block distance to estimate the edge direction, which results in many annoying artifacts. We propose the edge direction field, and estimate an interpolation direction using the field and SAD (sum of absolute differences) values. The edge direction field is a set of edge orientations and their gradient magnitudes. The proposed algorithm assumes that a local minimum around the gradient edge field is most probably the true edge direction. Our approach provides good visual results on various kinds of edges (horizontal, narrow and weak). We also propose a new temporal interpolation method based on block motion detection. The algorithm works reliably in scenes which have very fast moving objects and low SNR signals. Experimental results on various data sets show that the proposed algorithm works well for diverse kinds of sequences and reconstructs flicker-free details in the static region.
What Makes a Professional Video? A Computational Aesthetics Approach Understanding the characteristics of high-quality professional videos is important for video classification, video quality measurement, and video enhancement. A professional video is good not only for its interesting story but also for its high visual quality. In this paper, we study what makes a professional video from the perspective of aesthetics. We discuss how a professional video is created and correspondingly design a variety of features that distinguish professional videos from amateur ones. We study general aesthetics features that are applied to still photos and extend them to videos. We design a variety of features that are particularly relevant to videos. We examined the performance of these features in the problem of professional and amateur video classification. Our experiments show that with these features, 97.3% professional and amateur shot classification accuracy rate is achieved on our own data set and 91.2% professional video detection rate is achieved on a public professional video set. Our experiments also show that the features that are particularly for videos are shown most effective for this task.
An effective de-interlacing technique using two types of motion information In this paper, we propose a new de-interlacing algorithm using two types of motion information, i.e., the block-based and the pixel-based motion information. In the proposed scheme, block-wise motion is first calculated using the frame differences. Then, it is refined by the pixel-based motion information. The results of hardware implementation show that the proposed scheme using block-wise motion is more robust to noise than the conventional schemes using pixel-wise motion. Also, the proposed spatial interpolation provides a good visual performance in the case of moving diagonal edges.
QoS modeling for performance evaluation over evolved 3G networks The end-to-end Quality of Service (QoS) must be ensured along the whole network in order to achieve the desired service quality for the end user. In hybrid wired-wireless networks, the wireless subsystem is usually the bottleneck of the whole network. The aim of our work is to obtain a QoS model to evaluate the performance of data services over evolved 3G radio links. This paper focuses on the protocols and mechanisms at the radio interface, which is a variable-rate multiuser and multichannel subsystem. Proposed QoS models for such a scenario include selective retransmissions, adaptive modulation and coding, as well as a cross-layer mechanism that allows the link layer to adapt itself to a dynamically changing channel state. The proposed model is based on a bottom-up approach, which considers the cumulative performance degradation along protocol layers and predicts the performance of different services in specific environments. Numerical parameters at the physical layer resemble those proposed for 3GPP Long Term Evolution (LTE). By means of both analytical (wherever possible) and semi-analytical methods, streaming service quality indicators have been evaluated at different radio layers.
Overcoming the effects of correlation in packet delay measurements using inter-packet gaps The end-to-end delay of packets in data streams is characterized with emphasis on effects due to cross traffic, sending rate and packet size. Measurements indicate that modeling delay of a packet stream with high sending rates, as a fraction of bandwidth, is difficult due to the correlations among the delay values. The correlations among inter-packet gaps (IPG) at these rates, however, are negligible. At lower sending rates, the delay correlations are negligible and the distribution of delay can be used as a delay model. We exploit the relation between delay and IPG to show that end-to-end delay can be approximated by a Markov process. Thus, a complete solution is presented for modeling delay for all sending rates. Further, a correlation estimation model is provided for delay and IPG values.
Managing Quality of Experience for Wireless VoIP Using Noncooperative Games. We model the user's quality of experience (QoE) in a wireless voice over IP (VoIP) service as a function of the amount of effort the user has to put in to continue her conversation. We assume that users would quit or terminate an ongoing call if they have to put in more effort than they could tolerate. Not knowing the tolerance threshold of each individual user, the service provider faces a decision di...
View synthesis prediction for multiview video coding We propose a rate-distortion-optimized framework that incorporates view synthesis for improved prediction in multiview video coding. In the proposed scheme, auxiliary information, including depth data, is encoded and used at the decoder to generate the view synthesis prediction data. The proposed method employs optimal mode decision including view synthesis prediction, and sub-pixel reference matching to improve prediction accuracy of the view synthesis prediction. Novel variants of the skip and direct modes are also presented, which infer the depth and correction vector information from neighboring blocks in a synthesized reference picture to reduce the bits needed for the view synthesis prediction mode. We demonstrate two multiview video coding scenarios in which view synthesis prediction is employed. In the first scenario, the goal is to improve the coding efficiency of multiview video where block-based depths and correction vectors are encoded by CABAC in a lossless manner on a macroblock basis. A variable block-size depth/motion search algorithm is described. Experimental results demonstrate that view synthesis prediction does provide some coding gains when combined with disparity-compensated prediction. In the second scenario, the goal is to use view synthesis prediction for reducing rate overhead incurred by transmitting depth maps for improved support of 3DTV and free-viewpoint video applications. It is assumed that the complete depth map for each view is encoded separately from the multiview video and used at the receiver to generate intermediate views. We utilize this information for view synthesis prediction to improve overall coding efficiency. Experimental results show that the rate overhead incurred by coding depth maps of varying quality could be offset by utilizing the proposed view synthesis prediction techniques to reduce the bitrate required for coding multiview video.
Statistical timing based on incomplete probabilistic descriptions of parameter uncertainty Existing approaches to timing analysis under uncertainty are based on restrictive assumptions. Statistical STA techniques assume that the full probabilistic distribution of parameter uncertainty is available; in reality, the complete probabilistic description often cannot be obtained. In this paper, a new paradigm for parameter uncertainty description is proposed as a way to consistently and rigorously handle partially available descriptions of parameter uncertainty. The paradigm is based on a theory of interval probabilistic models that permit handling uncertainty that is described in a distribution-free mode - just via the range, the mean, and the variance. This permits effectively handling multiple real-life challenges, including imprecise and limited information about the distributions of process parameters, parameters coming from different populations, and the sources of uncertainty that are too difficult to handle via full probabilistic measures (e.g. on-chip supply voltage variation). Specifically, analytical techniques for bounding the distributions of probabilistic interval variables are proposed. Besides, a provably correct strategy for fast Monte Carlo simulation based on probabilistic interval variables is introduced. A path-based timing algorithm implementing the novel modeling paradigm, as well as handling the traditional variability descriptions, has been developed. The results indicate the proposed algorithm can improve the upper bound of the 90th-percentile circuit delay, on average, by 5.3% across the ISCAS'85 benchmark circuits, compared to the worst-case timing estimates that use only the interval information of the partially specified parameters.
Incremental refinement of image salient-point detection. Low-level image analysis systems typically detect "points of interest", i.e., areas of natural images that contain corners or edges. Most of the robust and computationally efficient detectors proposed for this task use the autocorrelation matrix of the localized image derivatives. Although the performance of such detectors and their suitability for particular applications has been studied in relevant literature, their behavior under limited input source (image) precision or limited computational or energy resources is largely unknown. All existing frameworks assume that the input image is readily available for processing and that sufficient computational and energy resources exist for the completion of the result. Nevertheless, recent advances in incremental image sensors or compressed sensing, as well as the demand for low-complexity scene analysis in sensor networks now challenge these assumptions. In this paper, we investigate an approach to compute salient points of images incrementally, i.e., the salient point detector can operate with a coarsely quantized input image representation and successively refine the result (the derived salient points) as the image precision is successively refined by the sensor. This has the advantage that the image sensing and the salient point detection can be terminated at any input image precision (e.g., bound set by the sensory equipment or by computation, or by the salient point accuracy required by the application) and the obtained salient points under this precision are readily available. We focus on the popular detector proposed by Harris and Stephens and demonstrate how such an approach can operate when the image samples are refined in a bitwise manner, i.e., the image bitplanes are received one-by-one from the image sensor. We estimate the required energy for image sensing as well as the computation required for the salient point detection based on stochastic source modeling. The computation and energy required by the proposed incremental refinement approach is compared against the conventional salient-point detector realization that operates directly on each source precision and cannot refine the result. Our experiments demonstrate the feasibility of incremental approaches for salient point detection in various classes of natural images. In addition, a first comparison between the results obtained by the intermediate detectors is presented and a novel application for adaptive low-energy image sensing based on points of saliency is presented.
RSPOP: rough set-based pseudo outer-product fuzzy rule identification algorithm. System modeling with neuro-fuzzy systems involves two contradictory requirements: interpretability verses accuracy. The pseudo outer-product (POP) rule identification algorithm used in the family of pseudo outer-product-based fuzzy neural networks (POPFNN) suffered from an exponential increase in the number of identified fuzzy rules and computational complexity arising from high-dimensional data. This decreases the interpretability of the POPFNN in linguistic fuzzy modeling. This article proposes a novel rough set-based pseudo outer-product (RSPOP) algorithm that integrates the sound concept of knowledge reduction from rough set theory with the POP algorithm. The proposed algorithm not only performs feature selection through the reduction of attributes but also extends the reduction to rules without redundant attributes. As many possible reducts exist in a given rule set, an objective measure is developed for POPFNN to correctly identify the reducts that improve the inferred consequence. Experimental results are presented using published data sets and real-world application involving highway traffic flow prediction to evaluate the effectiveness of using the proposed algorithm to identify fuzzy rules in the POPFNN using compositional rule of inference and singleton fuzzifier (POPFNN-CRI(S)) architecture. Results showed that the proposed rough set-based pseudo outer-product algorithm reduces computational complexity, improves the interpretability of neuro-fuzzy systems by identifying significantly fewer fuzzy rules, and improves the accuracy of the POPFNN.
Reweighted minimization model for MR image reconstruction with split Bregman method. Magnetic resonance (MR) image reconstruction is to get a practicable gray-scale image from few frequency domain coefficients. In this paper, different reweighted minimization models for MR image reconstruction are studied, and a novel model named reweighted wavelet+TV minimization model is proposed. By using split Bregman method, an iteration minimization algorithm for solving this new model is obtained, and its convergence is established. Numerical simulations show that the proposed model and its algorithm are feasible and highly efficient.
A game-theoretic multipath routing for video-streaming services over Mobile Ad Hoc Networks The number of portable devices capable of maintaining wireless communications has increased considerably in the last decade. Such mobile nodes may form a spontaneous self-configured network connected by wireless links to constitute a Mobile Ad Hoc Network (MANET). As the number of mobile end users grows the demand of multimedia services, such as video-streaming, in such networks is envisioned to increase as well. One of the most appropriate video coding technique for MANETs is layered MPEG-2 VBR, which used with a proper multipath routing scheme improves the distribution of video streams. In this article we introduce a proposal called g-MMDSR (game theoretic-Multipath Multimedia Dynamic Source Routing), a cross-layer multipath routing protocol which includes a game theoretic approach to achieve a dynamic selection of the forwarding paths. The proposal seeks to improve the own benefits of the users whilst using the common scarce resources efficiently. It takes into account the importance of the video frames in the decoding process, which outperforms the quality of the received video. Our scheme has proved to enhance the performance of the framework and the experience of the end users. Simulations have been carried out to show the benefits of our proposal under different situations where high interfering traffic and mobility of the nodes are present.
1.100788
0.101576
0.101576
0.101576
0.050793
0.000821
0.000164
0.000047
0.000012
0
0
0
0
0
Error Estimation In Clenshaw-Curtis Quadrature Formula
Convergence Properties Of Gaussian Quadrature-Formulas
Series Methods For Integration
Hybrid Gauss-Trapezoidal Quadrature Rules A new class of quadrature rules for the integration of both regular and singular functions is constructed and analyzed. For each rule the quadrature weights are positive and the class includes rules of arbitrarily high-order convergence. The quadratures result from alterations to the trapezoidal rule, in which a small number of nodes and weights at the ends of the integration interval are replaced. The new nodes and weights are determined so that the asymptotic expansion of the resulting rule, provided by a generalization of the Euler--Maclaurin summation formula, has a prescribed number of vanishing terms. The superior performance of the rules is demonstrated with numerical examples and application to several problems is discussed.
Implementing Clenshaw-Curtis quadrature, II computing the cosine transformation In a companion paper to this, “I Methodology and Experiences,” the automatic Clenshaw-Curtis quadrature scheme was described and how each quadrature formula used in the scheme requires a cosine transformation of the integrand values was shown. The high cost of these cosine transformations has been a serious drawback in using Clenshaw-Curtis quadrature. Two other problems related to the cosine transformation have also been troublesome. First, the conventional computation of the cosine transformation by recurrence relation is numerically unstable, particularly at the low frequencies which have the largest effect upon the integral. Second, in case the automatic scheme should require refinement of the sampling, storage is required to save the integrand values after the cosine transformation is computed.This second part of the paper shows how the cosine transformation can be computed by a modification of the fast Fourier transform and all three problems overcome. The modification is also applicable in other circumstances requiring cosine or sine transformations, such as polynomial interpolation through the Chebyshev points.
Filon-Clenshaw-Curtis rules for a class of highly-oscillatory integrals with logarithmic singularities. In this work we propose and analyse a numerical method for computing a family of highly oscillatory integrals with logarithmic singularities. For these quadrature rules we derive error estimates in terms of N, the number of nodes, k the rate of oscillations and a Sobolev-like regularity of the function. We prove that the method is not only robust but the error even decreases, for fixed N, as k increases. Practical issues about the implementation of the rule are also covered in this paper by: (a) writing down ready-to-implement algorithms; (b) analysing the numerical stability of the computations and (c) estimating the overall computational cost. We finish by showing some numerical experiments which illustrate the theoretical results presented in this paper.
A polynomial interpolation process at quasi-Chebyshev nodes with the FFT. The interpolation polynomial p_n at the Chebyshev nodes cos(pi j/n) (0 <= j <= n) for smooth functions is known to converge fast as n -> infinity. The sequence {p_n} is constructed recursively and efficiently in O(n log^2 n) flops for each p_n by using the FFT, where n is increased geometrically, n = 2^i (i = 2, 3, ...), until an estimated error is within a given tolerance epsilon. The sequence {2^i}, however, grows too fast to obtain a p_n of proper n, and a much higher accuracy than epsilon is often achieved. To cope with this problem we present quasi-Chebyshev nodes (QCN), at which {p_n} can be constructed efficiently in the same order of flops as at the Chebyshev nodes by using the FFT, but with n increasing at a slower rate. We search for the optimum set in the QCN that minimizes the maximum error of {p_n}. Numerical examples illustrate the error behavior of {p_n} with the optimum node set obtained.
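A minimal sketch of the baseline scheme the abstract starts from: interpolate at the Chebyshev nodes cos(pi j/n) and double n until two successive interpolants agree to a tolerance. It uses NumPy's Chebyshev least-squares fit instead of an FFT-based construction, and the quasi-Chebyshev node optimization itself is not reproduced; the test function and tolerance are illustrative.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def adaptive_cheb_interp(f, tol=1e-10, n0=4, n_max=4096):
    """Interpolate f on [-1, 1] at Chebyshev nodes cos(pi*j/n), doubling n
    until two successive interpolants agree to within tol on a test grid."""
    t = np.linspace(-1.0, 1.0, 1001)          # error-estimation grid
    n = n0
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c_prev = C.chebfit(x, f(x), n)            # degree-n interpolant
    err = np.inf
    while n < n_max:
        n *= 2
        x = np.cos(np.pi * np.arange(n + 1) / n)
        c = C.chebfit(x, f(x), n)
        err = np.max(np.abs(C.chebval(t, c) - C.chebval(t, c_prev)))
        if err < tol:
            return c, n, err
        c_prev = c
    return c_prev, n, err

c, n, err = adaptive_cheb_interp(lambda x: np.exp(x) * np.sin(5 * x))
print(n, err)
```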
Remark on algorithm 659: Implementing Sobol's quasirandom sequence generator An algorithm to generate Sobol' sequences to approximate integrals in up to 40 dimensions has been previously given by Bratley and Fox in Algorithm 659. Here, we provide more primitive polynomials and "direction numbers" so as to allow the generation of Sobol' sequences to approximate integrals in up to 1111 dimensions. The direction numbers given generate Sobol' sequences that satisfy Sobol's so-called Property A.
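As a usage illustration (assuming SciPy's scipy.stats.qmc module, which ships its own direction numbers, rather than the ACM Algorithm 659 code discussed above), the snippet below draws Sobol' points and uses them for a simple quasi-Monte Carlo integral whose exact value is 1.

```python
import numpy as np
from scipy.stats import qmc

d = 8                                        # dimension of the integrand
sampler = qmc.Sobol(d=d, scramble=False)     # unscrambled Sobol' points
pts = sampler.random_base2(m=12)             # 2**12 points (powers of two keep balance)

# quasi-Monte Carlo estimate of the integral of prod_i (pi/2)*sin(pi*x_i)
# over [0,1]^d, which equals 1 for every dimension d
vals = np.prod(np.pi / 2 * np.sin(np.pi * pts), axis=1)
print(vals.mean())
```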
FASTHENRY: a multipole-accelerated 3-D inductance extraction program In [1], it was shown that an equation formulation based on mesh analysis can be combined with a GMRES-style iterative matrix solution technique to make a reasonably fast 3-D frequency-dependent inductance and resistance extraction algorithm. Unfortunately, both the computation time and memory required for that approach grow faster than n^2, where n is the number of volume-filaments. In this paper, we show that it is possible to use multipole acceleration to reduce both required memory and computation time to nearly order n. Results from examples are given to demonstrate that the multipole acceleration can reduce required computation time and memory by more than an order of magnitude for realistic packaging problems.
A dynamically bi-orthogonal method for time-dependent stochastic partial differential equations I: Derivation and algorithms We propose a dynamically bi-orthogonal method (DyBO) to solve time dependent stochastic partial differential equations (SPDEs). The objective of our method is to exploit some intrinsic sparse structure in the stochastic solution by constructing the sparsest representation of the stochastic solution via a bi-orthogonal basis. It is well-known that the Karhunen-Loeve expansion (KLE) minimizes the total mean squared error and gives the sparsest representation of stochastic solutions. However, the computation of the KL expansion could be quite expensive since we need to form a covariance matrix and solve a large-scale eigenvalue problem. The main contribution of this paper is that we derive an equivalent system that governs the evolution of the spatial and stochastic basis in the KL expansion. Unlike other reduced model methods, our method constructs the reduced basis on-the-fly without the need to form the covariance matrix or to compute its eigendecomposition. In the first part of our paper, we introduce the derivation of the dynamically bi-orthogonal formulation for SPDEs, discuss several theoretical issues, such as the dynamic bi-orthogonality preservation and some preliminary error analysis of the DyBO method. We also give some numerical implementation details of the DyBO methods, including the representation of stochastic basis and techniques to deal with eigenvalue crossing. In the second part of our paper [11], we will present an adaptive strategy to dynamically remove or add modes, perform a detailed complexity analysis, and discuss various generalizations of this approach. An extensive range of numerical experiments will be provided in both parts to demonstrate the effectiveness of the DyBO method.
Matrix Completion from a Few Entries Let M be an nα × n matrix of rank r ≪ n, and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm that reconstructs M from |E| = O(r n) observed entries with relative root mean square error
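The reconstruction algorithm of the abstract (trimming plus spectral initialization plus manifold optimization) is not reproduced here; the sketch below is a much simpler SVD soft-thresholding ("soft-impute" style) iteration, shown only to make the matrix completion setting concrete. The rank, sampling rate, and threshold are illustrative.

```python
import numpy as np

def soft_impute(M_obs, mask, tau=1.0, n_iter=200):
    """Fill the unobserved entries of M by iterated SVD soft-thresholding.
    M_obs : matrix with arbitrary values where mask is False
    mask  : boolean array, True on observed entries
    """
    Z = np.zeros_like(M_obs)
    for _ in range(n_iter):
        filled = np.where(mask, M_obs, Z)          # keep observed entries, estimate the rest
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        Z = (U * np.maximum(s - tau, 0.0)) @ Vt    # singular value shrinkage
    return Z

# rank-2 ground truth with roughly half of the entries observed
rng = np.random.default_rng(0)
M = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 40))
mask = rng.random(M.shape) < 0.5
M_hat = soft_impute(M, mask)
print(np.sqrt(np.mean((M_hat - M) ** 2)) / np.sqrt(np.mean(M ** 2)))  # relative RMSE
```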
SAINTETIQ: a fuzzy set-based approach to database summarization In this paper, a new approach to database summarization is introduced through our model named SAINTETIQ. Based on a hierarchical conceptual clustering algorithm, SAINTETIQ incrementally builds a summary hierarchy from database records. Furthermore, the fuzzy set-based representation of data allows to handle vague, uncertain or imprecise information, as well as to improve accuracy and robustness of the construction process of summaries. Finally, background knowledge provides a user-defined vocabulary to synthesize and to make highly intelligible the summary descriptions.
Uniform fuzzy relations and fuzzy functions In this paper we introduce and study the concepts of a uniform fuzzy relation and a (partially) uniform F-function. We give various characterizations and constructions of uniform fuzzy relations and uniform F-functions, we show that the usual composition of fuzzy relations is not convenient for F-functions, so we introduce another kind of composition, and we establish a mutual correspondence between uniform F-functions and fuzzy equivalences. We also give some applications of uniform fuzzy relations in approximate reasoning, especially in fuzzy control, and we show that uniform fuzzy relations are closely related to the defuzzification problem.
Object recognition robust to imperfect depth data In this paper, we present an adaptive data fusion model that robustly integrates depth and image only perception. Combining dense depth measurements with images can greatly enhance the performance of many computer vision algorithms, yet degraded depth measurements (e.g., missing data) can also cause dramatic performance losses to levels below image-only algorithms. We propose a generic fusion model based on maximum likelihood estimates of fused image-depth functions for both available and missing depth data. We demonstrate its application to each step of a state-of-the-art image-only object instance recognition pipeline. The resulting approach shows increased recognition performance over alternative data fusion approaches.
1.075233
0.016993
0.016777
0.012561
0.004736
0.000379
0.00008
0.000011
0.000004
0.000001
0
0
0
0
Enhanced interval type-2 fuzzy c-means algorithm with improved initial center Uncertainties are common in applications such as pattern recognition and image processing, where the FCM algorithm is widely employed. However, FCM does not handle such uncertainties well. Interval type-2 fuzzy theory has been incorporated into FCM to improve its ability to handle uncertainties, but the complexity of the algorithm increases accordingly. In this paper an enhanced interval type-2 FCM algorithm is proposed to reduce these shortcomings. The initialization of the cluster centers and the type-reduction process are optimized, which greatly reduces the computation time of interval type-2 FCM and accelerates the convergence of the algorithm. Extensive simulations on random data clustering and image segmentation show the validity of the proposed algorithm.
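For context, here is the plain type-1 fuzzy c-means update that the enhanced interval type-2 algorithm above builds on; the interval type-2 variant with two fuzzifiers, improved center initialization, and optimized type-reduction is not shown. Data and parameters are illustrative.

```python
import numpy as np

def fcm(X, c=3, m=2.0, n_iter=100, seed=0):
    """Plain type-1 fuzzy c-means (an interval type-2 variant would carry
    two fuzzifiers and an extra type-reduction step)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                                # fuzzy partition matrix
    for _ in range(n_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)  # cluster centers
        D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        U = 1.0 / (D ** (2.0 / (m - 1.0)))            # memberships ~ d^(-2/(m-1))
        U /= U.sum(axis=0)                            # normalize per data point
    return U, V

rng = np.random.default_rng(1)
X = np.vstack([rng.standard_normal((50, 2)) + off for off in ([0, 0], [4, 4], [0, 5])])
U, V = fcm(X, c=3)
print(V)
```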
Type-2 Fuzzy Clustering and a Type-2 Fuzzy Inference Neural Network for the Prediction of Short-term Interest Rates. The following paper discusses the use of a hybrid model for the prediction of short-term US interest rates. The model consists of a differential evolution-based fuzzy type-2 clustering with a fuzzy type-2 inference neural network, after input preprocessing with multiple regression analysis. The model was applied to forecast the US 3- Month T-bill rates. Promising model performance was obtained as measured using root mean square error.
Dual-centers type-2 fuzzy clustering framework and its verification and validation indices The clustering model considers dual centers rather than single centers. The dual-centers type-2 clustering model and algorithm are proposed. The relations among parameters of the proposed model are explained. The degrees of belonging to the clusters are defined by type-2 fuzzy numbers. The verification and validation indices are developed for model evaluation. In this paper we present a clustering framework for type-2 fuzzy clustering which covers all steps of the clustering process including the clustering algorithm, parameter estimation, and validation and verification indices. The proposed clustering algorithm is developed based on the dual-centers type-2 fuzzy clustering model. In this model the centers of clusters are defined by a pair of objects rather than a single object. The membership values of the objects to the clusters are defined by type-2 fuzzy numbers, and there are no type-reduction or defuzzification steps in the proposed clustering algorithm. In addition, the relations among the cluster bandwidth, the distance between the dual centers, and the fuzzifier parameter are indicated and analyzed to facilitate the parameter estimation step. To determine the optimum number of clusters, we develop a new validation index which is compatible with the proposed model structure. A new compatible verification index is also defined to compare the results of the proposed model with the existing type-1 fuzzy clustering model. Finally, the results of computational experiments are presented to show the efficiency of the proposed approach.
A type-2 fuzzy c-regression clustering algorithm for Takagi-Sugeno system identification and its application in the steel industry This paper proposes a new type-2 fuzzy c-regression clustering algorithm for the structure identification phase of Takagi-Sugeno (T-S) systems. We represent uncertainties with the fuzzifier parameter m. In order to identify the parameters of interval type-2 fuzzy sets, two fuzzifiers m1 and m2 are used. Then, by utilizing these two fuzzifiers in a fuzzy c-regression clustering algorithm, the interval type-2 fuzzy membership functions are generated. The proposed model in this paper is an extended version of a type-1 FCRM algorithm [25], which is extended to an interval type-2 fuzzy model. The Gaussian Mixture model is used to create the partition matrix of the fuzzy c-regression clustering algorithm. Finally, in order to validate the proposed model, several numerical examples are presented. The model is tested on a real data set from a steel company in Canada. Our computational results show that our model is more effective for robustness and error reduction than type-1 NFCRM and multiple regression.
Toward general type-2 fuzzy logic systems based on zSlices Higher order fuzzy logic systems (FLSs), such as interval type-2 FLSs, have been shown to be very well suited to deal with the high levels of uncertainties present in the majority of real-world applications. General type-2 FLSs are expected to further extend this capability. However, the immense computational complexities associated with general type-2 FLSs have, until recently, prevented their application to real-world control problems. This paper aims to address this problem by the introduction of a complete representation framework, which is referred to as zSlices-based general type-2 fuzzy systems. The proposed approach will lead to a significant reduction in both the complexity and the computational requirements for general type-2 FLSs, while it offers the capability to represent complex general type-2 fuzzy sets. As a proof-of-concept application, we have implemented a zSlices-based general type-2 FLS for a two-wheeled mobile robot, which operates in a real-world outdoor environment. We have evaluated the computational performance of the zSlices-based general type-2 FLS, which is suitable for multiprocessor execution. Finally, we have compared the performance of the zSlices-based general type-2 FLS against type-1 and interval type-2 FLSs, and a series of results is presented which is related to the different levels of uncertainty handled by the different types of FLSs.
A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods. Our taxonomy is designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can easily be extended to include new algorithms. We have also produced several new multi-frame stereo data sets with ground truth and are making both the code and data sets available on the Web. Finally, we include a comparative evaluation of a large set of today's best-performing stereo algorithms.
Dummynet: a simple approach to the evaluation of network protocols Network protocols are usually tested in operational networks or in simulated environments. With the former approach it is not easy to set and control the various operational parameters such as bandwidth, delays, queue sizes. Simulators are easier to control, but they are often only an approximate model of the desired setting, especially for what regards the various traffic generators (both producers and consumers) and their interaction with the protocol itself. In this paper we show how a simple, yet flexible and accurate network simulator - dummynet - can be built with minimal modifications to an existing protocol stack, allowing experiments to be run on a standalone system. dummynet works by intercepting communications of the protocol layer under test and simulating the effects of finite queues, bandwidth limitations and communication delays. It runs in a fully operational system, hence allowing the use of real traffic generators and protocol implementations, while solving the problem of simulating unusual environments. With our tool, doing experiments with network protocols is as simple as running the desired set of applications on a workstation. A FreeBSD implementation of dummynet, targeted to TCP, is available from the author. This implementation is highly portable and compatible with other BSD-derived systems, and takes less than 300 lines of kernel code.
Compressed Sensing for Networked Data Imagine a system with thousands or millions of independent components, all capable of generating and communicating data. A man-made system of this complexity was unthinkable a few decades ago, but today it is a reality - computers, cell phones, sensors, and actuators are all linked to the Internet, and every wired or wireless device is capable of generating and disseminating prodigious volumes of data. This system is not a single centrally-controlled device, rather it is an ever-growing patchwork of autonomous systems and components, perhaps more organic in nature than any human artifact that has come before. And we struggle to manage and understand this creation, which in many ways has taken on a life of its own. Indeed, several international conferences are dedicated to the scientific study of emergent Internet phenomena. This article considers a particularly salient aspect of this struggle that revolves around large- scale distributed sources of data and their storage, transmission, and retrieval. The task of transmitting information from one point to another is a common and well-understood exercise. But the problem of efficiently transmitting or sharing information from and among a vast number of distributed nodes remains a great challenge, primarily because we do not yet have well developed theories and tools for distributed signal processing, communications, and information theory in large-scale networked systems. The problem is illustrated by a simple example. Consider a network of n nodes, each having a piece of information or data xj, j = 1,...,n. These data could be files to be shared, or simply scalar values corresponding to node attributes or sensor measurements. Let us assume that each xj is a scalar quantity for the sake of this illustration. Collectively these data x = (x1,...,xn)T, arranged in a vector, are called networked data to emphasize both the distributed nature of the data and the fact that they may be shared over the underlying communications infrastructure of the network. The networked data vector may be very large; n may be a thousand or a million or more.
Efficient approximation of random fields for numerical applications This article is dedicated to the rapid computation of separable expansions for the approximation of random fields. We consider approaches based on techniques from the approximation of non-local operators on the one hand and based on the pivoted Cholesky decomposition on the other hand. Especially, we provide an a posteriori error estimate for the pivoted Cholesky decomposition in terms of the trace. Numerical examples are provided to validate and quantify the presented methods.
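A minimal sketch of the pivoted Cholesky low-rank approximation mentioned above, with the trace of the remaining diagonal used as the a posteriori error indicator; the squared-exponential covariance, grid, and tolerance are illustrative choices, not taken from the paper.

```python
import numpy as np

def pivoted_cholesky(K, tol=1e-8, max_rank=None):
    """Low-rank factor L with K ~= L @ L.T; stops when the trace error
    (sum of the remaining diagonal) drops below tol."""
    n = K.shape[0]
    max_rank = max_rank or n
    d = np.diag(K).astype(float).copy()
    L = np.zeros((n, 0))
    while d.sum() > tol and L.shape[1] < max_rank:
        i = int(np.argmax(d))                   # pivot: largest remaining variance
        col = K[:, i] - L @ L[i, :]
        col = col / np.sqrt(d[i])
        L = np.column_stack([L, col])
        d = d - col ** 2
        d[d < 0] = 0.0                          # guard against round-off
    return L, d.sum()

# squared-exponential covariance on a 1-D grid (illustrative random field)
x = np.linspace(0.0, 1.0, 200)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.1 ** 2))
L, trace_err = pivoted_cholesky(K, tol=1e-6)
print(L.shape[1], trace_err)
```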
Stereo image quality: effects of mixed spatio-temporal resolution We explored the response of the human visual system to mixed-resolution stereo video-sequences, in which one eye view was spatially or temporally low-pass filtered. It was expected that the perceived quality, depth, and sharpness would be relatively unaffected by low-pass filtering, compared to the case where both eyes viewed a filtered image. Subjects viewed two 10-second stereo video-sequences, in which the right-eye frames were filtered vertically (V) and horizontally (H) at 1/2 H, 1/2 V, 1/4 H, 1/4 V, 1/2 H 1/2 V, 1/2 H 1/4 V, 1/4 H 1/2 V, and 1/4 H 1/4 V resolution. Temporal filtering was implemented for a subset of these conditions at 1/2 temporal resolution, or with drop-and-repeat frames. Subjects rated the overall quality, sharpness, and overall sensation of depth. It was found that spatial filtering produced acceptable results: the overall sensation of depth was unaffected by low-pass filtering, while ratings of quality and of sharpness were strongly weighted towards the eye with the greater spatial resolution. By comparison, temporal filtering produced unacceptable results: field averaging and drop-and-repeat frame conditions yielded images with poor quality and sharpness, even though perceived depth was relatively unaffected. We conclude that spatial filtering of one channel of a stereo video-sequence may be an effective means of reducing the transmission bandwidth
Asymptotic Analysis of MAP Estimation via the Replica Method and Applications to Compressed Sensing The replica method is a nonrigorous but well-known technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method, under the assumption of replica symmetry, to study estimators that are maximum a posteriori (MAP) under a postulated prior distribution. It is shown that with random linear measurements and Gaussian noise, the replica-symmetric prediction of the asymptotic behavior of the postulated MAP estimate of an $n$-dimensional vector “decouples” as $n$ scalar postulated MAP estimators. The result is based on applying a hardening argument to the replica analysis of postulated posterior mean estimators of Tanaka and of Guo and Verdú. The replica-symmetric postulated MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, least absolute shrinkage and selection operator (LASSO), linear estimation with thresholding, and zero norm-regularized estimation. In the case of LASSO estimation, the scalar estimator reduces to a soft-thresholding operator, and for zero norm-regularized estimation, it reduces to a hard threshold. Among other benefits, the replica method provides a computationally tractable method for precisely predicting various performance metrics including mean-squared error and sparsity pattern recovery probability.
An overview of quality of experience measurement challenges for video applications in IP networks The increase in multimedia content on the Internet has created a renewed interest in quality assessment. There is however a main difference from the traditional quality assessment approaches, as now, the focus relies on the user perceived quality, opposed to the network centered approach classically proposed. In this paper we overview the most relevant challenges to perform Quality of Experience (QoE) assessment in IP networks and highlight the particular considerations necessary when compared to alternative mechanisms, already deployed, such as Quality of Service (QoS). To assist on the handling of such challenges we first discuss the different approaches to Quality of Experience assessment along with the most relevant QoE metrics, and then we discuss how they are used to provide objective results about user satisfaction.
A model to perform knowledge-based temporal abstraction over multiple signals In this paper we propose the Multivariable Fuzzy Temporal Profile model (MFTP), which enables the projection of expert knowledge on a physical system over a computable description. This description may be used to perform automatic abstraction on a set of parameters that represent the temporal evolution of the system. This model is based on the constraint satisfaction problem (CSP) formalism, which enables an explicit representation of the knowledge, and on fuzzy set theory, from which it inherits the ability to model the imprecision and uncertainty that are characteristic of the vagueness of human knowledge. We also present an application of the MFTP model to the recognition of landmarks in mobile robotics, specifically to the detection of doors on ultrasound sensor signals from a Nomad 200 robot.
Split Bregman iterative algorithm for sparse reconstruction of electrical impedance tomography In this paper, we present an evaluation of the use of the split Bregman iterative algorithm for the L1-norm regularized inverse problem of electrical impedance tomography. Simulations are performed to validate that our algorithm is competitive in terms of the imaging quality and computational speed in comparison with several state-of-the-art algorithms. Results also indicate that in contrast to the conventional L2-norm regularization method and total variation (TV) regularization method, the L1-norm regularization method can sharpen the edges and is more robust against data noises.
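The abstract does not spell out the iteration, so the following is only a generic split Bregman sketch for an L1-regularized least-squares problem min_x 0.5*||Ax - y||^2 + lam*||x||_1; in the EIT setting A would be a linearized sensitivity (Jacobian) matrix, which is an assumption here rather than the authors' exact formulation.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def split_bregman_l1(A, y, lam, mu=1.0, n_iter=200):
    """Generic split Bregman iteration for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    n = A.shape[1]
    x = np.zeros(n); d = np.zeros(n); b = np.zeros(n)
    M = A.T @ A + mu * np.eye(n)     # the x-subproblem is a fixed linear system
    Aty = A.T @ y
    for _ in range(n_iter):
        x = np.linalg.solve(M, Aty + mu * (d - b))   # quadratic subproblem
        d = soft_threshold(x + b, lam / mu)          # shrinkage (L1 subproblem)
        b = b + x - d                                # Bregman variable update
    return x
```

Because the linear system matrix never changes across iterations, it can be factored once up front, which is where much of the speed advantage usually attributed to split Bregman schemes comes from.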
1.2
0.2
0.2
0.04
0.010526
0
0
0
0
0
0
0
0
0
Integration of an Index to Preserve the Semantic Interpretability in the Multiobjective Evolutionary Rule Selection and Tuning of Linguistic Fuzzy Systems In this paper, we propose an index that helps preserve the semantic interpretability of linguistic fuzzy models while a tuning of the membership functions (MFs) is performed. The proposed index is the aggregation of three metrics that preserve the original meanings of the MFs as much as possible while a tuning of their definition parameters is performed. Additionally, rule-selection mechanisms can be used to reduce the model complexity, which involves another important interpretability aspect. To this end, we propose a postprocessing multiobjective evolutionary algorithm that performs rule selection and tuning of fuzzy-rule-based systems with three objectives: accuracy, semantic interpretability maximization, and complexity minimization. We tested our approach on nine real-world regression datasets. In order to analyze the interaction between the fuzzy-rule-selection approach and the tuning approach, these are also individually proved in a multiobjective framework and compared with their respective single-objective counterparts. We compared the different approaches by applying nonparametric statistical tests for pairwise and multiple comparisons, taking into consideration three representative points from the obtained Pareto fronts in the case of the multiobjective-based approaches. Results confirm the effectiveness of our approach, and a wide range of solutions is obtained, which are not only more interpretable but are also more accurate.
Looking for a good fuzzy system interpretability index: An experimental approach Interpretability is acknowledged as the main advantage of fuzzy systems and it should be given a main role in fuzzy modeling. Classical systems are viewed as black boxes because mathematical formulas set the mapping between inputs and outputs. On the contrary, fuzzy systems (if they are built regarding some constraints) can be seen as gray boxes in the sense that every element of the whole system can be checked and understood by a human being. Interpretability is essential for those applications with high human interaction, for instance decision support systems in fields like medicine, economics, etc. Since interpretability is not guaranteed by definition, a huge effort has been done to find out the basic constraints to be superimposed during the fuzzy modeling process. People talk a lot about interpretability but the real meaning is not clear. Understanding of fuzzy systems is a subjective task which strongly depends on the background (experience, preferences, and knowledge) of the person who makes the assessment. As a consequence, although there have been a few attempts to define interpretability indices, there is still not a universal index widely accepted. As part of this work, with the aim of evaluating the most used indices, an experimental analysis (in the form of a web poll) was carried out yielding some useful clues to keep in mind regarding interpretability assessment. Results extracted from the poll show the inherent subjectivity of the measure because we collected a huge diversity of answers completely different at first glance. However, it was possible to find out some interesting user profiles after comparing carefully all the answers. It can be concluded that defining a numerical index is not enough to get a widely accepted index. Moreover, it is necessary to define a fuzzy index easily adaptable to the context of each problem as well as to the user quality criteria.
SLAVE: a genetic learning system based on an iterative approach SLAVE is an inductive learning algorithm that uses concepts based on fuzzy logic theory. This theory has been shown to be a useful representational tool for improving the understanding of the knowledge obtained from a human point of view. Furthermore, SLAVE uses an iterative approach for learning based on the use of a genetic algorithm (GA) as a search algorithm. We propose a modification of the initial iterative approach used in SLAVE. The main idea is to include more information in the process of learning one individual rule. This information is included in the iterative approach through a different proposal of calculus of the positive and negative example to a rule. Furthermore, we propose the use of a new fitness function and additional genetic operators that reduce the time needed for learning and improve the understanding of the rules obtained
Semantic constraints for membership function optimization The optimization of fuzzy systems using bio-inspired strategies, such as neural network learning rules or evolutionary optimization techniques, is becoming more and more popular. In general, fuzzy systems optimized in such a way cannot provide a linguistic interpretation, preventing us from using one of their most interesting and useful features. This paper addresses this difficulty and points out a set of constraints that when used within an optimization scheme obviate the subjective task of interpreting membership functions. To achieve this a comprehensive set of semantic properties that membership functions should have is postulated and discussed. These properties are translated in terms of nonlinear constraints that are coded within a given optimization scheme, such as backpropagation. Implementation issues and one example illustrating the importance of the proposed constraints are included
Schopenhauer's Prolegomenon to Fuzziness “Prolegomenon” means something said in advance of something else. In this study, we posit that part of the work by Arthur Schopenhauer (1788–1860) can be thought of as a prolegomenon to the existing concept of “fuzziness.” His epistemic framework offers a comprehensive and surprisingly modern framework to study individual decision making and suggests a bridgeway from the Kantian program into the concept of fuzziness, which may have had its second prolegomenon in the work by Frege, Russell, Wittgenstein, Peirce and Black. In this context, Zadeh's seminal contribution can be regarded as the logical consequence of the Kant-Schopenhauer representation framework.
Openings and Closures of Fuzzy Preorderings: Theoretical Basics and Applications to Fuzzy Rule-Based Systems The purpose of this paper is two-fold. Firstly, a general concept of closedness of fuzzy sets under fuzzy preorderings is proposed and investigated along with the corresponding opening and closure operators. Secondly, the practical impact of this notion is demonstrated by applying it to the analysis of ordering-based modifiers.
Hybrid intelligent systems for time series prediction using neural networks, fuzzy logic, and fractal theory In this paper, we describe a new method for the estimation of the fractal dimension of a geometrical object using fuzzy logic techniques. The fractal dimension is a mathematical concept, which measures the geometrical complexity of an object. The algorithms for estimating the fractal dimension calculate a numerical value using as data a time series for the specific problem. This numerical (crisp) value gives an idea of the complexity of the geometrical object (or time series). However, there is an underlying uncertainty in the estimation of the fractal dimension because we use only a sample of points of the object, and also because the numerical algorithms for the fractal dimension are not completely accurate. For this reason, we have proposed a new definition of the fractal dimension that incorporates the concept of a fuzzy set. This new definition can be considered a weaker definition (but more realistic) of the fractal dimension, and we have named this the "fuzzy fractal dimension." We can apply this new definition of the fractal dimension in conjunction with soft computing techniques for the problem of time series prediction. We have developed hybrid intelligent systems combining neural networks, fuzzy logic, and the fractal dimension, for the problem of time series prediction, and we have achieved very good results.
Sensed Signal Strength Forecasting for Wireless Sensors Using Interval Type-2 Fuzzy Logic System. In this paper, we present a new approach for sensed signal strength forecasting in wireless sensors using interval type-2 fuzzy logic system (FLS). We show that a type-2 fuzzy membership function, i.e., a Gaussian MF with uncertain mean is most appropriate to model the sensed signal strength of wireless sensors. We demonstrate that the sensed signals of wireless sensors are self-similar, which means it can be forecasted. An interval type-2 FLS is designed for sensed signal forecasting and is compared against a type-1 FLS. Simulation results show that the interval type-2 FLS performs much better than the type-1 FLS in sensed signal forecasting. This application can be further used for power on/off control in wireless sensors to save battery energy.
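As a rough sketch of the membership model this abstract refers to, the footprint of uncertainty of a Gaussian MF with an uncertain mean m in [m1, m2] and fixed sigma can be described by an upper and a lower membership function. The signal values and parameters below are illustrative only, not taken from the paper.

```python
import numpy as np

def it2_gaussian_uncertain_mean(x, m1, m2, sigma):
    """Lower/upper membership of an interval type-2 Gaussian MF with mean in [m1, m2]."""
    x = np.asarray(x, dtype=float)
    gauss = lambda x, m: np.exp(-0.5 * ((x - m) / sigma) ** 2)
    # Upper MF: 1 between the two means, nearest-mean Gaussian outside that band.
    upper = np.where(x < m1, gauss(x, m1), np.where(x > m2, gauss(x, m2), 1.0))
    # Lower MF: the smaller of the two boundary Gaussians.
    lower = np.minimum(gauss(x, m1), gauss(x, m2))
    return lower, upper

x = np.linspace(-60, -20, 5)                 # hypothetical sensed signal strengths (dBm)
lo, up = it2_gaussian_uncertain_mean(x, m1=-42.0, m2=-38.0, sigma=6.0)
print(np.round(lo, 3), np.round(up, 3))      # lower membership <= upper membership everywhere
```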
An Approach to Computing With Words Based on Canonical Characteristic Values of Linguistic Labels Herrera and Martinez initiated a 2-tuple fuzzy linguistic representation model for computing with words (CW), which offers a computationally feasible method for aggregating linguistic information (that are represented by linguistic variables with equidistant labels) through counting "indexes" of the corresponding linguistic labels. Lawry introduced an alternative approach to CW based on mass assignment theory that takes into account the underlying definitions of the words. Recently, we provided a new (proportional) 2-tuple fuzzy linguistic representation model for CW that is an extension of Herrera and Martinez's model and takes into account the underlying definitions of linguistic labels of linguistic variables in the process of aggregating linguistic information by assigning canonical characteristic values (CCVs) of the corresponding linguistic labels. In this paper, we study further into CW based on CCVs of linguistic labels to provide a unifying link between Lawry's framework and Herrera and Martinez's 2-tuple framework as well as allowing for computationally feasible CW. Our approach is based on a formal definition of CCV functions of a linguistic variable (which is introduced under the context of its proportional 2-tuple linguistic representation model as continuation of our earlier works) and on a group voting model that is for the probabilistic interpretation of the (whole) semantics of the ordered linguistic terms set of an arbitrary linguistic variable. After the general framework developed in the former part of this paper, we focus on a particular linguistic variable - probability. We show that, for a linguistic probability description, the expectation of its posterior conditional probability is a canonical characteristic value of the linguistic probability description. Then, a calculus for reasoning with linguistic syllogisms and inference from linguistic information is introduced. It is investigated under the context that linguistic quantifiers in such linguistic syllogisms can be arbitrary linguistic probability description and that the related linguistic information can be linguistic facts, overfacts, or underfacts. Intrinsically, our approach to this calculus is with computing with words based on canonical characteristic values of linguistic labels.
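For readers unfamiliar with the Herrera and Martinez model that this work extends, the minimal sketch below shows the basic 2-tuple translation functions Delta and its inverse; the label set and aggregated value are hypothetical, and the proportional 2-tuple and CCV machinery of the paper itself is not reproduced here.

```python
def to_2tuple(beta, labels):
    """Delta: map an aggregated value beta in [0, g] to (s_i, alpha), alpha in [-0.5, 0.5)."""
    i = int(round(beta))
    return labels[i], beta - i

def from_2tuple(label, alpha, labels):
    """Inverse Delta: recover the numerical value in [0, g]."""
    return labels.index(label) + alpha

labels = ["none", "very_low", "low", "medium", "high", "very_high", "perfect"]
beta = (2 + 4 + 3) / 3.0                    # e.g. the mean of the indexes of low, high, medium
print(to_2tuple(beta, labels))              # ('medium', 0.0)
print(from_2tuple("high", -0.25, labels))   # 3.75
```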
Type-2 Fuzzy Sets for Pattern Classification: A Review This paper reviews the advances of type-2 fuzzy sets for pattern classification. The recent success of type-2 fuzzy sets has been largely attributed to their three-dimensional membership functions to handle more uncertainties in real-world problems. In pattern classification, both feature and hypothesis spaces have uncertainties, which motivate us of integrating type-2 fuzzy sets with traditional classifiers to achieve a better performance in terms of robustness, generalization ability, or classification rates. We describe recent type-2 fuzzy classifiers, from which we summarize a systematic approach to solve pattern classification problems. Finally, we discuss the trade-off between complexity and performance when using type-2 fuzzy classifiers, and explain the current difficulty of applying type-2 fuzzy sets to pattern classification
Sparse Signal Detection from Incoherent Projections The recently introduced theory of Compressed Sensing (CS) enables the reconstruction or approximation of sparse or compressible signals from a small set of incoherent projections; often the number of projections can be much smaller than the number of Nyquist rate samples. In this paper, we show that the CS framework is information scalable to a wide range of statistical inference tasks. In particular, we demonstrate how CS principles can solve signal detection problems given incoherent measurements without ever reconstructing the signals involved. We specifically study the case of signal detection in strong interference and noise and propose an Incoherent Detection and Estimation Algorithm (IDEA) based on Matching Pursuit. The number of measurements and computations necessary for successful detection using IDEA is significantly lower than that necessary for successful reconstruction. Simulations show that IDEA is very resilient to strong interference, additive noise, and measurement quantization. When combined with random measurements, IDEA is applicable to a wide range of different signal classes.
New Components for Building Fuzzy Logic Circuits This paper presents two new designs of fuzzy logic circuit components. Currently due to the lack of fuzzy components, many fuzzy systems cannot be fully implemented in hardware. We propose the designs of a new fuzzy memory cell and a new fuzzy logic gate. Unlike a digital memory cell that can only store either a zero or a one, our fuzzy memory cell can store any value ranging from zero to one. The fuzzy memory cell can also be used as a D-type fuzzy flip-flop, which is the first design of a D-type fuzzy flip-flop. We also designed a new fuzzy NOT gate based only on digital NOT gates that can easily be implemented in CMOS microchips. Our D-type fuzzy flip-flop and fuzzy NOT gate together with fuzzy AND gate and fuzzy OR gate allow us to design and implement fuzzy logic circuits to fully exploit fuzzy paradigms in hardware.
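The circuit-level designs themselves are not described in the abstract; the behavioral sketch below only illustrates the min/max/complement semantics usually assumed for fuzzy AND, OR, and NOT gates, together with a toy D-type latch that stores a value in [0, 1] instead of a bit. The class and its interface are hypothetical, not the authors' CMOS implementation.

```python
import numpy as np

def fuzzy_not(a):
    """Standard fuzzy complement: 1 - a."""
    return 1.0 - np.asarray(a)

def fuzzy_and(a, b):
    """Zadeh t-norm (minimum)."""
    return np.minimum(a, b)

def fuzzy_or(a, b):
    """Zadeh t-conorm (maximum)."""
    return np.maximum(a, b)

class FuzzyDFlipFlop:
    """Behavioral toy model: on each clock tick, latch any membership value in [0, 1]."""
    def __init__(self, q0=0.0):
        self.q = q0
    def clock(self, d):
        self.q = float(np.clip(d, 0.0, 1.0))
        return self.q

ff = FuzzyDFlipFlop()
print(fuzzy_and(0.7, 0.4), fuzzy_or(0.7, 0.4), fuzzy_not(0.7))  # min, max, complement
print(ff.clock(0.62))                                           # 0.62 is stored, not rounded
```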
Spatially-Localized Compressed Sensing and Routing in Multi-hop Sensor Networks We propose energy-efficient compressed sensing for wireless sensor networks using spatially-localized sparse projections. To keep the transmission cost for each measurement low, we obtain measurements from clusters of adjacent sensors. With localized projection, we show that joint reconstruction provides significantly better reconstruction than independent reconstruction. We also propose a metric of energy overlap between clusters and basis functions that allows us to characterize the gains of joint reconstruction for different basis functions. Compared with state-of-the-art compressed sensing techniques for sensor networks, our simulation results demonstrate significant gains in reconstruction accuracy and transmission cost.
Performance and Quality Evaluation of a Personalized Route Planning System Advanced personalization of database applications is a big challenge, in particular for distributed mobile environments. We present several new results from a prototype of a route planning system. We demonstrate how to combine qualitative and quantitative preferences gained from situational aspects and from personal user preferences. For performance studies we analyze the runtime efficiency of the SR-Combine algorithm used to evaluate top-k queries. By determining the cost-ratio of random to sorted accesses, SR-Combine can automatically tune its performance within the given system architecture. Top-k queries are generated by mapping linguistic variables to numerical weightings. Moreover, we analyze the quality of the query results by several test series, systematically varying the mappings of the linguistic variables. We report interesting insights into this rather under-researched important topic. More investigations, incorporating also cognitive issues, need to be conducted in the future.
1.106953
0.017967
0.017684
0.012264
0.002572
0.000762
0.000238
0.000063
0.000018
0.000002
0
0
0
0
Monte-Carlo driven stochastic optimization framework for handling fabrication variability Increasing effects of fabrication variability have inspired a growing interest in statistical techniques for design optimization. In this work, we propose a Monte-Carlo driven stochastic optimization framework that does not rely on the distribution of the varying parameters (unlike most other existing techniques). Stochastic techniques like Successive Sample Mean Optimization (SSMO) and Stochastic Decomposition present a strong framework for solving linear programming formulations in which the parameters behave as random variables. We consider Binning-Yield Loss (BYL) as the optimization objective and show that we can get a provably optimal solution under a convex BYL function. We apply this framework to the MTCMOS sizing problem [21] using SSMO and Stochastic Decomposition techniques. The experimental results show that the solution obtained from the stochastic-decomposition-based framework had 0% yield-loss, while the deterministic solution [21] had a 48% yield-loss.
A Framework for Scalable Postsilicon Statistical Delay Prediction Under Process Variations Due to increased variability trends in nanoscale integrated circuits, statistical circuit analysis and optimization has become essential. While statistical timing analysis has an important role to play in this process, it is equally important to develop die-specific delay prediction techniques using postsilicon measurements. We present a novel method for postsilicon delay analysis. We gather data from a small number of on-chip test structures, and combine this information with presilicon statistical timing analysis to obtain narrow die-specific timing probability density function (PDF). Experimental results show that for the benchmark suite being considered, taking all parameter variations into consideration, our approach can obtain a PDF whose standard deviation is 79.0% smaller, on average, than the statistical timing analysis result. The accuracy of the method defined by our metric is 99.6% compared to Monte Carlo simulation. The approach is scalable to smaller test structure overheads and can still produce acceptable results.
Variability Driven Gate Sizing for Binning Yield Optimization High performance applications are highly affected by process variations due to considerable spread in their expected frequencies after fabrication. Typically, "binning" is applied to those chips that are not meeting their performance requirement after fabrication. Using binning, such failing chips are sold at a loss (e.g., proportional to the degree that they are failing their performance requirement). This paper discusses a gate-sizing algorithm to minimize the "yield-loss" associated with binning. We propose a binning yield-loss function as a suitable objective to be minimized. We show this objective is convex with respect to the size variables and consequently can be optimally and efficiently solved. These contributions are made without any specific assumptions about the sources of variability or how they are modeled. We show that computation of the binning yield-loss can be done via any desired statistical static timing analysis (SSTA) tool. The proposed technique is compared with a recently proposed sensitivity-based statistical sizer, a deterministic sizer with a worst-case variability estimate, and a deterministic sizer with a relaxed area constraint. We show consistent improvement over the sensitivity-based approach in quality of solution (final binning yield-loss value) as well as a large run-time gain. Moreover, we show that a deterministic sizer with a relaxed area constraint will also result in reasonably good binning yield-loss values at the cost of the extra area overhead.
Statistical Timing Analysis: From Basic Principles to State of the Art Static-timing analysis (STA) has been one of the most pervasive and successful analysis engines in the design of digital circuits for the last 20 years. However, in recent years, the increased loss of predictability in semiconductor devices has raised concern over the ability of STA to effectively model statistical variations. This has resulted in extensive research in the so-called statistical STA (SSTA), which marks a significant departure from the traditional STA framework. In this paper, we review the recent developments in SSTA. We first discuss its underlying models and assumptions, then survey the major approaches, and close by discussing its remaining key challenges.
Semantics of Context-Free Languages "Meaning" may be assigned to a string in a context-free language by defining "attributes" of the symbols in a derivation tree for that string. The attributes can be defined by functions associated with each production in the grammar. This paper examines the implications of this process when some of the attributes are "synthesized", i.e., defined solely in terms of attributes of the descendants of the corresponding nonterminal symbol, while other attributes are "inherited", i.e., defined in terms of attributes of the ancestors of the nonterminal symbol. An algorithm is given which detects when such semantic rules could possibly lead to circular definition of some attributes. An example is given of a simple programming language defined with both inherited and synthesized attributes, and the method of definition is compared to other techniques for formal specification of semantics which have appeared in the literature. A simple technique for specifying the "meaning" of languages defined by context-free grammars is introduced in Section 1 of this paper, and its basic mathematical properties are investigated in Sections 2 and 3. An example which indicates how the technique can be applied to the formal definition of programming languages is described in Section 4, and finally, Section 5 contains a somewhat biased comparison of the present method to other known techniques for semantic definition. The discussion in this paper is oriented primarily towards programming languages, but the same methods appear to be relevant also in the study of natural languages. 1. Introduction. Let us suppose that we want to give a precise definition of binary notation for numbers. This can be done in many ways, and in this section we want to consider a manner of definition which can be generalized so that the meaning of other notations can be expressed in the same way. One such way to define binary notation is to base a definition on
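Knuth's running example is the value of a binary numeral; the short sketch below flattens that attribute computation into plain code (the synthesized value of each digit is its bit times 2 raised to an inherited scale), purely as an illustration of what the attributed derivation tree evaluates to.

```python
# Grammar from the example: N -> L | L . L ;  L -> B | L B ;  B -> 0 | 1
# Each bit B carries a synthesized value = bit * 2**scale, where scale is
# inherited from the bit's position; the numeral's value sums these attributes.
def numeral_value(s):
    int_part, _, frac_part = s.partition(".")
    # Integer digits: inherited scales len-1, len-2, ..., 0.
    v = sum(int(b) * 2 ** (len(int_part) - 1 - i) for i, b in enumerate(int_part))
    # Fractional digits: inherited scales -1, -2, ...
    v += sum(int(b) * 2 ** (-(i + 1)) for i, b in enumerate(frac_part))
    return v

print(numeral_value("1101.01"))   # 13.25
```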
A framework for accounting for process model uncertainty in statistical static timing analysis In recent years, a large body of statistical static timing analysis and statistical circuit optimization techniques have emerged, providing important avenues to account for the increasing process variations in design. The realization of these statistical methods often demands the availability of statistical process variation models whose accuracy, however, is severely hampered by limitations in test structure design, test time and various sources of inaccuracy inevitably incurred in process characterization. Consequently, it is desired that statistical circuit analysis and optimization can be conducted based upon imprecise statistical variation models. In this paper, we present an efficient importance sampling based optimization framework that can translate the uncertainty in the process models to the uncertainty in parametric yield, thus offering the very much desired statistical best/worst-case circuit analysis capability accounting for unavoidable complexity in process characterization. Unlike the previously proposed statistical learning and probabilistic interval based techniques, our new technique efficiently computes tight bounds of the parametric circuit yields based upon bounds of statistical process model parameters while fully capturing correlation between various process variations. Furthermore, our new technique provides valuable guidance to process characterization. Examples are included to demonstrate the application of our general analysis framework under the context of statistical static timing analysis.
Type-2 Fuzzy Decision Trees This paper presents type-2 fuzzy decision trees (T2FDTs) that employ type-2 fuzzy sets as values of attributes. A modified fuzzy double clustering algorithm is proposed as a method for generating type-2 fuzzy sets. This method allows us to create T2FDTs that are easy to interpret and understand. To illustrate the performance of the proposed T2FDTs and to compare them with results obtained for type-1 fuzzy decision trees (T1FDTs), two benchmark data sets, available on the internet, have been used.
Stable recovery of sparse overcomplete representations in the presence of noise Overcomplete representations are attracting interest in signal processing theory, particularly due to their potential to generate sparse representations of signals. However, in general, the problem of finding sparse representations must be unstable in the presence of noise. This paper establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system. Considering an ideal underlying signal that has a sufficiently sparse representation, it is assumed that only a noisy version of it can be observed. Assuming further that the overcomplete system is incoherent, it is shown that the optimally sparse approximation to the noisy data differs from the optimally sparse decomposition of the ideal noiseless signal by at most a constant multiple of the noise level. As this optimal-sparsity method requires heavy (combinatorial) computational effort, approximation algorithms are considered. It is shown that similar stability is also available using the basis and the matching pursuit algorithms. Furthermore, it is shown that these methods result in sparse approximation of the noisy data that contains only terms also appearing in the unique sparsest representation of the ideal noiseless sparse signal.
Statistical timing analysis for intra-die process variations with spatial correlations Process variations have become a critical issue in performance verification of high-performance designs. We present a new, statistical timing analysis method that accounts for inter- and intra-die process variations and their spatial correlations. Since statistical timing analysis has an exponential run time complexity, we propose a method whereby a statistical bound on the probability distribution function of the exact circuit delay is computed with linear run time. First, we develop a model for representing inter- and intra-die variations and their spatial correlations. Using this model, we then show how gate delays and arrival times can be represented as a sum of components, such that the correlation information between arrival times and gate delays is preserved. We then show how arrival times are propagated and merged in the circuit to obtain an arrival time distribution that is an upper bound on the distribution of the exact circuit delay. We prove the correctness of the bound and also show how the bound can be improved by propagating multiple arrival times. The proposed algorithms were implemented and tested on a set of benchmark circuits under several process variation scenarios. The results were compared with Monte Carlo simulation and show an accuracy of 3.32% on average over all test cases.
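As a point of reference for what such an analytical bound is typically compared against, the sketch below runs a plain Monte Carlo estimate of the circuit delay distribution for a toy netlist with a distance-based spatial correlation model. The gate placements, nominal delays, and exponential correlation function are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: 4 gates on a die; delay variation is jointly Gaussian
# with a distance-based (exponential) spatial correlation.
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # gate locations (mm)
nominal = np.array([10.0, 12.0, 9.0, 11.0])                        # nominal delays (ps)
sigma, corr_len = 1.5, 2.0                                         # std dev (ps), corr. length (mm)

dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
cov = sigma**2 * np.exp(-dist / corr_len)

# Two converging paths that share gates 1 and 3.
paths = [np.array([0, 1, 3]), np.array([2, 1, 3])]

samples = rng.multivariate_normal(nominal, cov, size=100_000)
path_delays = np.stack([samples[:, p].sum(axis=1) for p in paths], axis=1)
circuit_delay = path_delays.max(axis=1)   # "max" of correlated path delays

print("mean circuit delay:", circuit_delay.mean())
print("99th-percentile delay:", np.percentile(circuit_delay, 99))
```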
Ranking type-2 fuzzy numbers Type-2 fuzzy sets are a generalization of the ordinary fuzzy sets in which each type-2 fuzzy set is characterized by a fuzzy membership function. In this paper, we consider the problem of ranking a set of type-2 fuzzy numbers. We adopt a statistical viewpoint and interpret each type-2 fuzzy number as an ensemble of ordinary fuzzy numbers. This enables us to define a type-2 fuzzy rank and a type-2 rank uncertainty for each intuitionistic fuzzy number. We show the reasonableness of the results obtained by examining several test cases
Robust Regression and Lasso Lasso, or l1 regularized least squares, has been explored extensively for its remarkable sparsity properties. In this paper it is shown that the solution to Lasso, in addition to its sparsity, has robustness properties: it is the solution to a robust optimization problem. This has two important consequences. First, robustness provides a connection of the regularizer to a physical property, namely, protection from noise. This allows a principled selection of the regularizer, and in particular, generalizations of Lasso that also yield convex optimization problems are obtained by considering different uncertainty sets. Second, robustness can itself be used as an avenue for exploring different properties of the solution. In particular, it is shown that robustness of the solution explains why the solution is sparse. The analysis as well as the specific results obtained differ from standard sparsity results, providing different geometric intuition. Furthermore, it is shown that the robust optimization formulation is related to kernel density estimation, and based on this approach, a proof that Lasso is consistent is given, using robustness directly. Finally, a theorem is proved which states that sparsity and algorithmic stability contradict each other, and hence Lasso is not stable.
Using trapezoids for representing granular objects: Applications to learning and OWA aggregation We discuss the role and benefits of using trapezoidal representations of granular information. We focus on the use of level sets as a tool for implementing many operations on trapezoidal sets. We point out the simplification that the linearity of the trapezoid brings by requiring us to perform operations on only two level sets. We investigate the classic learning algorithm in the case when our observations are granule objects represented as trapezoidal fuzzy sets. An important issue that arises is the adverse effect that very uncertain observations have on the quality of our estimates. We suggest an approach to addressing this problem using the specificity of the observations to control its effect. We next consider the OWA aggregation of information represented as trapezoids. An important problem that arises here is the ordering of the trapezoidal fuzzy sets needed for the OWA aggregation. We consider three approaches to accomplish this ordering based on the location, specificity and fuzziness of the trapezoids. From these three different approaches three fundamental methods of ordering are developed. One based on the mean of the 0.5 level sets, another based on the length of the 0.5 level sets and a third based on the difference in lengths of the core and support level sets. Throughout this work particular emphasis is placed on the simplicity of working with trapezoids while still retaining a rich representational capability.
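The following sketch illustrates the point about linearity: for a trapezoid (a, b, c, d) every alpha-cut is a linear interpolation between the support and the core, so an aggregation only needs to operate on two level sets. The ordering key used here is the mean of the 0.5 level set, one of the orderings discussed; the weights and trapezoids are made up for illustration.

```python
import numpy as np

def level_set(trap, alpha):
    """Alpha-cut [left, right] of a trapezoid (a, b, c, d); linear in alpha."""
    a, b, c, d = trap
    return a + alpha * (b - a), d - alpha * (d - c)

def owa_trapezoids(traps, weights):
    """OWA aggregation: order by the mean of the 0.5 level set, then take the
    weighted sum parameter-by-parameter (equivalent to combining two level sets)."""
    key = lambda t: sum(level_set(t, 0.5)) / 2.0
    ordered = sorted(traps, key=key, reverse=True)     # descending order, as in OWA
    W = np.asarray(weights)[:, None]
    return tuple((W * np.asarray(ordered, dtype=float)).sum(axis=0))

traps = [(1, 2, 3, 4), (0, 1, 1, 2), (2, 3, 4, 6)]
print(owa_trapezoids(traps, [0.5, 0.3, 0.2]))   # (1.3, 2.3, 3.1, 4.6)
```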
Heden's bound on maximal partial spreads We prove Heden's result that the deficiency δ of a maximal partial spread in PG(3, q) is greater than 1 + ½(1 + √5)√q unless δ − 1 is a multiple of p, where q = p^n. When q is odd and not a square, we are able to improve this lower bound to roughly √(3q).
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.2
0.066667
0.016667
0.004545
0
0
0
0
0
0
0
0
0
0
Statistical leakage power minimization using fast equi-slack shell based optimization Leakage power is becoming an increasingly important component of total chip power consumption for nanometer IC designs. Minimization of leakage power unavoidably enforces the consideration of the key sources of process variations, namely transistor channel length and threshold variations, since both have a significant impact on timing and leakage power. However, the statistical nature of chip performance often requires the use of expensive statistical analysis and optimization techniques in a leakage minimization task, contributing to high computational complexity. Further, the commonly used discrete cell libraries bring specific difficulties for design optimization and render purely continuous sizing and VT optimization algorithms suboptimal. In this paper, we present a fast yet effective approach to statistical leakage power reduction via gate sizing and multiple VT assignment. The proposed technique achieves runtime efficiency via the novel concept of equi-slack shells and performs fast leakage power reduction on the basis of shells while maintaining the timing yield. When combined with a finer-grained gate-based post-tuning step, the presented technique achieves superior runtime efficiency while offering significant leakage power reduction.
Optimization objectives and models of variation for statistical gate sizing This paper approaches statistical optimization by examining gate delay variation models and optimization objectives. Most previous work on statistical optimization has focused exclusively on the optimization algorithms without considering the effects of the variation models and objective functions. This work empirically derives a simple variation model that is then used to optimize for robustness. Optimization results from example circuits are used to study the effect of the statistical objective function on parametric yield.
Fast Estimation of Timing Yield Bounds for Process Variations With aggressive scaling down of feature sizes in VLSI fabrication, process variation has become a critical issue in designs. We show that two necessary conditions for the "max" operation are actually not satisfied in the moment-matching based statistical timing analysis approaches. We propose two correlation-aware block-based statistical timing analysis approaches that keep these necessary conditions, and show that our approaches always achieve the lower bound and the upper bound on the timing yield. Our approach, combined with moment-matching based statistical static timing analysis (SSTA) approaches, can efficiently estimate the maximal possible errors of moment-matching-based SSTA approaches.
A New Method for Design of Robust Digital Circuits As technology continues to scale beyond 100 nm, there is a significant increase in performance uncertainty of CMOS logic due to process and environmental variations. Traditional circuit optimization methods assuming deterministic gate delays produce a flat "wall" of equally critical paths, resulting in variation-sensitive designs. This paper describes a new method for sizing of digital circuits, with uncertain gate delays, to minimize their performance variation leading to a higher parametric yield. The method is based on adding margins on each gate delay to account for variations and using a new "soft maximum" function to combine path delays at converging nodes. Using analytic models to predict the means and standard deviations of gate delays as posynomial functions of the device sizes, we create a simple, computationally efficient heuristic for uncertainty-aware sizing of digital circuits via geometric programming. Monte-Carlo simulations on custom 32-bit adders and ISCAS'85 benchmarks show that about 10% to 20% delay reduction over deterministic sizing methods can be achieved, without any additional cost in area.
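The exact smoothing function is not given in the abstract, so the sketch below uses the common log-sum-exp form of a "soft maximum", which is differentiable and always upper-bounds the true maximum of the converging path delays; the sharpness parameter beta and the delay values are illustrative.

```python
import numpy as np

def soft_max(delays, beta=10.0):
    """Log-sum-exp smooth approximation of max(); one common "soft maximum" form.
    The paper's exact smoothing function may differ -- this is only a sketch."""
    d = np.asarray(delays, dtype=float)
    return np.log(np.sum(np.exp(beta * (d - d.max())))) / beta + d.max()

paths = [10.2, 10.1, 9.8]          # hypothetical converging path delays (ns), margins included
print(max(paths), soft_max(paths, beta=5.0), soft_max(paths, beta=50.0))
```

Because log-sum-exp never underestimates the maximum, a combiner of this kind stays conservative while removing the non-differentiability that produces the flat wall of equally critical paths.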
Statistical Static Timing Analysis Considering Process Variation Model Uncertainty Increasing variability in modern manufacturing processes makes it important to predict the yields of chip designs at early design stage. In recent years, a number of statistical static timing analysis (SSTA) and statistical circuit optimization techniques have emerged to quickly estimate the design yield and perform robust optimization. These statistical methods often rely on the availability of statistical process variation models whose accuracy, however, is severely hampered by the limitations in test structure design, test time, and various sources of inaccuracy inevitably incurred in process characterization. To consider model characterization inaccuracy, we present an efficient importance sampling based optimization framework that can translate the uncertainty in process models to the uncertainty in circuit performance, thus offering the desired statistical best/worst case circuit analysis capability accounting for the unavoidable complexity in process characterization. Furthermore, our new technique provides valuable guidance to process characterization. Examples are included to demonstrate the application of our general analysis framework under the context of SSTA.
Statistical multilayer process space coverage for at-speed test Increasingly large process variations make selection of a set of critical paths for at-speed testing essential yet challenging. This paper proposes a novel multilayer process space coverage metric to quantitatively gauge the quality of path selection. To overcome the exponential complexity in computing such a metric, this paper reveals its relationship to a concept called order statistics for a set of correlated random variables, efficient computation of which is a hitherto open problem in the literature. This paper then develops an elegant recursive algorithm to compute the order statistics (or the metric) in provable linear time and space. With a novel data structure, the order statistics can also be incrementally updated. By employing a branch-and-bound path selection algorithm with above techniques, this paper shows that selecting an optimal set of paths for a multi-million-gate design can be performed efficiently. Compared to the state-of-the-art, experimental results show both the efficiency of our algorithms and better quality of our path selection.
Statistical timing based on incomplete probabilistic descriptions of parameter uncertainty Existing approaches to timing analysis under uncertainty are based on restrictive assumptions. Statistical STA techniques assume that the full probabilistic distribution of parameter uncertainty is available; in reality, the complete probabilistic description often cannot be obtained. In this paper, a new paradigm for parameter uncertainty description is proposed as a way to consistently and rigorously handle partially available descriptions of parameter uncertainty. The paradigm is based on a theory of interval probabilistic models that permit handling uncertainty that is described in a distribution-free mode - just via the range, the mean, and the variance. This permits effectively handling multiple real-life challenges, including imprecise and limited information about the distributions of process parameters, parameters coming from different populations, and the sources of uncertainty that are too difficult to handle via full probabilistic measures (e.g. on-chip supply voltage variation). Specifically, analytical techniques for bounding the distributions of probabilistic interval variables are proposed. Besides, a provably correct strategy for fast Monte Carlo simulation based on probabilistic interval variables is introduced. A path-based timing algorithm implementing the novel modeling paradigm, as well as handling the traditional variability descriptions, has been developed. The results indicate the proposed algorithm can improve the upper bound of the 90th-percentile circuit delay, on average, by 5.3% across the ISCAS'85 benchmark circuits, compared to the worst-case timing estimates that use only the interval information of the partially specified parameters.
Tensor rank is NP-complete We prove that computing the rank of a three-dimensional tensor over any finite field is NP-complete. Over the rational numbers the problem is NP-hard.
On multi-granular fuzzy linguistic modeling in group decision making problems: A systematic review and future trends. The multi-granular fuzzy linguistic modeling allows the use of several linguistic term sets in fuzzy linguistic modeling. This is quite useful when the problem involves several people with different knowledge levels since they could describe each item with different precision and they could need more than one linguistic term set. Multi-granular fuzzy linguistic modeling has been frequently used in group decision making field due to its capability of allowing each expert to express his/her preferences using his/her own linguistic term set. The aim of this research is to provide insights about the evolution of multi-granular fuzzy linguistic modeling approaches during the last years and discuss their drawbacks and advantages. A systematic literature review is proposed to achieve this goal. Additionally, some possible approaches that could improve the current multi-granular linguistic methodologies are presented.
Impact of interconnect variations on the clock skew of a gigahertz microprocessor Due to the large die sizes and tight relative clock skew margins, the impact of interconnect manufacturing variations on the clock skew in today's gigahertz microprocessors can no longer be ignored. Unlike manufacturing variations in the devices, the impact of the interconnect manufacturing variations on IC timing performance cannot be captured by worst/best case corner point methods. Thus it is difficult to estimate the clock skew variability due to interconnect variations. In this paper we analyze the timing impact of several key statistically independent interconnect variations in a context-dependent manner by applying a previously reported interconnect variational order-reduction technique. The results show that the interconnect variations can cause up to 25% clock skew variability in a modern microprocessor design.
On proactive perfectly secure message transmission This paper studies the interplay of network connectivity and perfectly secure message transmission under the corrupting influence of a Byzantine mobile adversary that may move from player to player but can corrupt no more than t players at any given time. It is known that, in the stationary adversary model where the adversary corrupts the same set of t players throughout the protocol, perfectly secure communication among any pair of players is possible if and only if the underlying synchronous network is (2t + 1)-connected. Surprisingly, we show that (2t + 1)-connectivity is sufficient (and of course, necessary) even in the proactive (mobile) setting where the adversary is allowed to corrupt different sets of t players in different rounds of the protocol. In other words, adversarial mobility has no effect on the possibility of secure communication. Towards this, we use the notion of a Communication Graph, which is useful in modelling scenarios with adversarial mobility. We also show that protocols for reliable and secure communication proposed in [15] can be modified to tolerate the mobile adversary. Further these protocols are round-optimal if the underlying network is a collection of disjoint paths from the sender S to receiver R.
Design of interval type-2 fuzzy models through optimal granularity allocation In this paper, we offer a new design methodology of type-2 fuzzy models whose intent is to effectively exploit the uncertainty of non-numeric membership functions. A new performance index, which guides the development of the fuzzy model, is used to navigate the construction of the fuzzy model. The underlying idea is that an optimal granularity allocation throughout the membership functions used in the fuzzy model leads to the best design. In contrast to the commonly utilized criterion where one strives for the highest accuracy of the model, the proposed index is formed in such a way that the type-2 fuzzy model produces intervals which "cover" the experimental data and at the same time are made as narrow (viz. specific) as possible. A genetic algorithm is proposed to automate the design process and further improve the results by carefully exploiting the search space. Experimental results show the efficiency of the proposed design methodology.
Fuzzy concepts and formal methods: some illustrative examples It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However, some system problems, particularly those drawn from the information systems (IS) problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper identifies the problem types of interest and argues that they are characterised by uncertainty and imprecision. It suggests fuzzy set theory as a useful formalism for modelling aspects of this imprecision. The paper illustrates how a fuzzy logic toolkit for Z can be applied to such problem domains. Several examples are presented illustrating the representation of imprecise concepts as fuzzy sets and relations, and soft pre-conditions and system requirements as a series of linguistically quantified propositions.
Subjective Quality Metric for 3D Video Services Three-dimensional (3D) video service is expected to be introduced as a next-generation television service. Stereoscopic video is composed of two 2D video signals for the left and right views, and these 2D video signals are encoded. Video quality between the left and right views is not always consistent because, for example, each view is encoded at a different bit rate. As a result, the video quality difference between the left and right views degrades the quality of stereoscopic video. However, these characteristics have not been thoroughly studied or modeled. Therefore, it is necessary to better understand how the video quality difference affects stereoscopic video quality and to model the video quality characteristics. To do that, we conducted subjective quality assessments to derive subjective video quality characteristics. The characteristics showed that 3D video quality was affected by the difference in video quality between the left and right views, and that when the difference was small, 3D video quality correlated with the highest 2D video quality of the two views. We modeled these characteristics as a subjective quality metric using a training data set. Finally, we verified the performance of our proposed model by applying it to unknown data sets.
1.111111
0.133333
0.066667
0.024242
0.022222
0.007407
0
0
0
0
0
0
0
0
A Hybrid HDMR for Mixed Multiscale Finite Element Methods with Application to Flows in Random Porous Media. Stochastic modeling has become a popular approach to quantifying uncertainty in flows through heterogeneous porous media. In this approach the uncertainty in the heterogeneous structure of material properties is often parametrized by a high-dimensional random variable, leading to a family of deterministic models. The numerical treatment of this stochastic model becomes very challenging as the dimension of the parameter space increases. To efficiently tackle the high-dimensionality, we propose a hybrid high-dimensional model representation (HDMR) technique, through which the high-dimensional stochastic model is decomposed into a moderate-dimensional stochastic model, in the most active random subspace, and a few one-dimensional stochastic models. The derived low-dimensional stochastic models are solved by incorporating the sparse-grid stochastic collocation method with the proposed hybrid HDMR. In addition, the properties of porous media, such as permeability, often display heterogeneous structure across multiple spatial scales. To treat this heterogeneity we use a mixed multiscale finite element method (MMsFEM). To capture the nonlocal spatial features (i.e., channelized structures) of the porous media and the important effects of random variables, we can hierarchically incorporate the global information individually from each of the random parameters. This significantly enhances the accuracy of the multiscale simulation. Thus, the synergy of the hybrid HDMR and the MMsFEM reduces the dimension of the flow model in both the stochastic and physical spaces, and hence significantly decreases the computational complexity. We analyze the proposed hybrid HDMR technique and the derived stochastic MMsFEM. Numerical experiments are carried out for two-phase flows in random porous media to demonstrate the efficiency and accuracy of the proposed hybrid HDMR with MMsFEM.
A Stochastic Mortar Mixed Finite Element Method for Flow in Porous Media with Multiple Rock Types This paper presents an efficient multiscale stochastic framework for uncertainty quantification in modeling of flow through porous media with multiple rock types. The governing equations are based on Darcy's law with nonstationary stochastic permeability represented as a sum of local Karhunen-Loève expansions. The approximation uses stochastic collocation on either a tensor product or a sparse grid, coupled with a domain decomposition algorithm known as the multiscale mortar mixed finite element method. The latter method requires solving a coarse scale mortar interface problem via an iterative procedure. The traditional implementation requires the solution of local fine scale linear systems on each iteration. We employ a recently developed modification of this method that precomputes a multiscale flux basis to avoid the need for subdomain solves on each iteration. In the stochastic setting, the basis is further reused over multiple realizations, leading to collocation algorithms that are more efficient than the traditional implementation by orders of magnitude. Error analysis and numerical experiments are presented.
Numerical Studies of Three-dimensional Stochastic Darcy's Equation and Stochastic Advection-Diffusion-Dispersion Equation Solute transport in randomly heterogeneous porous media is commonly described by stochastic flow and advection-dispersion equations with a random hydraulic conductivity field. The statistical distribution of conductivity of engineered and naturally occurring porous material can vary, depending on its origin. We describe solutions of a three-dimensional stochastic advection-dispersion equation using a probabilistic collocation method (PCM) on sparse grids for several distributions of hydraulic conductivity. Three random distributions of log hydraulic conductivity are considered: uniform, Gaussian, and truncated Gaussian (beta). Log hydraulic conductivity is represented by a Karhunen-Loève (K-L) decomposition as a second-order random process with an exponential covariance function. The convergence of PCM has been demonstrated. It appears that the accuracy in both the mean and the standard deviation of PCM solutions can be improved by using the Jacobi-chaos representing the truncated Gaussian distribution rather than the Hermite-chaos for the Gaussian distribution. The effect of type of distribution and parameters such as the variance and correlation length of log hydraulic conductivity and dispersion coefficient on leading moments of the advection velocity and solute concentration was investigated.
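To make the collocation idea concrete, here is a one-dimensional sketch: the mean of a quantity of interest that depends on a single Gaussian random input is computed by evaluating a deterministic "solver" at Gauss-Hermite nodes and weighting the results. The placeholder solve_at function and the single random dimension are simplifying assumptions; the paper works with sparse grids over many Karhunen-Loève dimensions.

```python
import numpy as np

# 7-point Gauss-Hermite rule for the weight exp(-t^2).
nodes, weights = np.polynomial.hermite.hermgauss(7)

def solve_at(xi):
    # Stand-in for a deterministic flow solve with log-conductivity xi ~ N(0, 1);
    # here just an analytic placeholder quantity of interest.
    return 1.0 / np.exp(xi)

# Change of variables xi = sqrt(2) * t maps the standard normal density to exp(-t^2)/sqrt(pi).
vals = np.array([solve_at(np.sqrt(2.0) * t) for t in nodes])
mean_u = (weights * vals).sum() / np.sqrt(np.pi)
print(mean_u, np.exp(0.5))   # exact reference: E[exp(-xi)] = e^{1/2} for xi ~ N(0, 1)
```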
To Be or Not to Be Intrusive? The Solution of Parametric and Stochastic Equations - the "Plain Vanilla" Galerkin Case. In parametric equations-stochastic equations are a special case-one may want to approximate the solution such that it is easy to evaluate its dependence on the parameters. Interpolation in the parameters is an obvious possibility-in this context often labeled as a collocation method. In the frequent situation where one has a "solver" for a given fixed parameter value, this may be used "nonintrusively" as a black-box component to compute the solution at all the interpolation points independently of each other. By extension, all other methods, and especially simple Galerkin methods, which produce some kind of coupled system, are often classed as "intrusive." We show how, for such "plain vanilla" Galerkin formulations, one may solve the coupled system in a nonintrusive way, and even the simplest form of block-solver has comparable efficiency. This opens at least two avenues for possible speed-up: first, to benefit from the coupling in the iteration by using more sophisticated block-solvers and, second, the possibility of nonintrusive successive rank-one updates as in the proper generalized decomposition (PGD).
Multi-Element Generalized Polynomial Chaos for Arbitrary Probability Measures We develop a multi-element generalized polynomial chaos (ME-gPC) method for arbitrary probability measures and apply it to solve ordinary and partial differential equations with stochastic inputs. Given a stochastic input with an arbitrary probability measure, its random space is decomposed into smaller elements. Subsequently, in each element a new random variable with respect to a conditional probability density function (PDF) is defined, and a set of orthogonal polynomials in terms of this random variable is constructed numerically. Then, the generalized polynomial chaos (gPC) method is implemented element-by-element. Numerical experiments show that the cost for the construction of orthogonal polynomials is negligible compared to the total time cost. Efficiency and convergence of ME-gPC are studied numerically by considering some commonly used random variables. ME-gPC provides an efficient and flexible approach to solving differential equations with random inputs, especially for problems related to long-term integration, large perturbation, and stochastic discontinuities.
Variations, margins, and statistics Design margining is used to account for design uncertainties in the measurement of performance, and thereby ensures that actual manufactured parts will operate within predicted bounds. As process and environmental variations become increasingly severe and complex in nanometer process technology, design margining overheads have increased correspondingly. This paper describes the types of process and environmental variations, their impact on performance, and the traditional design margining process used to account for these uncertainties. We consider statistical timing (SSTA) in the context of its ability to reduce timing margins through more accurate modeling of variations, and quantify potential benefits of SSTA for setup and hold time margin reduction. Combining SSTA with complementary techniques for systematic variation-aware and voltage-variation-aware timing provides meaningful design margin reduction. We introduce the concept of activity based operating condition as a supporting construct for variation-aware STA flows
NIST Net: a Linux-based network emulation tool Testing of network protocols and distributed applications has become increasingly complex, as the diversity of networks and underlying technologies increases, and the adaptive behavior of applications becomes more sophisticated. In this paper, we present NIST Net, a tool to facilitate testing and experimentation with network code through emulation. NIST Net enables experimenters to model and effect arbitrary performance dynamics (packet delay, jitter, bandwidth limitations, congestion, packet loss and duplication) on live IP packets passing through a commodity Linux-based PC router. We describe the emulation capabilities of NIST Net; examine its architecture; and discuss some of the implementation challenges encountered in building such a tool to operate at very high network data rates while imposing minimal processing overhead. Calibration results are provided to quantify the fidelity and performance of NIST Net over a wide range of offered loads (up to 1 Gbps), and a diverse set of emulated performance dynamics.
Compressive wireless sensing Compressive Sampling is an emerging theory that is based on the fact that a relatively small number of random projections of a signal can contain most of its salient information. In this paper, we introduce the concept of Compressive Wireless Sensing for sensor networks in which a fusion center retrieves signal field information from an ensemble of spatially distributed sensor nodes. Energy and bandwidth are scarce resources in sensor networks and the relevant metrics of interest in our context are 1) the latency involved in information retrieval; and 2) the associated power-distortion trade-off. It is generally recognized that given sufficient prior knowledge about the sensed data (e.g., statistical characterization, homogeneity, etc.), there exist schemes that have very favorable power-distortion-latency trade-offs. We propose a distributed matched source-channel communication scheme, based in part on recent results in compressive sampling theory, for estimation of sensed data at the fusion center and analyze, as a function of the number of sensor nodes, the trade-offs between power, distortion, and latency. Compressive wireless sensing is a universal scheme in the sense that it requires no prior knowledge about the sensed data. This universality, however, comes at the cost of optimality (in terms of a less favorable power-distortion-latency trade-off) and we quantify this cost relative to the case when sufficient prior information about the sensed data is assumed.
Analysis of the domain mapping method for elliptic diffusion problems on random domains. In this article, we provide a rigorous analysis of the solution to elliptic diffusion problems on random domains. In particular, based on the decay of the Karhunen-Loève expansion of the domain perturbation field, we establish decay rates for the derivatives of the random solution that are independent of the stochastic dimension. For the implementation of a related approximation scheme, like quasi-Monte Carlo quadrature, stochastic collocation, etc., we propose parametric finite elements to compute the solution of the diffusion problem on each individual realization of the domain generated by the perturbation field. This simplifies the implementation and yields a non-intrusive approach. Having this machinery at hand, we can easily transfer it to stochastic interface problems. The theoretical findings are complemented by numerical examples for both stochastic interface problems and boundary value problems on random domains.
Coding Algorithms for 3DTV—A Survey Research efforts on 3DTV technology have been strengthened worldwide recently, covering the whole media processing chain from capture to display. Different 3DTV systems rely on different 3D scene representations that integrate various types of data. Efficient coding of these data is crucial for the success of 3DTV. Compression of pixel-type data including stereo video, multiview video, and associated depth or disparity maps extends available principles of classical video coding. Powerful algorithms and open international standards for multiview video coding and coding of video plus depth data are available and under development, which will provide the basis for introduction of various 3DTV systems and services in the near future. Compression of 3D mesh models has also reached a high level of maturity. For static geometry, a variety of powerful algorithms are available to efficiently compress vertices and connectivity. Compression of dynamic 3D geometry is currently a more active field of research. Temporal prediction is an important mechanism to remove redundancy from animated 3D mesh sequences. Error resilience is important for transmission of data over error prone channels, and multiple description coding (MDC) is a suitable way to protect data. MDC of still images and 2D video has already been widely studied, whereas multiview video and 3D meshes have been addressed only recently. Intellectual property protection of 3D data by watermarking is a pioneering research area as well. The 3D watermarking methods in the literature are classified into three groups, considering the dimensions of the main components of scene representations and the resulting components after applying the algorithm. In general, 3DTV coding technology is maturing. Systems and services may enter the market in the near future. However, the research area is relatively young compared to coding of other types of media. Therefore, there is still a lot of room for improvement and new development of algorithms.
Asymptotic Analysis of MAP Estimation via the Replica Method and Applications to Compressed Sensing The replica method is a nonrigorous but well-known technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method, under the assumption of replica symmetry, to study estimators that are maximum a posteriori (MAP) under a postulated prior distribution. It is shown that with random linear measurements and Gaussian noise, the replica-symmetric prediction of the asymptotic behavior of the postulated MAP estimate of an $n$-dimensional vector “decouples” as $n$ scalar postulated MAP estimators. The result is based on applying a hardening argument to the replica analysis of postulated posterior mean estimators of Tanaka and of Guo and Verdú. The replica-symmetric postulated MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, least absolute shrinkage and selection operator (LASSO), linear estimation with thresholding, and zero norm-regularized estimation. In the case of LASSO estimation, the scalar estimator reduces to a soft-thresholding operator, and for zero norm-regularized estimation, it reduces to a hard threshold. Among other benefits, the replica method provides a computationally tractable method for precisely predicting various performance metrics including mean-squared error and sparsity pattern recovery probability.
An overview of quality of experience measurement challenges for video applications in IP networks The increase in multimedia content on the Internet has created a renewed interest in quality assessment. There is, however, a main difference from the traditional quality assessment approaches: the focus now lies on the user-perceived quality, as opposed to the network-centered approach classically proposed. In this paper we overview the most relevant challenges to perform Quality of Experience (QoE) assessment in IP networks and highlight the particular considerations necessary when compared to alternative mechanisms already deployed, such as Quality of Service (QoS). To assist in the handling of such challenges we first discuss the different approaches to Quality of Experience assessment along with the most relevant QoE metrics, and then we discuss how they are used to provide objective results about user satisfaction.
Process variability-aware transient fault modeling and analysis Due to reduction in device feature size and supply voltage, the sensitivity of digital systems to transient faults is increasing dramatically. As technology scales further, the increase in transistor integration capacity also leads to the increase in process and environmental variations. Despite these difficulties, it is expected that systems remain reliable while delivering the required performance. Reliability and variability are emerging as new design challenges, thus pointing to the importance of modeling and analysis of transient faults and variation sources for the purpose of guiding the design process. This work presents a symbolic approach to modeling the effect of transient faults in digital circuits in the presence of variability due to process manufacturing. The results show that using a nominal case and not including variability effects can underestimate the SER by 5% for the 50% yield point and by 10% for the 90% yield point.
Fuzzy optimization of units products in mix-product selection problem using fuzzy linear programming approach In this paper, the modified S-curve membership function methodology is used in a real-life industrial problem of mix-product selection. This problem occurs in production planning management, where a decision maker plays an important role in making decisions in an uncertain environment. As analysts, we try to find a solution that is good enough for the decision maker to make a final decision. An industrial application of fuzzy linear programming (FLP) through the S-curve membership function has been investigated using a set of real-life data collected from a Chocolate Manufacturing Company. The problem of fuzzy product-mix selection has been defined. The objective of this paper is to find the optimal number of units of products with a higher level of satisfaction, with vagueness as a key factor. Since several decisions were to be taken, a table of optimal units of products with respect to vagueness and degree of satisfaction has been defined to identify the solution with a higher level of units of products and a higher degree of satisfaction. The fuzzy outcome shows that higher units of products need not lead to a higher degree of satisfaction. The findings of this work indicate that the optimal decision depends on the vagueness factor in the fuzzy system of the mix-product selection problem. Furthermore, a high level of units of products is obtained when the vagueness is low.
1.2
0.1
0.066667
0.066667
0.002667
0
0
0
0
0
0
0
0
0
An analysis of polynomial chaos approximations for modeling single-fluid-phase flow in porous medium systems We examine a variety of polynomial-chaos-motivated approximations to a stochastic form of a steady state groundwater flow model. We consider approaches for truncating the infinite dimensional problem and producing decoupled systems. We discuss conditions under which such decoupling is possible and show that to generalize the known decoupling by numerical cubature, it would be necessary to find new multivariate cubature rules. Finally, we use the acceleration of Monte Carlo to compare the quality of polynomial models obtained for all approaches and find that in general the methods considered are more efficient than Monte Carlo for the relatively small domains considered in this work. A curse of dimensionality in the series expansion of the log-normal stochastic random field used to represent hydraulic conductivity provides a significant impediment to efficient approximations for large domains for all methods considered in this work, other than the Monte Carlo method.
Uncertainty quantification and apportionment in air quality models using the polynomial chaos method Current air quality models generate deterministic forecasts by assuming a perfect model, perfectly known parameters, and exact input data. However, our knowledge of the physics is imperfect. It is of interest to extend the deterministic simulation results with "error bars" that quantify the degree of uncertainty, and to analyze the impact of uncertain inputs on the simulation results. This added information provides a confidence level for the forecast results. The Monte Carlo (MC) method is a popular approach for air quality model uncertainty analysis, but it converges slowly. This work discusses the polynomial chaos (PC) method, which is more suitable for uncertainty quantification (UQ) in large-scale models. We propose a new approach for uncertainty apportionment (UA), i.e., we develop a PC approach to attribute the uncertainties in model results to different uncertain inputs. The UQ and UA techniques are implemented in the Sulfur Transport Eulerian Model (STEM-III). A typical scenario of air pollution in the northeast region of the USA is considered. The UQ and UA results allow us to assess the combined effects of different input uncertainties on the forecast uncertainty. They also enable us to quantify the contribution of input uncertainties to the uncertainty in the predicted ozone and PAN concentrations.
Uncertainty Quantification and Weak Approximation of an Elliptic Inverse Problem We consider the inverse problem of determining the permeability from the pressure in a Darcy model of flow in a porous medium. Mathematically the problem is to find the diffusion coefficient for a linear uniformly elliptic partial differential equation in divergence form, in a bounded domain in dimension $d \le 3$, from measurements of the solution in the interior. We adopt a Bayesian approach to the problem. We place a prior random field measure on the log permeability, specified through the Karhunen-Loève expansion of its draws. We consider Gaussian measures constructed this way, and study the regularity of functions drawn from them. We also study the Lipschitz properties of the observation operator mapping the log permeability to the observations. Combining these regularity and continuity estimates, we show that the posterior measure is well defined on a suitable Banach space. Furthermore the posterior measure is shown to be Lipschitz with respect to the data in the Hellinger metric, giving rise to a form of well posedness of the inverse problem. Determining the posterior measure, given the data, solves the problem of uncertainty quantification for this inverse problem. In practice the posterior measure must be approximated in a finite dimensional space. We quantify the errors incurred by employing a truncated Karhunen-Loève expansion to represent this measure. In particular we study weak convergence of a general class of locally Lipschitz functions of the log permeability, and apply this general theory to estimate errors in the posterior mean of the pressure and the pressure covariance, under refinement of the finite-dimensional Karhunen-Loève truncation.
Probabilistic models for stochastic elliptic partial differential equations Mathematical requirements that the random coefficients of stochastic elliptic partial differential equations must satisfy such that they have unique solutions have been studied extensively. Yet, additional constraints that these coefficients must satisfy to provide realistic representations for physical quantities, referred to as physical requirements, have not been examined systematically. It is shown that current models for random coefficients constructed solely by mathematical considerations can violate physical constraints and, consequently, be of limited practical use. We develop alternative models for the random coefficients of stochastic differential equations that satisfy both mathematical and physical constraints. Theoretical arguments are presented to show potential limitations of current models and establish properties of the models developed in this study. Numerical examples are used to illustrate the construction of the proposed models, assess the performance of these models, and demonstrate the sensitivity of the solutions of stochastic differential equations to probabilistic characteristics of their random coefficients.
A Kronecker Product Preconditioner for Stochastic Galerkin Finite Element Discretizations The discretization of linear partial differential equations with random data by means of the stochastic Galerkin finite element method results in general in a large coupled linear system of equations. Using the stochastic diffusion equation as a model problem, we introduce and study a symmetric positive definite Kronecker product preconditioner for the Galerkin matrix. We compare the popular mean-based preconditioner with the proposed preconditioner which—in contrast to the mean-based construction—makes use of the entire information contained in the Galerkin matrix. We report on results of test problems, where the random diffusion coefficient is given in terms of a truncated Karhunen-Loève expansion or is a lognormal random field.
Numerical Challenges in the Use of Polynomial Chaos Representations for Stochastic Processes This paper gives an overview of the use of polynomial chaos (PC) expansions to represent stochastic processes in numerical simulations. Several methods are presented for performing arithmetic on, as well as for evaluating polynomial and nonpolynomial functions of variables represented by PC expansions. These methods include Taylor series, a newly developed integration method, as well as a sampling-based spectral projection method for nonpolynomial function evaluations. A detailed analysis of the accuracy of the PC representations, and of the different methods for nonpolynomial function evaluations, is performed. It is found that the integration method offers a robust and accurate approach for evaluating nonpolynomial functions, even when very high-order information is present in the PC expansions.
High-Order Collocation Methods for Differential Equations with Random Inputs Recently there has been a growing interest in designing efficient methods for the solution of ordinary/partial differential equations with random inputs. To this end, stochastic Galerkin methods appear to be superior to other nonsampling methods and, in many cases, to several sampling methods. However, when the governing equations take complicated forms, numerical implementations of stochastic Galerkin methods can become nontrivial and care is needed to design robust and efficient solvers for the resulting equations. On the other hand, the traditional sampling methods, e.g., Monte Carlo methods, are straightforward to implement, but they do not offer convergence as fast as stochastic Galerkin methods. In this paper, a high-order stochastic collocation approach is proposed. Similar to stochastic Galerkin methods, the collocation methods take advantage of an assumption of smoothness of the solution in random space to achieve fast convergence. However, the numerical implementation of stochastic collocation is trivial, as it requires only repetitive runs of an existing deterministic solver, similar to Monte Carlo methods. The computational cost of the collocation methods depends on the choice of the collocation points, and we present several feasible constructions. One particular choice, based on sparse grids, depends weakly on the dimensionality of the random space and is more suitable for highly accurate computations of practical applications with large dimensional random inputs. Numerical examples are presented to demonstrate the accuracy and efficiency of the stochastic collocation methods.
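A tiny sketch of the non-intrusive collocation idea described above: a deterministic solver is run repeatedly at Gauss-Hermite collocation points and the outputs are combined with quadrature weights to estimate moments. The toy ODE, its parameters, and the quadrature order are assumptions for illustration only.

```python
# Sketch of non-intrusive stochastic collocation for an ODE with one Gaussian
# random input: du/dt = -k(xi) * u, with k(xi) = exp(0.1 * xi), xi ~ N(0, 1).
# The model, parameters, and quadrature order are illustrative assumptions.
import numpy as np

def deterministic_solver(k, u0=1.0, t_end=1.0, nsteps=200):
    """Plain forward-Euler 'black box' solver, run once per collocation point."""
    u, dt = u0, t_end / nsteps
    for _ in range(nsteps):
        u = u - dt * k * u
    return u

# Gauss-Hermite nodes/weights for weight exp(-x^2); rescale for N(0, 1).
nodes, weights = np.polynomial.hermite.hermgauss(7)
xi = np.sqrt(2.0) * nodes
w = weights / np.sqrt(np.pi)

samples = np.array([deterministic_solver(np.exp(0.1 * z)) for z in xi])
mean = np.sum(w * samples)
var = np.sum(w * (samples - mean) ** 2)
print(f"mean ~ {mean:.4f}, std ~ {np.sqrt(var):.4f}")
```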
Numerical Studies of Three-dimensional Stochastic Darcy's Equation and Stochastic Advection-Diffusion-Dispersion Equation Solute transport in randomly heterogeneous porous media is commonly described by stochastic flow and advection-dispersion equations with a random hydraulic conductivity field. The statistical distribution of conductivity of engineered and naturally occurring porous material can vary, depending on its origin. We describe solutions of a three-dimensional stochastic advection-dispersion equation using a probabilistic collocation method (PCM) on sparse grids for several distributions of hydraulic conductivity. Three random distributions of log hydraulic conductivity are considered: uniform, Gaussian, and truncated Gaussian (beta). Log hydraulic conductivity is represented by a Karhunen-Loève (K-L) decomposition as a second-order random process with an exponential covariance function. The convergence of PCM has been demonstrated. It appears that the accuracy in both the mean and the standard deviation of PCM solutions can be improved by using the Jacobi-chaos representing the truncated Gaussian distribution rather than the Hermite-chaos for the Gaussian distribution. The effect of type of distribution and parameters such as the variance and correlation length of log hydraulic conductivity and dispersion coefficient on leading moments of the advection velocity and solute concentration was investigated.
An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems. We propose a new fast algorithm for solving one of the standard approaches to ill-posed linear inverse problems (IPLIP), where a (possibly nonsmooth) regularizer is minimized under the constraint that the solution explains the observations sufficiently well. Although the regularizer and constraint are usually convex, several particular features of these problems (huge dimensionality, nonsmoothness) preclude the use of off-the-shelf optimization tools and have stimulated a considerable amount of research. In this paper, we propose a new efficient algorithm to handle one class of constrained problems (often known as basis pursuit denoising) tailored to image recovery applications. The proposed algorithm, which belongs to the family of augmented Lagrangian methods, can be used to deal with a variety of imaging IPLIP, including deconvolution and reconstruction from compressive observations (such as MRI), using either total-variation or wavelet-based (or, more generally, frame-based) regularization. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence sufficient conditions are known; we show that these conditions are satisfied by the proposed algorithm. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is a strong contender for the state-of-the-art.
Linearized Bregman iterations for compressed sensing Finding a solution of a linear equation Au = f with various minimization properties arises from many applications. One such application is compressed sensing, where an efficient and robust-to-noise algorithm to find a minimal l1-norm solution is needed. This means that the algorithm should be tailored for large-scale and completely dense matrices A, while Au and A^T u can be computed by fast transforms and the solution we seek is sparse. Recently, a simple and fast algorithm based on linearized Bregman iteration was proposed in [28, 32] for this purpose. This paper analyzes the convergence of linearized Bregman iterations and the minimization properties of their limit. Based on our analysis here, we also derive a new algorithm that is proven to be convergent with a rate. Furthermore, the new algorithm is simple and fast in approximating a minimal l1-norm solution of Au = f, as shown by numerical simulations. Hence, it can be used as another choice of an efficient tool in compressed sensing.
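For orientation, a sketch of the linearized Bregman iteration in its commonly cited form (a gradient step on the residual followed by soft shrinkage) is given below. The step size, the shrinkage threshold, and the random test problem are assumptions; this is not necessarily the exact variant or the accelerated algorithm analyzed in the paper.

```python
# Sketch of linearized Bregman iteration for a sparse solution of A u = f.
# Step size and threshold choices are assumed heuristics for illustration.
import numpy as np

def soft_shrink(v, mu):
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, f, n_iter=3000):
    m, n = A.shape
    delta = 1.0 / np.linalg.norm(A, 2) ** 2       # conservative step size
    mu = 1.0 / delta                              # shrinkage threshold (assumed heuristic)
    u = np.zeros(n)
    v = np.zeros(n)
    for _ in range(n_iter):
        v = v + A.T @ (f - A @ u)                 # accumulate residual correlations
        u = delta * soft_shrink(v, mu)            # keep only significant entries
    return u

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200)
x_true[[5, 50, 123]] = [1.5, -2.0, 0.8]
u_hat = linearized_bregman(A, A @ x_true)
print(np.sort(np.argsort(np.abs(u_hat))[-3:]))    # indices of the largest entries
```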
Multiple description coding: compression meets the network This article focuses on the compressed representations of pictures. The representation does not affect how many bits get from the Web server to the laptop, but it determines the usefulness of the bits that arrive. Many different representations are possible, and there is more involved in their choice than merely selecting a compression ratio. The techniques presented represent a single information...
Stanford Peer-to-Peer Multicast (SPPM) - Overview and recent extensions We review the Stanford peer-to-peer multicast (SPPM) protocol for live video streaming and report recent extensions. SPPM has been designed for low latency and robust transmission of live media by organizing peers within multiple complementary trees. The recent extensions to live streaming are time-shifted streaming, interactive region-of-interest (IRoI) streaming, and streaming to mobile devices. With time-shifting, users can choose an arbitrary beginning point for watching a stream, whereas IRoI streaming allows users to select an arbitrary region to watch within a high-spatial-resolution scene. We extend the live streaming to mobile devices by addressing challenges due to heterogeneous displays, connection speeds, and decoding capabilities.
Construction of interval-valued fuzzy preference relations from ignorance functions and fuzzy preference relations. Application to decision making This paper presents a method to construct an interval-valued fuzzy set from a fuzzy set and the representation of the lack of knowledge or ignorance that experts are subject to when they define the membership values of the elements to that fuzzy set. With this construction method, it is proved that membership intervals of equal length to the ignorance associated to the elements are obtained when the product t-norm and the probabilistic sum t-conorm are used. The construction method is applied to build interval-valued fuzzy preference relations (IVFRs) from given fuzzy preference relations (FRs). Afterwards, a general algorithm to solve decision making problems using IVFRs is proposed. The decision making algorithm implements different selection processes of alternatives where the order used to choose alternatives is a key factor. For this reason, different admissible orders between intervals are analysed. Finally, OWA operators with interval weights are analysed and a method to obtain those weights from real-valued weights is proposed.
Stochastic approximation learning for mixtures of multivariate elliptical distributions Most of the current approaches to mixture modeling consider mixture components from a few families of probability distributions, in particular from the Gaussian family. The reasons of these preferences can be traced to their training algorithms, typically versions of the Expectation-Maximization (EM) method. The re-estimation equations needed by this method become very complex as the mixture components depart from the simplest cases. Here we propose to use a stochastic approximation method for probabilistic mixture learning. Under this method it is straightforward to train mixtures composed by a wide range of mixture components from different families. Hence, it is a flexible alternative for mixture learning. Experimental results are presented to show the probability density and missing value estimation capabilities of our proposal.
1.05901
0.048809
0.03297
0.013802
0.004847
0.000787
0.000202
0.000079
0.00001
0
0
0
0
0
A granular extension of the fuzzy-ARTMAP (FAM) neural classifier based on fuzzy lattice reasoning (FLR) The fuzzy lattice reasoning (FLR) classifier was introduced lately as an advantageous enhancement of the fuzzy-ARTMAP (FAM) neural classifier in the Euclidean space R^N. This work extends FLR to space F^N, where F is the granular data domain of fuzzy interval numbers (FINs) including (fuzzy) numbers, intervals, and cumulative distribution functions. Based on a fundamentally improved mathematical notation this work proposes novel techniques for dealing, rigorously, with imprecision in practice. We demonstrate a favorable comparison of our proposed techniques with alternative techniques from the literature in an industrial prediction application involving digital images represented by histograms. Additional advantages of our techniques include a capacity to represent statistics of all orders by a FIN, an introduction of tunable (sigmoid) nonlinearities, a capacity for effective data processing without any data normalization, an induction of descriptive decision-making knowledge (rules) from the training data, and the potential for input variable selection.
Piecewise-linear approximation of non-linear models based on probabilistically/possibilistically interpreted intervals' numbers (INs) Linear models are preferable due to their simplicity. Nevertheless, non-linear models often emerge in practice. A popular approach for modeling nonlinearities is piecewise-linear approximation. Inspired by fuzzy inference systems (FISs) of Takagi-Sugeno-Kang (TSK) type as well as by Kohonen's self-organizing map (KSOM), this work introduces a genetically optimized synergy based on intervals' numbers, or INs for short. The latter (INs) are interpreted here either probabilistically or possibilistically. The employment of mathematical lattice theory is instrumental. Advantages include accommodation of granular data, introduction of tunable nonlinearities, and induction of descriptive decision-making knowledge (rules) from the data. Both efficiency and effectiveness are demonstrated in three benchmark problems. The proposed computational method demonstrates invariably a better capacity for generalization; moreover, it learns orders of magnitude faster than alternative methods while inducing clearly fewer rules.
Fuzzy inference based on families of α-level sets A fuzzy-inference method in which fuzzy sets are defined by the families of their α-level sets, based on the resolution identity theorem, is proposed. It has the following advantages over conventional methods: (1) it studies the characteristics of fuzzy inference, in particular the input-output relations of fuzzy inference; (2) it provides fast inference operations and requires less memory capacity; (3) it easily interfaces with two-valued logic; and (4) it effectively matches with systems that include fuzzy-set operations based on the extension principle. Fuzzy sets defined by the families of their α-level sets are compared with those defined by membership functions in terms of processing time and required memory capacity in fuzzy logic operations. The fuzzy inference method is then derived, and important propositions of fuzzy-inference operations are proved. Some examples of inference by the proposed method are presented, and fuzzy-inference characteristics and computational efficiency for α-level-set-based fuzzy inference are considered
On similarity and inclusion measures between type-2 fuzzy sets with an application to clustering In this paper we define similarity and inclusion measures between type-2 fuzzy sets. We then discuss their properties and also consider the relationships between them. Several examples are used to present the calculation of these similarity and inclusion measures between type-2 fuzzy sets. We finally combine the proposed similarity measures with Yang and Shih's [M.S. Yang, H.M. Shih, Cluster analysis based on fuzzy relations, Fuzzy Sets and Systems 120 (2001) 197-212] algorithm as a clustering method for type-2 fuzzy data. These clustering results are compared with Hung and Yang's [W.L. Hung, M.S. Yang, Similarity measures between type-2 fuzzy sets, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 12 (2004) 827-841] results. According to different α-levels, these clustering results constitute a better hierarchical tree.
Advances and challenges in interval-valued fuzzy logic Among the various extensions to the common [0,1]-valued truth degrees of "traditional" fuzzy set theory, closed intervals of [0,1] stand out as a particularly appealing and promising choice for representing imperfect information, nicely accommodating and combining the facets of vagueness and uncertainty without paying too much in terms of computational complexity. From a logical point of view, due to the failure of the omnipresent prelinearity condition, the underlying algebraic structure L^I falls outside the mainstream of the research on formal fuzzy logics (including MV-, BL- and MTL-algebras), and consequently so far has received only marginal attention. This comparative lack of interest for interval-valued fuzzy logic has been further strengthened, perhaps, by taking for granted that its algebraic operations amount to a twofold application of corresponding operations on the unit interval. Abandoning that simplifying assumption, however, we may find that L^I reveals itself as a very rich and noteworthy structure allowing the construction of complex and surprisingly well-behaved logical systems. Reviewing the main advances on the algebraic characterization of logical operations on L^I, and relating these results to the familiar completeness questions (which remain as major challenges) for the associated formal fuzzy logics, this paper paves the way for a systematic study of interval-valued fuzzy logic in the narrow sense.
Type-2 Fuzzy Logic Controller Design for Buck DC-DC Converters. Type-1 fuzzy logic controllers (T1FLCs) have been successfully developed and used in various applications. The experience and knowledge of human experts are needed to decide both the membership functions and the fuzzy rules. However, in the real-time applications, uncertainty associated with the available information always happens. This paper proposes a type-2 fuzzy logic control (T2FLC), which involves the fuzzifier, rule base, fuzzy inference engine, and output processor with type reduction and defuzzifier. Because the antecedent and/or consequent membership functions of the T2FLC are type-2 fuzzy sets, the T2FLC can handle rule uncertainties when the operation is extremely uncertain and/or the engineers cannot exactly determine the membership grades. Furthermore, the proposed T2FLC is applied to a buck DC-DC converter control. Experimental results show that the proposed T2FLC is robust against input voltage and load resistance variations for the converter control.
MPEG VBR video traffic modeling and classification using fuzzy technique We present an approach for MPEG variable bit rate (VBR) video modeling and classification using fuzzy techniques. We demonstrate that a type-2 fuzzy membership function, i.e., a Gaussian MF with uncertain variance, is most appropriate to model the log-value of I/P/B frame sizes in MPEG VBR video. The fuzzy c-means (FCM) method is used to obtain the mean and standard deviation (std) of I/P/B frame sizes when the frame category is unknown. We propose to use type-2 fuzzy logic classifiers (FLCs) to classify video traffic using compressed data. Five fuzzy classifiers and a Bayesian classifier are designed for video traffic classification, and the fuzzy classifiers are compared against the Bayesian classifier. Simulation results show that a type-2 fuzzy classifier in which the input is modeled as a type-2 fuzzy set and antecedent membership functions are modeled as type-2 fuzzy sets performs the best of the five classifiers when the testing video product is not included in the training products and a steepest descent algorithm is used to tune its parameters.
Detecting Faces in Images: A Survey Images containing faces are essential to intelligent vision-based human computer interaction, and research efforts in face processing include face recognition, face tracking, pose estimation, and expression recognition. However, many reported methods assume that the faces in an image or an image sequence have been identified and localized. To build fully automated systems that analyze the information contained in face images, robust and efficient face detection algorithms are required. Given a single image, the goal of face detection is to identify all image regions which contain a face regardless of its three-dimensional position, orientation, and lighting conditions. Such a problem is challenging because faces are nonrigid and have a high degree of variability in size, shape, color, and texture. Numerous techniques have been developed to detect faces in a single image, and the purpose of this paper is to categorize and evaluate these algorithms. We also discuss relevant issues such as data collection, evaluation metrics, and benchmarking. After analyzing these algorithms and identifying their limitations, we conclude with several promising directions for future research.
The optimality conditions for optimization problems with convex constraints and multiple fuzzy-valued objective functions The optimality conditions for multiobjective programming problems with fuzzy-valued objective functions are derived in this paper. The solution concepts for these kinds of problems will follow the concept of nondominated solution adopted in the multiobjective programming problems. In order to consider the differentiation of fuzzy-valued functions, we invoke the Hausdorff metric to define the distance between two fuzzy numbers and the Hukuhara difference to define the difference of two fuzzy numbers. Under these settings, the optimality conditions for obtaining the (strongly, weakly) Pareto optimal solutions are elicited naturally by introducing the Lagrange multipliers.
Unified full implication algorithms of fuzzy reasoning This paper discusses the full implication inference of fuzzy reasoning. For all residuated implications induced by left continuous t-norms, unified α-triple I algorithms are constructed to generalize the known results. As corollaries of the main results of this paper, some special algorithms can be easily derived based on four important residuated implications. These algorithms would be beneficial to applications of fuzzy reasoning. Based on properties of residuated implications, the proofs of many conclusions are greatly simplified.
Compressive sensing for sparsely excited speech signals Compressive sensing (CS) has been proposed for signals with sparsity in a linear transform domain. We explore a signal-dependent unknown linear transform, namely the impulse response matrix operating on a sparse excitation, as in the linear model of speech production, for recovering compressively sensed speech. Since the linear transform is signal dependent and unknown, unlike the standard CS formulation, a codebook of transfer functions is proposed in a matching pursuit (MP) framework for CS recovery. It is found that MP is efficient and effective in recovering CS-encoded speech as well as jointly estimating the linear model. A moderate number of CS measurements and a low-order sparsity estimate will result in MP converging to the same linear transform as direct VQ of the LP vector derived from the original signal. There is also high positive correlation between signal domain approximation and CS measurement domain approximation for a large variety of speech spectra.
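The matching pursuit step referred to above can be sketched generically as below: greedily pick the dictionary atom most correlated with the residual and subtract its contribution. The random dictionary and sparsity level are illustrative assumptions; the paper instead uses a codebook of speech transfer functions as the dictionary.

```python
# Sketch of plain matching pursuit over a generic dictionary D with unit-norm
# columns.  The random dictionary and the test signal are illustrative assumptions.
import numpy as np

def matching_pursuit(D, y, n_atoms):
    """Greedily select n_atoms columns of D to approximate y."""
    residual = y.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        correlations = D.T @ residual
        k = np.argmax(np.abs(correlations))        # best-matching atom
        coeffs[k] += correlations[k]               # atoms assumed unit norm
        residual = residual - correlations[k] * D[:, k]
    return coeffs, residual

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                     # normalize atoms
y = 2.0 * D[:, 10] - 1.0 * D[:, 200]
c, r = matching_pursuit(D, y, n_atoms=4)
print(np.flatnonzero(np.abs(c) > 1e-6), np.linalg.norm(r))
```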
Accurate and efficient gate-level parametric yield estimation considering correlated variations in leakage power and performance Increasing levels of process variation in current technologies have a major impact on power and performance, and result in parametric yield loss. In this work we develop an efficient gate-level approach to accurately estimate the parametric yield defined by leakage power and delay constraints, by finding the joint probability distribution function (jpdf) for delay and leakage power. We consider inter-die variations as well as intra-die variations with correlated and random components. The correlation between power and performance arises due to their dependence on common process parameters and is shown to have a significant impact on yield in high-frequency bins. We also propose a method to estimate parametric yield given the power/delay jpdf that is much faster than numerical integration with good accuracy. The proposed approach is implemented and compared with Monte Carlo simulations and shows high accuracy, with the yield estimates achieving an average error of 2%.
Fuzzy modeling of system behavior for risk and reliability analysis The main objective of the article is to permit reliability analysts/engineers/managers/practitioners to analyze the failure behavior of a system in a more consistent and logical manner. To this effect, the authors propose a methodological and structured framework which makes use of both qualitative and quantitative techniques for risk and reliability analysis of the system. The framework has been applied to model and analyze a complex industrial system from a paper mill. In the quantitative framework, after developing the Petri net model of the system, the fuzzy synthesis of failure and repair data (using fuzzy arithmetic operations) has been done. Various system parameters of managerial importance such as repair time, failure rate, mean time between failures, availability, and expected number of failures are computed to quantify the behavior in terms of fuzzy, crisp, and defuzzified values. Further, to improve the reliability and maintainability characteristics of the system, an in-depth qualitative analysis of the system is carried out using failure mode and effect analysis (FMEA) by listing all possible failure modes, their causes, and their effects on system performance. To address the limitations of the traditional FMEA method based on the risk priority number score, a risk-ranking approach based on fuzzy and grey relational analysis is proposed to prioritize failure causes.
Fuzzy optimization of units products in mix-product selection problem using fuzzy linear programming approach In this paper, the modified S-curve membership function methodology is used in a real-life industrial problem of mix-product selection. This problem occurs in production planning management, where a decision maker plays an important role in making decisions in an uncertain environment. As analysts, we try to find a solution that is good enough for the decision maker to make a final decision. An industrial application of fuzzy linear programming (FLP) through the S-curve membership function has been investigated using a set of real-life data collected from a Chocolate Manufacturing Company. The problem of fuzzy product-mix selection has been defined. The objective of this paper is to find the optimal number of units of products with a higher level of satisfaction, with vagueness as a key factor. Since several decisions were to be taken, a table of optimal units of products with respect to vagueness and degree of satisfaction has been defined to identify the solution with a higher level of units of products and a higher degree of satisfaction. The fuzzy outcome shows that higher units of products need not lead to a higher degree of satisfaction. The findings of this work indicate that the optimal decision depends on the vagueness factor in the fuzzy system of the mix-product selection problem. Furthermore, a high level of units of products is obtained when the vagueness is low.
1.068889
0.066667
0.037222
0.007407
0.002529
0.0001
0.000009
0
0
0
0
0
0
0
Adaptive Smoothing: A General Tool for Early Vision A method to smooth a signal while preserving discontinuities is presented. This is achieved by repeatedly convolving the signal with a very small averaging mask weighted by a measure of the signal continuity at each point. Edge detection can be performed after a few iterations, and features extracted from the smoothed signal are correctly localized (hence, no tracking is needed). This last property allows the derivation of a scale-space representation of a signal using the adaptive smoothing parameter k as the scale dimension. The relation of this process to anisotropic diffusion is shown. A scheme to preserve higher-order discontinuities is proposed, along with results on range images. Different implementations of adaptive smoothing are presented, first on a serial machine, for which a multigrid algorithm is proposed to speed up the smoothing effect, then on a single instruction multiple data (SIMD) parallel machine such as the Connection Machine. Various applications of adaptive smoothing such as edge detection, range image feature extraction, corner detection, and stereo matching are discussed.
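A one-dimensional sketch of the adaptive smoothing idea: repeated weighted averaging with a small mask whose weights shrink where the local gradient is large, so discontinuities survive the smoothing. The weight form exp(-g^2 / (2 k^2)) and the parameter values are assumptions used for illustration.

```python
# Sketch of adaptive smoothing on a 1-D signal: iterated 3-point weighted
# averaging, with weights that drop near discontinuities.  Parameters are assumed.
import numpy as np

def adaptive_smooth(signal, k=0.5, n_iter=20):
    s = signal.astype(float).copy()
    for _ in range(n_iter):
        g = np.gradient(s)                           # continuity measure
        w = np.exp(-(g ** 2) / (2.0 * k ** 2))       # small weight at discontinuities
        sw = np.pad(s * w, 1, mode="edge")
        ww = np.pad(w, 1, mode="edge")
        num = sw[:-2] + sw[1:-1] + sw[2:]
        den = ww[:-2] + ww[1:-1] + ww[2:]
        s = num / den                                # weighted 3-point average
    return s

rng = np.random.default_rng(2)
step = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
print(np.round(adaptive_smooth(step)[[25, 49, 50, 75]], 2))   # step edge is preserved
```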
Minimum error thresholding A computationally efficient solution to the problem of minimum error thresholding is derived under the assumption of object and background pixel grey level values being normally distributed. The method is applicable in multithreshold selection.
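The criterion behind minimum error thresholding can be sketched as below: for each candidate threshold the histogram is split in two, each part is fitted with a normal model, and the threshold minimizing the resulting Kittler-Illingworth criterion J(T) is kept. The synthetic bimodal histogram is an assumption used only to exercise the function.

```python
# Sketch of the minimum-error threshold criterion on a grey-level histogram.
# The synthetic two-mode pixel data are purely illustrative.
import numpy as np

def minimum_error_threshold(hist):
    hist = hist.astype(float) / hist.sum()
    levels = np.arange(hist.size)
    best_t, best_j = None, np.inf
    for t in range(1, hist.size - 1):
        p1, p2 = hist[:t].sum(), hist[t:].sum()
        if p1 < 1e-9 or p2 < 1e-9:
            continue
        m1 = (levels[:t] * hist[:t]).sum() / p1
        m2 = (levels[t:] * hist[t:]).sum() / p2
        s1 = np.sqrt(((levels[:t] - m1) ** 2 * hist[:t]).sum() / p1)
        s2 = np.sqrt(((levels[t:] - m2) ** 2 * hist[t:]).sum() / p2)
        if s1 < 1e-9 or s2 < 1e-9:
            continue
        j = 1.0 + 2.0 * (p1 * np.log(s1) + p2 * np.log(s2)) \
                - 2.0 * (p1 * np.log(p1) + p2 * np.log(p2))
        if j < best_j:
            best_j, best_t = j, t
    return best_t

rng = np.random.default_rng(3)
pixels = np.concatenate([rng.normal(60, 10, 5000), rng.normal(170, 20, 3000)])
hist, _ = np.histogram(np.clip(pixels, 0, 255), bins=256, range=(0, 256))
print(minimum_error_threshold(hist))    # expected to land between the two modes
```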
Segmentation and estimation of image region properties through cooperative hierarchical computation The task of segmenting an image and that of estimating properties of image regions may be highly interdependent. The goal of segmentation is to partition the image into regions with more or less homogeneous properties; but the processes which estimate these properties should be confined within individual regions. A cooperative, iterative approach to segmentation and property estimation is defined; the results of each process at a given iteration are used to adjust the other process at the next iteration. A linked pyramid structure provides a framework for this process iteration. This hierarchical structure ensures rapid convergence even with strictly local communication between pyramid nodes.
Edge detection using two-dimensional local structure information Local intensity discontinuities, commonly referred to as edges, are important attributes of an image. Many imaging scenarios produce image regions exhibiting complex two-dimensional (2D) local structure, such as when several edges meet to form corners and vertices. Traditional derivative-based edge operators, which typically assume that an edge can be modeled as a one-dimensional (1D) piecewise smooth step function, give misleading results in such situations. Leclerc and Zucker introduced the concept of local structure as an aid for locating intensity discontinuities. They proposed a detailed procedure for detecting discontinuities in a 1D function. They had only given a preliminary version of their scheme, however, for 2D images. Three related edge-detection methods are proposed that draw upon 2D local structural information. The first method greatly expands upon Leclerc and Zucker's 2D method. The other two methods employ a mechanism similar to that used by the maximum-homogeneity filter (a filter used for image enhancement). All three methods permit the detection of multiple edges at a point and have the flexibility to detect edges at differing spatial and angular acuity. Results show that the methods typically perform better than other operators.
Soft clustering of multidimensional data: a semi-fuzzy approach This paper discusses new approaches to unsupervised fuzzy classification of multidimensional data. In the developed clustering models, patterns are considered to belong to some but not necessarily all clusters. Accordingly, such algorithms are called ‘semi-fuzzy’ or ‘soft’ clustering techniques. Several models to achieve this goal are investigated and corresponding implementation algorithms are developed. Experimental results are reported.
Variations, margins, and statistics Design margining is used to account for design uncertainties in the measurement of performance, and thereby ensures that actual manufactured parts will operate within predicted bounds. As process and environmental variations become increasingly severe and complex in nanometer process technology, design margining overheads have increased correspondingly. This paper describes the types of process and environmental variations, their impact on performance, and the traditional design margining process used to account for these uncertainties. We consider statistical static timing analysis (SSTA) in the context of its ability to reduce timing margins through more accurate modeling of variations, and quantify potential benefits of SSTA for setup and hold time margin reduction. Combining SSTA with complementary techniques for systematic variation-aware and voltage-variation-aware timing provides meaningful design margin reduction. We introduce the concept of activity-based operating condition as a supporting construct for variation-aware STA flows.
NIST Net: a Linux-based network emulation tool Testing of network protocols and distributed applications has become increasingly complex, as the diversity of networks and underlying technologies increases, and the adaptive behavior of applications becomes more sophisticated. In this paper, we present NIST Net, a tool to facilitate testing and experimentation with network code through emulation. NIST Net enables experimenters to model and effect arbitrary performance dynamics (packet delay, jitter, bandwidth limitations, congestion, packet loss and duplication) on live IP packets passing through a commodity Linux-based PC router. We describe the emulation capabilities of NIST Net; examine its architecture; and discuss some of the implementation challenges encountered in building such a tool to operate at very high network data rates while imposing minimal processing overhead. Calibration results are provided to quantify the fidelity and performance of NIST Net over a wide range of offered loads (up to 1 Gbps), and a diverse set of emulated performance dynamics.
A training algorithm for optimal margin classifiers A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms.
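As a modern stand-in for the maximum-margin training described above (not the paper's own algorithm), the sketch below fits a kernel support vector classifier on toy data and inspects the supporting patterns. The data, kernel, and regularization constant are arbitrary assumptions for illustration.

```python
# Sketch of a maximum-margin classifier using scikit-learn's SVC as a stand-in.
# The toy Gaussian blobs and the RBF kernel are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(-2.0, 1.0, (50, 2)), rng.normal(2.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="rbf", C=1.0)      # Radial Basis Function decision surface
clf.fit(X, y)

# The decision boundary is determined by the supporting patterns only.
print("support vectors per class:", clf.n_support_)
print("training accuracy:", clf.score(X, y))
```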
A review on spectrum sensing for cognitive radio: challenges and solutions Cognitive radio is widely expected to be the next Big Bang in wireless communications. Spectrum sensing, that is, detecting the presence of the primary users in a licensed spectrum, is a fundamental problem for cognitive radio. As a result, spectrum sensing has been reborn as a very active research area in recent years despite its long history. In this paper, spectrum sensing techniques from the optimal likelihood ratio test to energy detection, matched filtering detection, cyclostationary detection, eigenvalue-based sensing, joint space-time sensing, and robust sensing methods are reviewed. Cooperative spectrum sensing with multiple receivers is also discussed. Special attention is paid to sensing methods that need little prior information on the source signal and the propagation channel. Practical challenges such as noise power uncertainty are discussed and possible solutions are provided. Theoretical analysis on the test statistic distribution and threshold setting is also investigated.
A simple Cooperative diversity method based on network path selection Cooperative diversity has been recently proposed as a way to form virtual antenna arrays that provide dramatic gains in slow fading wireless environments. However, most of the proposed solutions require distributed space-time coding algorithms, the careful design of which is left for future investigation if there is more than one cooperative relay. We propose a novel scheme that alleviates these problems and provides diversity gains on the order of the number of relays in the network. Our scheme first selects the best relay from a set of M available relays and then uses this "best" relay for cooperation between the source and the destination. We develop and analyze a distributed method to select the best relay that requires no topology information and is based on local measurements of the instantaneous channel conditions. This method also requires no explicit communication among the relays. The success (or failure) to select the best available path depends on the statistics of the wireless channel, and a methodology to evaluate performance for any kind of wireless channel statistics, is provided. Information theoretic analysis of outage probability shows that our scheme achieves the same diversity-multiplexing tradeoff as achieved by more complex protocols, where coordination and distributed space-time coding for M relay nodes is required, such as those proposed by Laneman and Wornell (2003). The simplicity of the technique allows for immediate implementation in existing radio hardware and its adoption could provide for improved flexibility, reliability, and efficiency in future 4G wireless systems.
Using polynomial chaos to compute the influence of multiple random surfers in the PageRank model The PageRank equation computes the importance of pages in a web graph relative to a single random surfer with a constant teleportation coefficient. To be globally relevant, the teleportation coefficient should account for the influence of all users. Therefore, we correct the PageRank formulation by modeling the teleportation coefficient as a random variable distributed according to user behavior. With this correction, the PageRank values themselves become random. We present two methods to quantify the uncertainty in the random PageRank: a Monte Carlo sampling algorithm and an algorithm based on the truncated polynomial chaos expansion of the random quantities. With each of these methods, we compute the expectation and standard deviation of the PageRanks. Our statistical analysis shows that the standard deviations of the PageRanks are uncorrelated with the PageRank vector.
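The Monte Carlo half of the approach above can be sketched as follows: draw teleportation coefficients from an assumed user-behavior distribution, recompute PageRank for each draw by power iteration, and report the mean and standard deviation of the PageRank vector. The four-node graph and the Beta distribution are illustrative assumptions, not the paper's data.

```python
# Sketch: PageRank with a random teleportation coefficient alpha, quantified by
# Monte Carlo sampling.  Graph and alpha distribution are assumed for illustration.
import numpy as np

def pagerank(P, alpha, tol=1e-10, max_iter=1000):
    """Power iteration for the PageRank vector of a column-stochastic matrix P."""
    n = P.shape[0]
    x = np.full(n, 1.0 / n)
    v = np.full(n, 1.0 / n)                      # uniform teleportation vector
    for _ in range(max_iter):
        x_new = alpha * (P @ x) + (1.0 - alpha) * v
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

# 4-node toy web graph; each column sums to 1 (column-stochastic).
P = np.array([[0.0, 0.5, 0.5, 0.0],
              [1/3, 0.0, 0.0, 0.5],
              [1/3, 0.5, 0.0, 0.5],
              [1/3, 0.0, 0.5, 0.0]])

rng = np.random.default_rng(5)
alphas = rng.beta(10, 4, size=2000)              # assumed user-behavior distribution
samples = np.array([pagerank(P, a) for a in alphas])
print("mean PageRank:", np.round(samples.mean(axis=0), 3))
print("std  PageRank:", np.round(samples.std(axis=0), 4))
```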
Practical RDF schema reasoning with annotated semantic web data Semantic Web data with annotations is becoming available, with the YAGO knowledge base being a prominent example. In this paper we present an approach to perform the closure of large RDF Schema annotated semantic web data using standard database technology. In particular, we exploit several alternatives to address the problem of computing transitive closure with real fuzzy semantic data extracted from YAGO in the PostgreSQL database management system. We benchmark these alternatives and compare them to classical RDF Schema reasoning, providing the first implementation of annotated RDF Schema in persistent storage.
Heden's bound on maximal partial spreads We prove Heden's result that the deficiency δ of a maximal partial spread in PG(3, q) is greater than 1 + ½(1 + √5)√q unless δ - 1 is a multiple of p, where q = p^n. When q is odd and not a square, we are able to improve this lower bound to roughly √(3q).
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.2112
0.2112
0.2112
0.2112
0.1056
0
0
0
0
0
0
0
0
0
Automatically Generated Linguistic Summaries of Energy Consumption Data In this paper a method is described to automatically generate linguistic summaries of real world time series data provided by a utility company. The methodology involves the following main steps: partitioning of time series into fuzzy intervals, calculation of statistical indicators for the partitions, generation of summarising sentences and determination of the truthfulness of these sentences, and finally selection of relevant sentences from the generated set of sentences.
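One way the truthfulness of a summarising sentence can be scored is the classic "Q of the records are S" scheme (Yager/Zadeh style), sketched below: average the records' membership in the summarizer and pass the ratio through the fuzzy quantifier. The membership functions and the consumption data are illustrative assumptions, not the utility company's data or the paper's exact procedure.

```python
# Sketch: truth degree of the summary "most days had high consumption".
# Membership functions and the daily consumption values are assumed.
import numpy as np

def mu_high_consumption(kwh):
    """Assumed fuzzy set 'high consumption' (linear shoulder from 20 to 30 kWh)."""
    return np.clip((kwh - 20.0) / 10.0, 0.0, 1.0)

def mu_most(r):
    """Assumed relative quantifier 'most' (linear from 0.3 to 0.8)."""
    return np.clip((r - 0.3) / 0.5, 0.0, 1.0)

daily_kwh = np.array([12.0, 18.5, 27.0, 31.2, 24.8, 15.3, 29.9, 33.4])
truth = mu_most(mu_high_consumption(daily_kwh).mean())
print(f'Truth of "most days had high consumption": {truth:.2f}')
```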
Perception-based approach to time series data mining Time series data mining (TSDM) techniques permit exploring large amounts of time series data in search of consistent patterns and/or interesting relationships between variables. TSDM is becoming increasingly important as a knowledge management tool where it is expected to reveal knowledge structures that can guide decision making in conditions of limited certainty. Human decision making in problems related to the analysis of time series databases is usually based on perceptions like "end of the day", "high temperature", "quickly increasing", "possible", etc. Though many effective algorithms of TSDM have been developed, the integration of TSDM algorithms with human decision making procedures is still an open problem. In this paper, we consider the architecture of a perception-based decision making system in time series database domains, integrating perception-based TSDM, computing with words and perceptions, and expert knowledge. The new tasks which should be solved by the perception-based TSDM methods to enable their integration in such systems are discussed. These tasks include: precisiation of perceptions, shape pattern identification, and pattern retranslation. We show how different methods developed so far in TSDM for manipulation of perception-based information can be used for development of a fuzzy perception-based TSDM approach. This approach is grounded in computing with words and perceptions, which permits human perception-based inference mechanisms to be formalized. The discussion is illustrated by examples from economics, finance, meteorology, medicine, etc.
Imprecision Measures for Type-2 Fuzzy Sets: Applications to Linguistic Summarization of Databases The paper proposes new definitions of (im)precision measures for type-2 fuzzy sets representing linguistic terms and linguistically quantified statements. The proposed imprecision measures extend similar concepts for traditional (type-1) fuzzy sets, cf. [1,2]. Applications of those new concepts to linguistic summarization of data are proposed in the context of the problem statement of finding the best summaries.
Interpretability assessment of fuzzy knowledge bases: A cointension based approach Computing with words (CWW) relies on linguistic representation of knowledge that is processed by operating at the semantical level defined through fuzzy sets. Linguistic representation of knowledge is a major issue when fuzzy rule based models are acquired from data by some form of empirical learning. Indeed, these models are often requested to exhibit interpretability, which is normally evaluated in terms of structural features, such as rule complexity, properties on fuzzy sets and partitions. In this paper we propose a different approach for evaluating interpretability that is based on the notion of cointension. The interpretability of a fuzzy rule-based model is measured in terms of cointension degree between the explicit semantics, defined by the formal parameter settings of the model, and the implicit semantics conveyed to the reader by the linguistic representation of knowledge. Implicit semantics calls for a representation of user's knowledge which is difficult to externalise. Nevertheless, we identify a set of properties - which we call ''logical view'' - that is expected to hold in the implicit semantics and is used in our approach to evaluate the cointension between explicit and implicit semantics. In practice, a new fuzzy rule base is obtained by minimising the fuzzy rule base through logical properties. Semantic comparison is made by evaluating the performances of the two rule bases, which are supposed to be similar when the two semantics are almost equivalent. If this is the case, we deduce that the logical view is applicable to the model, which can be tagged as interpretable from the cointension viewpoint. These ideas are then used to define a strategy for assessing interpretability of fuzzy rule-based classifiers (FRBCs). The strategy has been evaluated on a set of pre-existent FRBCs, acquired by different learning processes from a well-known benchmark dataset. Our analysis highlighted that some of them are not cointensive with user's knowledge, hence their linguistic representation is not appropriate, even though they can be tagged as interpretable from a structural point of view.
SAINTETIQ: a fuzzy set-based approach to database summarization In this paper, a new approach to database summarization is introduced through our model named SAINTETIQ. Based on a hierarchical conceptual clustering algorithm, SAINTETIQ incrementally builds a summary hierarchy from database records. Furthermore, the fuzzy set-based representation of data makes it possible to handle vague, uncertain or imprecise information, as well as to improve the accuracy and robustness of the summary construction process. Finally, background knowledge provides a user-defined vocabulary to synthesize the summary descriptions and make them highly intelligible.
Detecting Faces in Images: A Survey Images containing faces are essential to intelligent vision-based human computer interaction, and research efforts in face processing include face recognition, face tracking, pose estimation, and expression recognition. However, many reported methods assume that the faces in an image or an image sequence have been identified and localized. To build fully automated systems that analyze the information contained in face images, robust and efficient face detection algorithms are required. Given a single image, the goal of face detection is to identify all image regions which contain a face regardless of its three-dimensional position, orientation, and lighting conditions. Such a problem is challenging because faces are nonrigid and have a high degree of variability in size, shape, color, and texture. Numerous techniques have been developed to detect faces in a single image, and the purpose of this paper is to categorize and evaluate these algorithms. We also discuss relevant issues such as data collection, evaluation metrics, and benchmarking. After analyzing these algorithms and identifying their limitations, we conclude with several promising directions for future research.
NIST Net: a Linux-based network emulation tool Testing of network protocols and distributed applications has become increasingly complex, as the diversity of networks and underlying technologies increase, and the adaptive behavior of applications becomes more sophisticated. In this paper, we present NIST Net, a tool to facilitate testing and experimentation with network code through emulation. NIST Net enables experimenters to model and effect arbitrary performance dynamics (packet delay, jitter, bandwidth limitations, congestion, packet loss and duplication) on live IP packets passing through a commodity Linux-based PC router. We describe the emulation capabilities of NIST Net; examine its architecture; and discuss some of the implementation challenges encountered in building such a tool to operate at very high network data rates while imposing minimal processing overhead. Calibration results are provided to quantify the fidelity and performance of NIST Net over a wide range of offered loads (up to 1 Gbps), and a diverse set of emulated performance dynamics.
Compressive wireless sensing Compressive Sampling is an emerging theory that is based on the fact that a relatively small number of random projections of a signal can contain most of its salient information. In this paper, we introduce the concept of Compressive Wireless Sensing for sensor networks in which a fusion center retrieves signal field information from an ensemble of spatially distributed sensor nodes. Energy and bandwidth are scarce resources in sensor networks and the relevant metrics of interest in our context are 1) the latency involved in information retrieval; and 2) the associated power-distortion trade-off. It is generally recognized that given sufficient prior knowledge about the sensed data (e.g., statistical characterization, homogeneity etc.), there exist schemes that have very favorable power-distortion-latency trade-offs. We propose a distributed matched source-channel communication scheme, based in part on recent results in compressive sampling theory, for estimation of sensed data at the fusion center and analyze, as a function of the number of sensor nodes, the trade-offs between power, distortion and latency. Compressive wireless sensing is a universal scheme in the sense that it requires no prior knowledge about the sensed data. This universality, however, comes at the cost of optimality (in terms of a less favorable power-distortion-latency trade-off) and we quantify this cost relative to the case when sufficient prior information about the sensed data is assumed.
Analysis of the domain mapping method for elliptic diffusion problems on random domains. In this article, we provide a rigorous analysis of the solution to elliptic diffusion problems on random domains. In particular, based on the decay of the Karhunen-Loève expansion of the domain perturbation field, we establish decay rates for the derivatives of the random solution that are independent of the stochastic dimension. For the implementation of a related approximation scheme, like quasi-Monte Carlo quadrature, stochastic collocation, etc., we propose parametric finite elements to compute the solution of the diffusion problem on each individual realization of the domain generated by the perturbation field. This simplifies the implementation and yields a non-intrusive approach. Having this machinery at hand, we can easily transfer it to stochastic interface problems. The theoretical findings are complemented by numerical examples for both, stochastic interface problems and boundary value problems on random domains.
Coding Algorithms for 3DTV—A Survey Research efforts on 3DTV technology have been strengthened worldwide recently, covering the whole media processing chain from capture to display. Different 3DTV systems rely on different 3D scene representations that integrate various types of data. Efficient coding of these data is crucial for the success of 3DTV. Compression of pixel-type data including stereo video, multiview video, and associated depth or disparity maps extends available principles of classical video coding. Powerful algorithms and open international standards for multiview video coding and coding of video plus depth data are available and under development, which will provide the basis for introduction of various 3DTV systems and services in the near future. Compression of 3D mesh models has also reached a high level of maturity. For static geometry, a variety of powerful algorithms are available to efficiently compress vertices and connectivity. Compression of dynamic 3D geometry is currently a more active field of research. Temporal prediction is an important mechanism to remove redundancy from animated 3D mesh sequences. Error resilience is important for transmission of data over error prone channels, and multiple description coding (MDC) is a suitable way to protect data. MDC of still images and 2D video has already been widely studied, whereas multiview video and 3D meshes have been addressed only recently. Intellectual property protection of 3D data by watermarking is a pioneering research area as well. The 3D watermarking methods in the literature are classified into three groups, considering the dimensions of the main components of scene representations and the resulting components after applying the algorithm. In general, 3DTV coding technology is maturing. Systems and services may enter the market in the near future. However, the research area is relatively young compared to coding of other types of media. Therefore, there is still a lot of room for improvement and new development of algorithms.
Random Alpha Pagerank We suggest a revision to the PageRank random surfer model that considers the influence of a population of random surfers on the PageRank vector. In the revised model, each member of the population has its own teleportation parameter chosen from a probability distribution, and consequently, the ranking vector is random. We propose three algorithms for computing the statistics of the random ranking vector based respectively on (i) random sampling, (ii) paths along the links of the underlying graph, and (iii) quadrature formulas. We find that the expectation of the random ranking vector produces similar rankings to its deterministic analogue, but the standard deviation gives uncorrelated information (under a Kendall-tau metric) with myriad potential uses. We examine applications of this model to web spam.
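The sampling algorithm (i) mentioned above is straightforward to sketch: draw teleportation parameters from a chosen distribution, solve PageRank for each draw, and report the mean and standard deviation of the ranking vector. The toy graph, the uniform distribution on [0.1, 0.9], and the tolerances below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Minimal sketch of the sampling approach: repeatedly solve PageRank with a randomly
# drawn teleportation parameter and summarize the resulting random ranking vector.

def pagerank(P, alpha, tol=1e-12, max_iter=1000):
    """Power iteration for the PageRank vector with column-stochastic P."""
    n = P.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        x_new = alpha * P @ x + (1.0 - alpha) / n
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

# Column-stochastic transition matrix of a 3-node toy graph (assumed example).
P = np.array([[0.0, 0.5, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0]])

rng = np.random.default_rng(0)
samples = np.array([pagerank(P, a) for a in rng.uniform(0.1, 0.9, size=200)])
print("mean ranking:", samples.mean(axis=0))
print("std  ranking:", samples.std(axis=0))
```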
An overview of quality of experience measurement challenges for video applications in IP networks The increase in multimedia content on the Internet has created a renewed interest in quality assessment. There is, however, a main difference from traditional quality assessment approaches: the focus now lies on user-perceived quality, as opposed to the classically proposed network-centered approach. In this paper we overview the most relevant challenges in performing Quality of Experience (QoE) assessment in IP networks and highlight the particular considerations necessary when compared to alternative, already deployed mechanisms such as Quality of Service (QoS). To assist in handling such challenges, we first discuss the different approaches to Quality of Experience assessment along with the most relevant QoE metrics, and then we discuss how they are used to provide objective results about user satisfaction.
Fuzzy Concepts and Formal Methods: A Fuzzy Logic Toolkit for Z It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However, some system problems, particularly those drawn from the IS problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper suggests fuzzy set theory as a possible representation scheme for this imprecision or approximation. We provide a summary of a toolkit that defines the operators, measures and modifiers necessary for the manipulation of fuzzy sets and relations. We also provide some examples of the laws which establish an isomorphism between the extended notation presented here and conventional Z when applied to Boolean sets and relations.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.066667
0.066667
0.066667
0.016667
0.005556
0
0
0
0
0
0
0
0
0
Multivariate quadrature on adaptive sparse grids In this paper, we study the potential of adaptive sparse grids for multivariate numerical quadrature in the moderate or high dimensional case, i.e. for a number of dimensions beyond three and up to several hundreds. There, conventional methods typically suffer from the curse of dimension or are unsatisfactory with respect to accuracy. Our sparse grid approach, based upon a direct higher order discretization on the sparse grid, overcomes this dilemma to some extent, and introduces additional flexibility with respect to both the order of the 1D quadrature rule applied (in the sense of Smolyak's tensor product decomposition) and the placement of grid points. The presented algorithm is applied to some test problems and compared with other existing methods.
A Statistical Method for Fast and Accurate Capacitance Extraction in the Presence of Floating Dummy Fills Dummy fills are being extensively used to enhance CMP planarity. However, the presence of these fills can have a significant impact on the values of interconnect capacitances. Accurate capacitance extraction accounting for these dummies is CPU intensive and cumbersome. For one, there are typically hundreds to thousands of dummy fills in a small layout region, which stress general-purpose capacitance extractors. Second, since these dummy fills are not introduced by the designers, it is of no interest to them to see the capacitances to dummy fills in the extraction reports; they are interested in equivalent capacitances associated with signal, power, and ground nets. Hence, extracting equivalent capacitances across nets of interest in the presence of a large number of dummy fills is an important and challenging problem. We present a novel extension to the widely popular Monte-Carlo capacitance extraction technique. Our extension handles the dummy fills efficiently. We demonstrate the accuracy and scalability of our approach by two methods: (i) the classical, golden technique of finding equivalent interconnect capacitances by eliminating dummy fills through the network reduction method and (ii) comparing extracted capacitances with measurement data from a test chip.
Hierarchical computation of 3-D interconnect capacitance using direct boundary element method The idea of Appel's hierarchical algorithm handling the many-body problem is implemented in the direct boundary element method (BEM) for computation of 3D VLSI parasitic capacitance. Both the electric potential and normal electric field intensity on the boundary are involved, so it can be much easier to handle problems with multiple dielectrics and finite dielectric structure than the indirect BEM. Three kinds of boundaries (forced boundary, natural boundary and dielectric interface) are treated. Two integral kernels with different singularity (1/r, 1/r/sup 3/) are involved while computing the interaction between the boundary elements. These features make it significantly distinct from the hierarchical algorithm based on the indirect BEM, which only handles single dielectric, one integral kernel and one forced boundary. The coefficient matrix is generated and stored hierarchically in this paper. As a result, computation cost of the matrix is reduced, and the matrix-vector multiplication in the GMRES iteration is accelerated, so computation speed is improved significantly.
ARMS - automatic residue-minimization based sampling for multi-point modeling techniques This paper describes an automatic methodology for optimizing sample point selection for use in the framework of model order reduction (MOR). The procedure, based on the maximization of the dimension of the subspace spanned by the samples, iteratively selects new samples in an efficient and automatic fashion, without computing the new vectors and with no prior assumptions on the system behavior. The scheme is general, and valid for single and multiple dimensions, with applicability to rational nominal MOR approaches and to multi-dimensional sampling-based parametric MOR methodologies. The paper also presents an integrated algorithm for multi-point MOR, with automatic sample and order selection based on the transfer function error estimation. Results on a variety of industrial examples demonstrate the accuracy and robustness of the technique.
Cubature formulas for symmetric measures in higher dimensions with few points We study cubature formulas for d-dimensional integrals with an arbitrary symmetric weight function of product form. We present a construction that yields a high polynomial exactness: for fixed degree l = 5 or l = 7 and large dimension d the number of knots is only slightly larger than the lower bound of Möller and much smaller compared to the known constructions. We also show, for any odd degree l = 2k + 1, that the minimal number of points is almost independent of the weight function. This is also true for the integration over the (Euclidean) sphere.
A stochastic integral equation method for modeling the rough surface effect on interconnect capacitance In This work we describe a stochastic integral equation method for computing the mean value and the variance of capacitance of interconnects with random surface roughness. An ensemble average Green's function is combined with a matrix Neumann expansion to compute nominal capacitance and its variance. This method avoids the time-consuming Monte Carlo simulations and the discretization of rough surfaces. Numerical experiments show that the results of the new method agree very well with Monte Carlo simulation results.
Numerical Integration using Sparse Grids We present and review algorithms for the numerical integration of multivariate functions defined over d-dimensional cubes using several variants of the sparse grid method first introduced by Smolyak [51]. In this approach, multivariate quadrature formulas are constructed using combinations of tensor products of suited one-dimensional formulas. The computing cost is almost independent of the dimension of the problem if the function under consideration has bounded mixed derivatives. We suggest...
Fast Analysis of a Large-Scale Inductive Interconnect by Block-Structure-Preserved Macromodeling To efficiently analyze the large-scale interconnect dominant circuits with inductive couplings (mutual inductances), this paper introduces a new state matrix, called VNA, to stamp inverse-inductance elements by replacing inductive-branch current with flux. The state matrix under VNA is diagonal-dominant, sparse, and passive. To further explore the sparsity and hierarchy at the block level, a new matrix-stretching method is introduced to reorder coupled fluxes into a decoupled state matrix with a bordered block diagonal (BBD) structure. A corresponding block-structure-preserved model-order reduction, called BVOR, is developed to preserve the sparsity and hierarchy of the BBD matrix at the block level. This enables us to efficiently build and simulate the macromodel within a SPICE-like circuit simulator. Experiments show that our method achieves up to 7× faster modeling building time, up to 33× faster simulation time, and as much as 67× smaller waveform error compared to SAPOR [a second-order reduction based on nodal analysis (NA)] and PACT (a first-order 2×2 structured reduction based on modified NA).
Multilinear Analysis of Image Ensembles: TensorFaces Natural images are the composite consequence of multiple factors related to scene structure, illumination, and imaging. Multilinear algebra, the algebra of higher-order tensors, offers a potent mathematical framework for analyzing the multifactor structure of image ensembles and for addressing the difficult problem of disentangling the constituent factors or modes. Our multilinear modeling technique employs a tensor extension of the conventional matrix singular value decomposition (SVD), known as the N-mode SVD. As a concrete example, we consider the multilinear analysis of ensembles of facial images that combine several modes, including different facial geometries (people), expressions, head poses, and lighting conditions. Our resulting "TensorFaces" representation has several advantages over conventional eigenfaces. More generally, multilinear analysis shows promise as a unifying framework for a variety of computer vision problems.
Multi-output local Gaussian process regression: Applications to uncertainty quantification We develop an efficient, Bayesian Uncertainty Quantification framework using a novel treed Gaussian process model. The tree is adaptively constructed using information conveyed by the observed data about the length scales of the underlying process. On each leaf of the tree, we utilize Bayesian Experimental Design techniques in order to learn a multi-output Gaussian process. The constructed surrogate can provide analytical point estimates, as well as error bars, for the statistics of interest. We numerically demonstrate the effectiveness of the suggested framework in identifying discontinuities, local features and unimportant dimensions in the solution of stochastic differential equations.
Accurate and efficient gate-level parametric yield estimation considering correlated variations in leakage power and performance Increasing levels of process variation in current technologies have a major impact on power and performance, and result in parametric yield loss. In this work we develop an efficient gate-level approach to accurately estimate the parametric yield defined by leakage power and delay constraints, by finding the joint probability distribution function (jpdf) for delay and leakage power. We consider inter-die variations as well as intra-die variations with correlated and random components. The correlation between power and performance arises due to their dependence on common process parameters and is shown to have a significant impact on yield in high-frequency bins. We also propose a method to estimate parametric yield given the power/delay jpdf that is much faster than numerical integration with good accuracy. The proposed approach is implemented and compared with Monte Carlo simulations and shows high accuracy, with the yield estimates achieving an average error of 2%.
Using corpus statistics and WordNet relations for sense identification Corpus-based approaches to word sense identification have flexibility and generality but suffer from a knowledge acquisition bottleneck. We show how knowledge-based techniques can be used to open the bottleneck by automatically locating training corpora. We describe a statistical classifier that combines topical context with local cues to identify a word sense. The classifier is used to disambiguate a noun, a verb, and an adjective. A knowledge base in the form of WordNet's lexical relations is used to automatically locate training examples in a general text corpus. Test results are compared with those from manually tagged training examples.
Convex normal functions revisited The lattice L_u of upper semicontinuous convex normal functions with convolution ordering arises in studies of type-2 fuzzy sets. In 2002, Kawaguchi and Miyakoshi [Extended t-norms as logical connectives of fuzzy truth values, Multiple-Valued Logic 8(1) (2002) 53-69] showed that this lattice is a complete Heyting algebra. Later, Harding et al. [Lattices of convex, normal functions, Fuzzy Sets and Systems 159 (2008) 1061-1071] gave an improved description of this lattice and showed it was a continuous lattice in the sense of Gierz et al. [A Compendium of Continuous Lattices, Springer, Berlin, 1980]. In this note we show the lattice L_u is isomorphic to the lattice of decreasing functions from the real unit interval [0,1] to the interval [0,2] under pointwise ordering, modulo equivalence almost everywhere. This allows development of further properties of L_u. It is shown that L_u is completely distributive, is a compact Hausdorff topological lattice whose topology is induced by a metric, and is self-dual via a period two antiautomorphism. We also show the lattice L_u has another realization of natural interest in studies of type-2 fuzzy sets. It is isomorphic to a quotient of the lattice L of all convex normal functions under the convolution ordering. This quotient identifies two convex normal functions if they agree almost everywhere and their intervals of increase and decrease agree almost everywhere.
A statistical approach to the timing-yield optimization of pipeline circuits The continuous miniaturization of semiconductor devices imposes serious threats to design robustness against process variations and environmental fluctuations. Modern circuit designs may suffer from design uncertainties, unpredictable in the design phase or even after manufacturing. This paper presents an optimization technique to make pipeline circuits robust against delay variations and thus maximize timing yield. By trading larger flip-flops for smaller latches, the proposed approach can be used as a post-synthesis or post-layout optimization tool, allowing accurate timing information to be available. Experimental results show an average of 31% timing yield improvement for pipeline circuits. They suggest that our method is promising for high-speed designs and is capable of tolerating clock variations.
1.022501
0.023184
0.023184
0.011679
0.007902
0.002967
0.001
0.000247
0.000072
0.00002
0.000002
0
0
0
Asynchronous verifiable secret sharing and proactive cryptosystems Verifiable secret sharing is an important primitive in distributed cryptography. With the growing interest in the deployment of threshold cryptosystems in practice, the traditional assumption of a synchronous network has to be reconsidered and generalized to an asynchronous model. This paper proposes the first practical verifiable secret sharing protocol for asynchronous networks. The protocol creates a discrete logarithm-based sharing and uses only a quadratic number of messages in the number of participating servers. It yields the first asynchronous Byzantine agreement protocol in the standard model whose efficiency makes it suitable for use in practice. Proactive cryptosystems are another important application of verifiable secret sharing. The second part of this paper introduces proactive cryptosystems in asynchronous networks and presents an efficient protocol for refreshing the shares of a secret key for discrete logarithm-based sharings.
Distributed Key Generation for the Internet Although distributed key generation (DKG) has been studied for some time, it has never been examined outside of the synchronous setting. We present the first realistic DKG architecture for use over the Internet. We propose a practical system model and define an efficient verifiable secret sharing scheme in it. We observe the necessity of Byzantine agreement for asynchronous DKG and analyze the difficulty of using a randomized protocol for it. Using our verifiable secret sharing scheme and a leader-based agreement protocol, we then design a DKG protocol for public-key cryptography. Finally, along with traditional proactive security, we also introduce group modification primitives in our system.
Universally composable security: a new paradigm for cryptographic protocols We propose a novel paradigm for defining security of cryptographic protocols, called universally composable security. The salient property of universally composable definitions of security is that they guarantee security even when a secure protocol is composed of an arbitrary set of protocols, or more generally when the protocol is used as a component of an arbitrary system. This is an essential property for maintaining security of cryptographic protocols in complex and unpredictable environments such as the Internet. In particular, universally composable definitions guarantee security even when an unbounded number of protocol instances are executed concurrently in an adversarially controlled manner, they guarantee non-malleability with respect to arbitrary protocols, and more. We show how to formulate universally composable definitions of security for practically any cryptographic task. Furthermore, we demonstrate that practically any such definition can be realized using known techniques, as long as only a minority of the participants are corrupted. We then proceed to formulate universally composable definitions of a wide array of cryptographic tasks, including authenticated and secure communication, key-exchange, public-key encryption, signature, commitment, oblivious transfer, zero knowledge and more. We also make initial steps towards studying the realizability of the proposed definitions in various settings.
COCA: A secure distributed online certification authority COCA is a fault-tolerant and secure on-line certification authority that has been built and deployed both in a local area network and in the Internet. Replication is used to achieve availability; proactive recovery with threshold cryptography is used for digitally signing certificates in a way that defends against mobile adversaries which attack, compromise, and control one replica for a limited period of time before moving on to another. Relatively weak assumptions characterize environments in which COCA's protocols will execute correctly. No assumption is made about execution speed and message delivery delays; channels are expected to exhibit only intermittent reliability; and with 3t+1 COCA servers up to t may be faulty or compromised. The result is a system with inherent defenses to certain denial of service attacks because, by their very nature, weak assumptions are difficult for attackers to invalidate. In addition, traditional techniques, including request authorization, resource management based on segregation and scheduling different classes of requests, as well as caching results of expensive cryptographic operations further reduce COCA's vulnerability to denial of service attacks. Results from experiments in a local area network and the Internet allow a quantitative evaluation of the various means COCA employs to resist denial of service attacks.
Proactive secure message transmission in asynchronous networks We study the problem of secure message transmission among a group of parties in an insecure asynchronous network, where an adversary may repeatedly break into some parties for transient periods of time. A solution for this task is needed in order to use proactive cryptosystems in wide-area networks with loose synchronization. Parties have access to a secure hardware device that stores some cryptographic keys, but can carry out only a very limited set of operations. We provide a formal model of the system, using the framework for asynchronous reactive systems proposed by Pfitzmann and Waidner (Symposium on Security & Privacy, 2001), present a protocol for proactive message transmission, and prove it secure using the composability property of the framework.
An algorithm for pronominal anaphora resolution This paper presents an algorithm for identifying the noun phrase antecedents of third person pronouns and lexical anaphors (reflexives and reciprocals). The algorithm applies to the syntactic representations generated by McCord's Slot Grammar parser and relies on salience measures derived from syntactic structure and a simple dynamic model of attentional state. Like the parser, the algorithm is implemented in Prolog. The authors have tested it extensively on computer manual texts and conducted a blind test on manual text containing 360 pronoun occurrences. The algorithm successfully identifies the antecedent of the pronoun for 86% of these pronoun occurrences. The relative contributions of the algorithm's components to its overall success rate in this blind test are examined. Experiments were conducted with an enhancement of the algorithm that contributes statistically modelled information concerning semantic and real-world relations to the algorithm's decision procedure. Interestingly, this enhancement only marginally improves the algorithm's performance (by 2%). The algorithm is compared with other approaches to anaphora resolution that have been proposed in the literature. In particular, the search procedure of Hobbs' algorithm was implemented in the Slot Grammar framework and applied to the sentences in the blind test set. The authors' algorithm achieves a higher rate of success (by 4%) than Hobbs' algorithm. The relation of the algorithm to the centering approach is discussed, as well as to models of anaphora resolution that invoke a variety of informational factors in ranking antecedent candidates.
The concept of a linguistic variable and its application to approximate reasoning-III By a linguistic variable we mean a variable whose values are words or sentences in a natural or artificial language. For example, Age is a linguistic variable if its values are linguistic rather than numerical, i.e., young, not young, very young, quite young, old, not very old and not very young, etc., rather than 20, 21, 22, 23, ... In more specific terms, a linguistic variable is characterized by a quintuple (𝒳, T(𝒳), U, G, M) in which 𝒳 is the name of the variable; T(𝒳) is the term-set of 𝒳, that is, the collection of its linguistic values; U is a universe of discourse; G is a syntactic rule which generates the terms in T(𝒳); and M is a semantic rule which associates with each linguistic value X its meaning, M(X), where M(X) denotes a fuzzy subset of U. The meaning of a linguistic value X is characterized by a compatibility function, c : U → [0, 1], which associates with each u in U its compatibility with X. Thus, the compatibility of age 27 with young might be 0.7, while that of 35 might be 0.2. The function of the semantic rule is to relate the compatibilities of the so-called primary terms in a composite linguistic value (e.g., young and old in not very young and not very old) to the compatibility of the composite value. To this end, the hedges such as very, quite, extremely, etc., as well as the connectives and and or are treated as nonlinear operators which modify the meaning of their operands in a specified fashion.
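A small numeric illustration of this compatibility calculus follows, with assumed membership functions for the primary terms young and old and the customary operator choices for the hedges and connectives (very as squaring, not as complement, and as minimum); the specific shapes are illustrative, not taken from the paper.

```python
# Assumed compatibility functions for the primary terms, plus common operator choices
# for hedges and connectives: very = squaring, not = complement, and = minimum.

def young(age):
    if age <= 25:
        return 1.0
    if age >= 50:
        return 0.0
    return 1.0 - (age - 25) / 25.0

def old(age):
    if age <= 50:
        return 0.0
    if age >= 75:
        return 1.0
    return (age - 50) / 25.0

very = lambda mu: mu ** 2      # concentration hedge
not_ = lambda mu: 1.0 - mu     # complement
and_ = min                     # connective 'and'

age = 35
print("young:", young(age))                                   # 0.6
print("very young:", very(young(age)))                        # 0.36
print("not very young and not very old:",
      and_(not_(very(young(age))), not_(very(old(age)))))     # 0.64
```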
MapReduce: simplified data processing on large clusters MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.
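The programming model itself is easy to imitate in a few lines; the toy word-count sketch below reproduces only the map, shuffle/group, and reduce data flow, not the distributed runtime, fault tolerance, or scheduling described in the paper.

```python
from collections import defaultdict

# Toy, single-process imitation of the MapReduce programming model (word count).

def map_fn(document):
    for word in document.split():
        yield word.lower(), 1

def reduce_fn(word, counts):
    return word, sum(counts)

def mapreduce(documents, map_fn, reduce_fn):
    groups = defaultdict(list)            # the "shuffle" phase: group values by key
    for doc in documents:
        for key, value in map_fn(doc):
            groups[key].append(value)
    return dict(reduce_fn(k, v) for k, v in groups.items())

docs = ["the quick brown fox", "the lazy dog", "the fox"]
print(mapreduce(docs, map_fn, reduce_fn))   # {'the': 3, 'quick': 1, 'fox': 2, ...}
```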
Analysis of the domain mapping method for elliptic diffusion problems on random domains. In this article, we provide a rigorous analysis of the solution to elliptic diffusion problems on random domains. In particular, based on the decay of the Karhunen-Loève expansion of the domain perturbation field, we establish decay rates for the derivatives of the random solution that are independent of the stochastic dimension. For the implementation of a related approximation scheme, like quasi-Monte Carlo quadrature, stochastic collocation, etc., we propose parametric finite elements to compute the solution of the diffusion problem on each individual realization of the domain generated by the perturbation field. This simplifies the implementation and yields a non-intrusive approach. Having this machinery at hand, we can easily transfer it to stochastic interface problems. The theoretical findings are complemented by numerical examples for both, stochastic interface problems and boundary value problems on random domains.
Sensor Selection via Convex Optimization We consider the problem of choosing a set of k sensor measurements, from a set of m possible or potential sensor measurements, that minimizes the error in estimating some parameters. Solving this problem by evaluating the performance for each of the $\binom{m}{k}$ possible choices of sensor measurements is not practical unless m and k are small. In this paper, we describe a heuristic, based on convex optimization, for approximately solving this problem. Our heuristic gives a subset selection as well as a bound on the best performance that can be achieved by any selection of k sensor measurements. There is no guarantee that the gap between the performance of the chosen subset and the performance bound is always small; but numerical experiments suggest that the gap is small in many cases. Our heuristic method requires on the order of $m^3$ operations; for m = 1000 possible sensors, we can carry out sensor selection in a few seconds on a 2-GHz personal computer.
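A rough sketch of the convex relaxation behind such a heuristic is shown below: relax the 0/1 selection variables to the box [0, 1], maximize the log-determinant of the resulting information matrix, and round the largest weights. It assumes the cvxpy package; the problem sizes, random data, and rounding rule are illustrative choices, not the paper's experiments.

```python
import numpy as np
import cvxpy as cp

# Sketch of the relaxed sensor-selection problem (assumed setup, not the paper's).

rng = np.random.default_rng(1)
m, n, k = 40, 5, 10                        # candidate sensors, parameter dim, budget
A = rng.standard_normal((m, n))            # row i is the i-th candidate measurement vector

z = cp.Variable(m)                         # relaxed selection weights in [0, 1]
info = A.T @ cp.diag(z) @ A                # sum_i z_i a_i a_i^T, affine in z
problem = cp.Problem(cp.Maximize(cp.log_det(info)),
                     [cp.sum(z) == k, z >= 0, z <= 1])
problem.solve()

chosen = sorted(np.argsort(z.value)[-k:].tolist())   # simple rounding: keep k largest weights
print("upper bound from relaxation:", problem.value)
print("selected sensors:", chosen)
```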
Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the $k$ dominant components of the singular value decomposition of an $m \times n$ matrix. (i) For a dense input matrix, randomized algorithms require $\mathcal{O}(mn \log(k))$ floating-point operations (flops) in contrast to $\mathcal{O}(mnk)$ for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to $\mathcal{O}(k)$ passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
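The core proto-algorithm described above (sample the range with a random test matrix, orthonormalize, project, then factor the small projected matrix) fits in a few lines of NumPy; the matrix, target rank, and oversampling parameter below are arbitrary choices for illustration.

```python
import numpy as np

# Compact sketch of a randomized truncated SVD via a Gaussian range finder.

def randomized_svd(A, k, oversample=10, rng=None):
    rng = np.random.default_rng(rng)
    Omega = rng.standard_normal((A.shape[1], k + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)          # orthonormal basis for an approximate range
    B = Q.T @ A                             # small projected matrix
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 300))  # rank-20 matrix
U, s, Vt = randomized_svd(A, k=20)
print("relative error:", np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```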
Using trapezoids for representing granular objects: Applications to learning and OWA aggregation We discuss the role and benefits of using trapezoidal representations of granular information. We focus on the use of level sets as a tool for implementing many operations on trapezoidal sets. We point out the simplification that the linearity of the trapezoid brings by requiring us to perform operations on only two level sets. We investigate the classic learning algorithm in the case when our observations are granule objects represented as trapezoidal fuzzy sets. An important issue that arises is the adverse effect that very uncertain observations have on the quality of our estimates. We suggest an approach to addressing this problem using the specificity of the observations to control its effect. We next consider the OWA aggregation of information represented as trapezoids. An important problem that arises here is the ordering of the trapezoidal fuzzy sets needed for the OWA aggregation. We consider three approaches to accomplish this ordering based on the location, specificity and fuzziness of the trapezoids. From these three different approaches three fundamental methods of ordering are developed. One based on the mean of the 0.5 level sets, another based on the length of the 0.5 level sets and a third based on the difference in lengths of the core and support level sets. Throughout this work particular emphasis is placed on the simplicity of working with trapezoids while still retaining a rich representational capability.
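One of the ideas above, OWA aggregation of trapezoids carried out on their level sets with the arguments ordered by the mean of the 0.5-level set, can be sketched as follows; the trapezoids and the OWA weights are made-up example values, and the endpoint-wise aggregation of just two level sets relies on the linearity of trapezoids noted in the abstract.

```python
# Sketch: OWA aggregation of trapezoidal fuzzy sets on their level sets, ordering the
# arguments by the midpoint of the 0.5-level set. Trapezoids are (a, b, c, d) with
# support [a, d] and core [b, c]; weights and inputs are assumed example values.

def level_set(trap, alpha):
    a, b, c, d = trap
    return (a + alpha * (b - a), d - alpha * (d - c))   # interval at height alpha

def owa_trapezoids(traps, weights):
    # Order arguments (largest first) by the midpoint of their 0.5-level set.
    ordered = sorted(traps, key=lambda t: sum(level_set(t, 0.5)) / 2.0, reverse=True)
    # Linearity of trapezoids: aggregating the alpha = 0 and alpha = 1 level sets
    # endpoint-wise fully determines the aggregated trapezoid.
    return tuple(sum(w * t[j] for w, t in zip(weights, ordered)) for j in range(4))

traps = [(1, 2, 3, 4), (2, 4, 5, 7), (0, 1, 1, 2)]
weights = [0.5, 0.3, 0.2]          # OWA weights, largest argument weighted most
print(owa_trapezoids(traps, weights))   # (1.3, 2.8, 3.6, 5.1)
```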
New Type-2 Rule Ranking Indices for Designing Parsimonious Interval Type-2 Fuzzy Logic Systems In this paper, we propose two novel indices for type-2 fuzzy rule ranking to identify the most influential fuzzy rules in designing type-2 fuzzy logic systems, and name them the R-values and c-values of fuzzy rules, respectively. The R-values of type-2 fuzzy rules are obtained by applying QR decomposition, in which there is no need to estimate a rank as required in the SVD-QR with column pivoting algorithm. The c-values of type-2 fuzzy rules are suggested to rank rules based on the effects of rule consequents. Experimental results on a signal recovery problem have shown that by using the proposed indices the most influential type-2 fuzzy rules can be effectively selected to construct parsimonious type-2 fuzzy models while system performance is kept at a satisfactory level.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.05393
0.063537
0.052184
0.052184
0.013843
0
0
0
0
0
0
0
0
0
The Scientific Community Metaphor Scientific communities have proven to be extremely successful at solving problems. They are inherently parallel systems and their macroscopic nature makes them amenable to careful study. In this paper the character of scientific research is examined drawing on sources in the philosophy and history of science. We maintain that the success of scientific research depends critically on its concurrency and pluralism. A variant of the language Ether is developed that embodies notions of concurrency necessary to emulate some of the problem solving behavior of scientific communities. Capabilities of scientific communities are discussed in parallel with simplified models of these capabilities in this language.
On Agent-Mediated Electronic Commerce This paper surveys and analyzes the state of the art of agent-mediated electronic commerce (e-commerce), concentrating particularly on the business-to-consumer (B2C) and business-to-business (B2B) aspects. From the consumer buying behavior perspective, agents are being used in the following activities: need identification, product brokering, buyer coalition formation, merchant brokering, and negotiation. The roles of agents in B2B e-commerce are discussed through the business-to-business transaction model that identifies agents as being employed in partnership formation, brokering, and negotiation. Having identified the roles for agents in B2C and B2B e-commerce, some of the key underpinning technologies of this vision are highlighted. Finally, we conclude by discussing the future directions and potential impediments to the wide-scale adoption of agent-mediated e-commerce.
Decision station: a notion for a situated DSS Despite the growing need for decision support in the digital age, there has not been an adequate increase in interest in research and development in Decision Support Systems (DSSs). In our view, the vision for a new type of DSS should provide for a tighter integration with the problem domain and include an implementation phase in addition to the traditional intelligence, design, and choice phases. We argue that an adequate DSS in our dynamic electronic era should be situated in the problem environment. We propose a generic architecture for such a DSS incorporating sensors, effectors, and enhanced interfaces in addition to the traditional DSS kernel. We suggest the term "Decision Station" to refer to such a situated DSS. We further elaborate on the possibilities of implementing situated DSSs in different segments of e-business. We argue in favor of using intelligent agents as the basis of this new type of DSS. We further propose an architecture and describe a prototype of such a DSS.
Case-based decision support
Implications of buyer decision theory for design of e-commerce websites In the rush to open their website, e-commerce sites too often fail to support buyer decision making and search, resulting in a loss of sale and the customer's repeat business. This paper reviews why this occurs and the failure of many B2C and B2B website executives to understand that appropriate decision support and search technology can't be fully bought off-the-shelf. Our contention is that significant investment and effort is required at any given website in order to create the decision support and search agents needed to properly support buyer decision making. We provide a framework to guide such effort (derived from buyer behavior choice theory); review the open problems that e-catalog sites pose to the framework and to existing search engine technology; discuss underlying design principles and guidelines; validate the framework and guidelines with a case study; and discuss lessons learned and steps needed to better support buyer decision behavior in the future. Future needs are also pinpointed.
An algorithm for pronominal anaphora resolution This paper presents an algorithm for identifying the noun phrase antecedents of third person pronouns and lexical anaphors (reflexives and reciprocals). The algorithm applies to the syntactic representations generated by McCord's Slot Grammar parser and relies on salience measures derived from syntactic structure and a simple dynamic model of attentional state. Like the parser, the algorithm is implemented in Prolog. The authors have tested it extensively on computer manual texts and conducted a blind test on manual text containing 360 pronoun occurrences. The algorithm successfully identifies the antecedent of the pronoun for 86% of these pronoun occurrences. The relative contributions of the algorithm's components to its overall success rate in this blind test are examined. Experiments were conducted with an enhancement of the algorithm that contributes statistically modelled information concerning semantic and real-world relations to the algorithm's decision procedure. Interestingly, this enhancement only marginally improves the algorithm's performance (by 2%). The algorithm is compared with other approaches to anaphora resolution that have been proposed in the literature. In particular, the search procedure of Hobbs' algorithm was implemented in the Slot Grammar framework and applied to the sentences in the blind test set. The authors' algorithm achieves a higher rate of success (by 4%) than Hobbs' algorithm. The relation of the algorithm to the centering approach is discussed, as well as to models of anaphora resolution that invoke a variety of informational factors in ranking antecedent candidates.
The concept of a linguistic variable and its application to approximate reasoning-III By a linguistic variable we mean a variable whose values are words or sentences in a natural or artificial language. For example, Age is a linguistic variable if its values are linguistic rather than numerical, i.e., young, not young, very young, quite young, old, not very old and not very young, etc., rather than 20, 21, 22, 23, ... In more specific terms, a linguistic variable is characterized by a quintuple (𝒳, T(𝒳), U, G, M) in which 𝒳 is the name of the variable; T(𝒳) is the term-set of 𝒳, that is, the collection of its linguistic values; U is a universe of discourse; G is a syntactic rule which generates the terms in T(𝒳); and M is a semantic rule which associates with each linguistic value X its meaning, M(X), where M(X) denotes a fuzzy subset of U. The meaning of a linguistic value X is characterized by a compatibility function, c : U → [0, 1], which associates with each u in U its compatibility with X. Thus, the compatibility of age 27 with young might be 0.7, while that of 35 might be 0.2. The function of the semantic rule is to relate the compatibilities of the so-called primary terms in a composite linguistic value (e.g., young and old in not very young and not very old) to the compatibility of the composite value. To this end, the hedges such as very, quite, extremely, etc., as well as the connectives and and or are treated as nonlinear operators which modify the meaning of their operands in a specified fashion.
Fuzzy logic systems for engineering: a tutorial A fuzzy logic system (FLS) is unique in that it is able to simultaneously handle numerical data and linguistic knowledge. It is a nonlinear mapping of an input data (feature) vector into a scalar output, i.e., it maps numbers into numbers. Fuzzy set theory and fuzzy logic establish the specifics of the nonlinear mapping. This tutorial paper provides a guided tour through those aspects of fuzzy sets and fuzzy logic that are necessary to synthesize an FLS. It does this by starting with crisp set theory and dual logic and demonstrating how both can be extended to their fuzzy counterparts. Because engineering systems are, for the most part, causal, we impose causality as a constraint on the development of the FLS. After synthesizing a FLS, we demonstrate that it can be expressed mathematically as a linear combination of fuzzy basis functions, and is a nonlinear universal function approximator, a property that it shares with feedforward neural networks. The fuzzy basis function expansion is very powerful because its basis functions can be derived from either numerical data or linguistic knowledge, both of which can be cast into the forms of IF-THEN rules
Convergence Rates of Best N-term Galerkin Approximations for a Class of Elliptic sPDEs Deterministic Galerkin approximations of a class of second order elliptic PDEs with random coefficients on a bounded domain D⊂ℝ^d are introduced and their convergence rates are estimated. The approximations are based on expansions of the random diffusion coefficients in L^2(D)-orthogonal bases, and on viewing the coefficients of these expansions as random parameters y = y(ω) = (y_i(ω)). This yields an equivalent parametric deterministic PDE whose solution u(x,y) is a function of both the space variable x∈D and the in general countably many parameters y. We establish new regularity theorems describing the smoothness properties of the solution u as a map from y∈U=(−1,1)^∞ to $V=H^{1}_{0}(D)$. These results lead to analytic estimates on the V norms of the coefficients (which are functions of x) in a so-called "generalized polynomial chaos" (gpc) expansion of u. Convergence estimates of approximations of u by best N-term truncated V-valued polynomials in the variable y∈U are established. These estimates are of the form $N^{-r}$, where the rate of convergence r depends only on the decay of the random input expansion. It is shown that r exceeds the benchmark rate 1/2 afforded by Monte Carlo simulations with N "samples" (i.e., deterministic solves) under mild smoothness conditions on the random diffusion coefficients. A class of fully discrete approximations is obtained by Galerkin approximation from a hierarchic family $\{V_{l}\}_{l=0}^{\infty}\subset V$ of finite element spaces in D of the coefficients in the N-term truncated gpc expansions of u(x,y). In contrast to previous works, the level l of spatial resolution is adapted to the gpc coefficient. New regularity theorems describing the smoothness properties of the solution u as a map from y∈U=(−1,1)^∞ to a smoothness space W⊂V are established, leading to analytic estimates on the W norms of the gpc coefficients and on their space discretization error. The space W coincides with $H^{2}(D)\cap H^{1}_{0}(D)$ in the case where D is a smooth or convex domain. Our analysis shows that in realistic settings a convergence rate $N_{\mathrm{dof}}^{-s}$ in terms of the total number of degrees of freedom $N_{\mathrm{dof}}$ can be obtained. Here the rate s is determined by both the best N-term approximation rate r and the approximation order of the space discretization in D.
Coding Algorithms for 3DTV—A Survey Research efforts on 3DTV technology have been strengthened worldwide recently, covering the whole media processing chain from capture to display. Different 3DTV systems rely on different 3D scene representations that integrate various types of data. Efficient coding of these data is crucial for the success of 3DTV. Compression of pixel-type data including stereo video, multiview video, and associated depth or disparity maps extends available principles of classical video coding. Powerful algorithms and open international standards for multiview video coding and coding of video plus depth data are available and under development, which will provide the basis for introduction of various 3DTV systems and services in the near future. Compression of 3D mesh models has also reached a high level of maturity. For static geometry, a variety of powerful algorithms are available to efficiently compress vertices and connectivity. Compression of dynamic 3D geometry is currently a more active field of research. Temporal prediction is an important mechanism to remove redundancy from animated 3D mesh sequences. Error resilience is important for transmission of data over error prone channels, and multiple description coding (MDC) is a suitable way to protect data. MDC of still images and 2D video has already been widely studied, whereas multiview video and 3D meshes have been addressed only recently. Intellectual property protection of 3D data by watermarking is a pioneering research area as well. The 3D watermarking methods in the literature are classified into three groups, considering the dimensions of the main components of scene representations and the resulting components after applying the algorithm. In general, 3DTV coding technology is maturing. Systems and services may enter the market in the near future. However, the research area is relatively young compared to coding of other types of media. Therefore, there is still a lot of room for improvement and new development of algorithms.
Asymptotic Analysis of MAP Estimation via the Replica Method and Applications to Compressed Sensing The replica method is a nonrigorous but well-known technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method, under the assumption of replica symmetry, to study estimators that are maximum a posteriori (MAP) under a postulated prior distribution. It is shown that with random linear measurements and Gaussian noise, the replica-symmetric prediction of the asymptotic behavior of the postulated MAP estimate of an $n$-dimensional vector “decouples” as $n$ scalar postulated MAP estimators. The result is based on applying a hardening argument to the replica analysis of postulated posterior mean estimators of Tanaka and of Guo and Verdú. The replica-symmetric postulated MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, least absolute shrinkage and selection operator (LASSO), linear estimation with thresholding, and zero norm-regularized estimation. In the case of LASSO estimation, the scalar estimator reduces to a soft-thresholding operator, and for zero norm-regularized estimation, it reduces to a hard threshold. Among other benefits, the replica method provides a computationally tractable method for precisely predicting various performance metrics including mean-squared error and sparsity pattern recovery probability.
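The two scalar estimators named at the end of this abstract are standard closed forms; as a small, self-contained illustration (independent of the replica analysis itself), a NumPy sketch of soft and hard thresholding with an illustrative threshold `lam`:

```python
import numpy as np

def soft_threshold(x, lam):
    """Scalar soft-thresholding: the form the LASSO scalar estimator reduces to."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_threshold(x, lam):
    """Scalar hard-thresholding: the form zero norm-regularized estimation reduces to."""
    return np.where(np.abs(x) > lam, x, 0.0)

# Example: the same scalar inputs pushed through both estimators.
x = np.array([-2.0, -0.3, 0.1, 0.8, 3.0])
print(soft_threshold(x, 0.5))  # [-1.5, -0., 0., 0.3, 2.5]
print(hard_threshold(x, 0.5))  # [-2., 0., 0., 0.8, 3.]
```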
Preferences and their application in evolutionary multiobjective optimization The paper describes a new preference method and its use in multiobjective optimization. These preferences are developed with a goal to reduce the cognitive overload associated with the relative importance of a certain criterion within a multiobjective design environment involving large numbers of objectives. Their successful integration with several genetic-algorithm-based design search and optimi...
New Type-2 Rule Ranking Indices for Designing Parsimonious Interval Type-2 Fuzzy Logic Systems In this paper, we propose two novel indices for type-2 fuzzy rule ranking to identify the most influential fuzzy rules in designing type-2 fuzzy logic systems, and name them the R-values and c-values of fuzzy rules, respectively. The R-values of type-2 fuzzy rules are obtained by applying QR decomposition, in which there is no need to estimate a rank as required in the SVD-QR with column pivoting algorithm. The c-values of type-2 fuzzy rules rank rules based on the effects of rule consequents. Experimental results on a signal recovery problem show that, by using the proposed indices, the most influential type-2 fuzzy rules can be effectively selected to construct parsimonious type-2 fuzzy models while keeping system performance at a satisfactory level.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
0
DIRECT Mode Early Decision Optimization Based on Rate Distortion Cost Property and Inter-view Correlation. In this paper, an Efficient DIRECT Mode Early Decision (EDMED) algorithm is proposed for low complexity multiview video coding. Two phases are included in the proposed EDMED: 1) early decision of DIRECT mode is made before doing time-consuming motion estimation/disparity estimation, where adaptive rate-distortion (RD) cost threshold, inter-view DIRECT mode correlation and coded block pattern are jointly utilized; and 2) false rejected DIRECT mode macroblocks of the first phase are then successfully terminated based on weighted RD cost comparison between 16×16 and DIRECT modes for further complexity reduction. Experimental results show that the proposed EDMED algorithm achieves 11.76% more complexity reduction than that achieved by the state-of-the-art SDMET for the temporal views. Also, it achieves a reduction of 50.98% to 81.13% (69.15% on average) in encoding time for inter-view, which is 29.31% and 15.03% more than the encoding time reduction achieved by the state-of-the-art schemes. Meanwhile, the average Peak Signal-to-Noise Ratio (PSNR) degrades 0.05 dB and average bit rate increases by -0.37%, which is negligible. © 1963-2012 IEEE.
Adaptive mode decision for multiview video coding based on macroblock position constraint model. Multiview video coding (MVC) exploits mode decision, motion estimation and disparity estimation to achieve high compression ratio, which results in an extensive computational complexity. This paper presents an efficient mode decision approach for MVC using a macroblock (MB) position constraint model (MPCM). The proposed approach reduces the number of candidate modes by utilizing the mode correlation and rate distortion cost (RD cost) in the previously encoded frames/views. Specifically, the mode correlations both in the temporal-spatial domain and the inter-view are modeled with MPCM. Then, MPCM is exploited to select the optimal prediction direction for the current encoding MB. Finally, the inter mode is early determined in the optimal prediction direction. Experimental results show that the proposed method can save 86.03 % of encoding time compared with the exhaustive mode decision used in the reference software of joint multiview video coding, with only 0.077 dB loss in Bjontegaard delta peak signal-to-noise ratio (BDPSNR) and 2.29 % increment of the total Bjontegaard delta bit rate (BDBR), which is superior to the performances of state-of-the-art approaches.
Early DIRECT Mode Decision for MVC Using MB Mode Homogeneity and RD Cost Correlation. Multi-view video coding (MVC) adopts variable size mode decision to achieve high coding efficiency. However, its high computational complexity is a bottleneck of enabling MVC into practical real-time applications. In this paper, an early termination strategy is proposed for DIRECT mode decision of MVC by exploiting mode homogeneity and rate distortion (RD) cost correlation. By comparing the RD cost between DIRECT mode and Inter$16\times 16$ mode, an adaptive threshold is defined based on the MB's mode homogeneity and RD cost so as to early terminate the remaining inter and intra modes. Experimental results show that compared with the original JMVC model, the proposed approach can reduce the total encoding time from 65.08% to 91.45% (80.43% on average). Meanwhile, the Bjontegaard delta peak signal-to-noise ratio only decreases 0.031 dB and Bjontegaard delta bit rate increases 0.97% on average, which is a negligible loss of coding efficiency and superior to the performance of state-of-the-art methods.
Fast depth map mode decision based on depth-texture correlation and edge classification for 3D-HEVC. A fast depth map mode decision algorithm for 3D-HEVC is proposed.The depth map-texture video correlation is exploited throughout fast mode detection.The edge classification is employed in the procedure of intra/inter prediction.Experimental results demonstrate the effectiveness of the proposed algorithm. The 3D extension of High Efficiency Video Coding (3D-HEVC) has been adopted as the emerging 3D video coding standard to support the multi-view video plus depth map (MVD) compression. In the joint model of 3D-HEVC design, the exhaustive mode decision is required to be checked all the possible prediction modes and coding levels to find the one with least rate distortion cost in depth map coding. Furthermore, new coding tools (such as depth-modeling mode (DMM) and segment-wise depth coding (SDC)) are exploited for the characteristics of depth map to improve the coding efficiency. These achieve the highest possible coding efficiency to code depth map, but also bring a significant computational complexity which limits 3D-HEVC from real-time applications. In this paper, we propose a fast depth map mode decision algorithm for 3D-HEVC by jointly using the correlation of depth map-texture video and the edge information of depth map. Since the depth map and texture video represent the same scene at the same time instant (they have the same motion characteristics), it is not efficient to use all the prediction modes and coding levels in depth map coding. Therefore, we can skip some specific prediction modes and depth coding levels rarely used in corresponding texture video. Meanwhile, the depth map is mainly characterized by sharp object edges and large areas of nearly constant regions. By fully exploiting these characteristics, we can skip some prediction modes which are rarely used in homogeneity regions based on the edge classification. Experimental results show that the proposed algorithm achieves considerable encoding time saving while maintaining almost the same rate-distortion (RD) performance as the original 3D-HEVC encoder.
3D-TV Content Storage and Transmission. There exist a variety of ways to represent 3D content, including stereo and multiview video, as well as frame-compatible and depth-based video formats. There are also a number of compression architectures and techniques that have been introduced in recent years. This paper provides an overview of relevant 3D representation and compression formats. It also analyzes some of the merits and drawbacks ...
Motion Hooks for the Multiview Extension of HEVC MV-HEVC refers to the multiview extension of High Efficiency Video Coding (HEVC). At the time of writing, MV-HEVC was being developed by the Joint Collaborative Team on 3D Video Coding Extension Development (JCT-3V) of International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) Moving Picture Experts Group and ITU-T VCEG. Before HEVC itself was technically finalized in January 2013, the development of MV-HEVC had already started and it was decided that MV-HEVC would only contain high-level syntax changes compared with HEVC, i.e., no changes to block-level processes, to enable the reuse of the first-generation HEVC decoder hardware as is for constructing an MV-HEVC decoder with only firmware changes corresponding to the high-level syntax part of the codec. Consequently, any block-level process that is not necessary for HEVC itself but on the other hand is useful for MV-HEVC can only be enabled through so-called hooks. Motion hooks refer to techniques that do not have a significant impact on the HEVC single-view version 1 codec and can mainly improve MV-HEVC. This paper presents techniques for efficient MV-HEVC coding by introducing hooks into the HEVC design to accommodate inter-view prediction in MV-HEVC. These hooks relate to motion prediction, hence named motion hooks. Some of the motion hooks developed by the authors have been adopted into HEVC during its finalization. Simulation results show that the proposed motion hooks provide on average 4% of bitrate reduction for the views coded with inter-view prediction.
Depth Map Coding With Distortion Estimation Of Rendered View New data formats that include both video and the corresponding depth maps, such as multiview plus depth (MVD), enable new video applications in which intermediate video views (virtual views) can be generated using the transmitted/stored video views (reference views) and the corresponding depth maps as inputs. We propose a depth map coding method based on a new distortion measurement by deriving relationships between distortions in coded depth map and rendered view. In our experiments we use a codec based on H.264/AVC tools, where the rate-distortion (RD) optimization for depth encoding makes use of the new distortion metric. Our experimental results show the efficiency of the proposed method, with coding gains of up to 1.6 dB in interpolated frame quality as compared to encoding the depth maps using the same coding tools but applying RD optimization based on conventional distortion metrics.
Sets with type-2 operations The algebra of truth values of type-2 fuzzy sets consists of all mappings of the unit interval to itself, with type-2 operations that are convolutions of ordinary max and min operations. This paper is concerned with a special subalgebra of this truth value algebra, namely the set of nonzero functions with values in the two-element set {0,1}. This algebra can be identified with the set of all non-empty subsets of the unit interval, but the operations are not the usual union and intersection. We give simplified descriptions of the operations and derive the basic algebraic properties of this algebra, including the identification of its automorphism group. We also discuss some subalgebras and homomorphisms between them and look briefly at t-norms on this algebra of sets.
Compressive Sampling and Lossy Compression Recent results in compressive sampling have shown that sparse signals can be recovered from a small number of random measurements. This property raises the question of whether random measurements can provide an efficient representation of sparse signals in an information-theoretic sense. Through both theoretical and experimental results, we show that encoding a sparse signal through simple scalar ...
A model of consensus in group decision making under linguistic assessments This paper presents a consensus model in group decision making under linguistic assessments. It is based on the use of linguistic preferences to provide individuals' opinions, and on the use of fuzzy majority of consensus, represented by means of a linguistic quantifier. Several linguistic consensus degrees and linguistic distances are defined, acting on three levels. The consensus degrees indicate how far a group of individuals is from the maximum consensus, and linguistic distances indicate how far each individual is from current consensus labels over the preferences. This consensus model allows to incorporate more human consistency in decision support systems.
On the position of intuitionistic fuzzy set theory in the framework of theories modelling imprecision Intuitionistic fuzzy sets [K.T. Atanassov, Intuitionistic fuzzy sets, VII ITKR's Session, Sofia (deposed in Central Science-Technical Library of Bulgarian Academy of Science, 1697/84), 1983 (in Bulgarian)] are an extension of fuzzy set theory in which not only a membership degree is given, but also a non-membership degree, which is more or less independent. Considering the increasing interest in intuitionistic fuzzy sets, it is useful to determine the position of intuitionistic fuzzy set theory in the framework of the different theories modelling imprecision. In this paper we discuss the mathematical relationship between intuitionistic fuzzy sets and other models of imprecision.
R-POPTVR: a novel reinforcement-based POPTVR fuzzy neural network for pattern classification. In general, a fuzzy neural network (FNN) is characterized by its learning algorithm and its linguistic knowledge representation. However, it does not necessarily interact with its environment when the training data is assumed to be an accurate description of the environment under consideration. In interactive problems, it would be more appropriate for an agent to learn from its own experience through interactions with the environment, i.e., reinforcement learning. In this paper, three clustering algorithms are developed based on the reinforcement learning paradigm. This allows a more accurate description of the clusters as the clustering process is influenced by the reinforcement signal. They are the REINFORCE clustering technique I (RCT-I), the REINFORCE clustering technique II (RCT-II), and the episodic REINFORCE clustering technique (ERCT). The integrations of the RCT-I, the RCT-II, and the ERCT within the pseudo-outer product truth value restriction (POPTVR), which is a fuzzy neural network integrated with the truth restriction value (TVR) inference scheme in its five layered feedforward neural network, form the RPOPTVR-I, the RPOPTVR-II, and the ERPOPTVR, respectively. The Iris, Phoneme, and Spiral data sets are used for benchmarking. For both Iris and Phoneme data, the RPOPTVR is able to yield better classification results which are higher than the original POPTVR and the modified POPTVR over the three test trials. For the Spiral data set, the RPOPTVR-II is able to outperform the others by at least a margin of 5.8% over multiple test trials. The three reinforcement-based clustering techniques applied to the POPTVR network are able to exhibit the trial-and-error search characteristic that yields higher qualitative performance.
Compressed sensing of astronomical images: orthogonal wavelets domains A simple approach for orthogonal wavelets in compressed sensing (CS) applications is presented. We compare efficient algorithm for different orthogonal wavelet measurement matrices in CS for image processing from scanned photographic plates (SPP). Some important characteristics were obtained for astronomical image processing of SPP. The best orthogonal wavelet choice for measurement matrix construction in CS for image compression of images of SPP is given. The image quality measure for linear and nonlinear image compression method is defined.
Fuzzy OWA model for information security risk management One of the methods for information security risk assessment is the substantiated choice and realization of countermeasures against threats. A situational fuzzy OWA model of a multicriteria decision making problem concerning the choice of countermeasures for reducing information security risks is proposed. The proposed model makes it possible to modify the associated weights of criteria based on the information entropy with respect to the aggregation situation. The advantage of the model is the continuous improvement of the weights of the criteria and the aggregation of experts’ opinions depending on the parameter characterizing the aggregation situation.
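For readers unfamiliar with OWA aggregation, the basic operator (weights are attached to rank positions rather than to particular experts or criteria) can be sketched as follows; the weights and scores are illustrative, and the entropy-based weight adjustment proposed in this abstract is not reproduced here.

```python
import numpy as np

def owa(values, weights):
    """Ordered Weighted Averaging: weight the arguments sorted in descending order."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]   # descending order
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0), "OWA weights must sum to 1"
    return float(np.dot(w, v))

# Illustrative risk scores from three experts, aggregated with optimism-leaning weights.
print(owa([0.7, 0.4, 0.9], [0.5, 0.3, 0.2]))  # 0.5*0.9 + 0.3*0.7 + 0.2*0.4 = 0.74
```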
1.078
0.076667
0.066667
0.016667
0.003582
0.000219
0.000072
0
0
0
0
0
0
0
From Personal Area Networks to Personal Networks: A User Oriented Approach This paper introduces Personal Networks (PN), a new concept related to the emerging field of pervasive computing that extends the concept of a Personal Area Network (PAN). The latter refers to a space of small coverage (less than 10 m) around a person where ad-hoc communication occurs, typically between portable and mobile computing devices such as laptops, Personal Digital Assistants, cell phones, headsets and digital gadgets. We envision a PN to have a core consisting of a PAN, which is extended on-demand and in an ad-hoc fashion with personal resources or resources belonging to others. This extension will physically be made via infrastructure networks, e.g., the Internet, an organisation's intranet, or a PAN belonging to another person, a vehicle area network, or a home network. The PN is configured to support the application and takes into account context- and location information. The resources, which can become part of a PN, will be very diverse. These resources can be private or may have to be shared with other people. They may be free or one may have to pay for their usage. They can be physically close or far away. In this paper, we discuss a number of challenging research problems and potential directions for solutions. Specifically we address the architecture of PNs, techniques for resource and environment discovery, self-organisation, routing, co-operation with fixed infrastructures, and security and accounting.
Mining Sequential Patterns of Event Streams in a Smart Home Application.
Incremental Temporal Pattern Mining Using Efficient Batch-Free Stream Clustering. This paper addresses the problem of temporal pattern mining from multiple data streams containing temporal events. Temporal events are considered as real-world events aligned with comprehensive starting and ending timing information rather than simple integer timestamps. Predefined relations, such as "before" and "after", describe the heterogeneous relationships hidden in temporal data with limited diversity. In this work, the relationships among events are learned dynamically from the temporal information. Each event is treated as an object with a label and numerical attributes. An online-offline model is used as the primary structure for analyzing the evolving multiple streams. Different distance functions on temporal events and sequences can be applied depending on the application scenario. A prefix tree is introduced for a fast incremental pattern update. Events in the real world usually persist for some period. It is more natural to model events as intervals with temporal information rather than as points on the timeline. Based on the representation proposed in this work, our approach can also be extended to handle interval data. Experiments show how the method, with richer information and more accurate results than the state-of-the-art, processes both point-based and interval-based event streams efficiently.
Improving Bandwidth Utilization of Intermittent Links in Highly Dynamic Ad Hoc Networks. Non-uniform node densities occur and intermittent links exist in highly dynamic ad hoc networks. To fit these networks, researchers usually combine delay tolerant network (DTN) routing protocols and mobile ad hoc network (MANET) routing protocols. The DTN protocol separates end-to-end links into multiple DTN links, which consist of multi-hop MANET links. Determining how to arrange DTN links and MANET links from source to end and dealing with intermittent links are performance issues, because node density ranges from sparse to dense and MANET protocols are much lighter than DTN protocols. This paper presents HMDTN, an application-network cross-layer framework, to solve the previously mentioned issues. The application layer in HMDTN supports disrupt tolerance with a large data buffer while adjusting the routing table on the basis of the connection state of links (link is disrupted or recovered), which are collected by the network layer. As a result, HMDTN increases the bandwidth utilization of intermittent links without compromising the efficiency of the MANET protocol in a reliable network. The HMDTN prototype was implemented based on Bytewalla (a Java version of DTN2) and Netfilter-based AODV. Experiments on Android devices show that unlike AODV and Epidemic, HMDTN increases the bandwidth utilization of intermittent links with a negligible increase of network overhead. In particular, HMDTN maintains the network throughput as high as regular network conditions even if the network undergoes relatively long-term (dozens of seconds or few minutes) data link disruptions.
A Testbed for Evaluating Video Streaming Services in LTE. With the deployment of the first commercial long term evolution (LTE) networks, mobile operators need to understand how quality of service (QoS) network indicators and codec parameters affect subjective quality in video streaming services as perceived by customers. In this paper, the development of a testbed for evaluating the quality of experience (QoE) of 3D video streaming service over LTE is described. The proposed system consists of three elements: a streaming server, an internet protocol-level mobile network emulator, based on NetEm tool, and a streaming client. The main contribution of this testbed is the modification of NetEm code to model the impact of time correlation between packet arrivals on the packet delay in a video stream. In the testbed, different network conditions are configured by setting network emulator parameters based on the results obtained by a system-level LTE simulator. Results show how average network load and user position inside a cell have a strong impact on the QoS and QoE perceived by the end video user.
On the Optimal Presentation Duration for Subjective Video Quality Assessment Subjective quality assessment is an essential component of modern image and video processing, both for the validation of objective metrics and for the comparison of coding methods. However, the standard procedures used to collect data can be prohibitively time-consuming. One way of increasing the efficiency of data collection is to reduce the duration of test sequences from the 10 second length currently used in most subjective video quality assessment experiments. Here, we explore the impact of reducing sequence length upon perceptual accuracy when identifying compression artefacts. A group of four reference sequences, together with five levels of distortion, are used to compare the subjective ratings of viewers watching videos between 1.5 and 10 seconds long. We identify a smooth function indicating that accuracy increases linearly as the length of the sequences increases from 1.5 seconds to 7 seconds. The accuracy of observers viewing 1.5 second sequences was significantly inferior to those viewing sequences of 5 seconds, 7 seconds and 10 seconds. We argue that sequences between 5 seconds and 10 seconds produce satisfactory levels of accuracy but the practical benefits of acquiring more data lead us to recommend the use of 5 second sequences for future video quality assessment studies that use the DSCQS methodology.
Is QoE estimation based on QoS parameters sufficient for video quality assessment? Internet service providers today offer a variety of audio, video and data services. Traditional approaches for quality assessment of video services were based on Quality of Service (QoS) measurement; these are performance measurements at the network level. However, accurate quality assessment requires the video to be assessed subjectively by the user, while QoS parameters are easier to obtain than subjective QoE scores. Therefore, some recent works have investigated objective approaches to estimate QoE scores based on measured QoS parameters, with the main purpose of controlling QoE based on QoS measurements. This paper reviews several solutions and models presented in the literature. We discuss some other factors that must be considered in the mapping process between QoS and QoE. The impact of these factors on perceived QoE is verified through subjective tests.
Bio-Inspired Multi-User Beamforming For QoE Provisioning In Cognitive Radio Networks In cognitive radio networks (CRN), secondary users (SU) can share the licensed spectrum with the primary users (PU). Compared with a traditional network, spectrum utilization in CRN is therefore greatly improved. Meanwhile, in addition to objective QoS metrics, many subjective factors, such as service satisfaction and user experience, should not be ignored when assessing network performance. Thus, quality of experience (QoE) can reflect network performance more comprehensively than QoS. In this paper, we study a multi-user beamforming problem in CRN and design a QoE provisioning model based on a specific QoS-QoE mapping scheme, jointly considering physical-layer techniques and the key performance assessment indicator. A bio-inspired algorithm is utilized to solve the beamforming optimization problem. The simulation results show that better service satisfaction and higher energy efficiency are obtained when optimizing for QoE rather than for traditional QoS indicators.
Proposed framework for evaluating quality of experience in a mobile, testbed-oriented living lab setting The framework presented in this paper enables the evaluation of Quality of Experience (QoE) in a mobile, testbed-oriented Living Lab setting. As a result, it fits within the shift towards more user-centric approaches in innovation research and aims to bridge the gap between technical parameters and human experience factors. In view of this, Quality of Experience is seen as a multi-dimensional concept, which should be considered from an interdisciplinary perspective. Although several approaches for evaluating perceived QoE have been proposed in the past, they tend to focus on a limited number of objective dimensions and fail to grasp the subjective counterparts of users' experiences. We therefore propose a distributed architecture for monitoring network Quality of Service (QoS), context information and subjective user experience based on the functional requirements related to real-time experience measurements in real-life settings. This approach allows us to evaluate all relevant QoE-dimensions in a mobile context.
3-D Video Representation Using Depth Maps Current 3-D video (3DV) technology is based on stereo systems. These systems use stereo video coding for pictures delivered by two input cameras. Typically, such stereo systems only reproduce these two camera views at the receiver and stereoscopic displays for multiple viewers require wearing special 3-D glasses. On the other hand, emerging autostereoscopic multiview displays emit a large numbers of views to enable 3-D viewing for multiple users without requiring 3-D glasses. For representing a large number of views, a multiview extension of stereo video coding is used, typically requiring a bit rate that is proportional to the number of views. However, since the quality improvement of multiview displays will be governed by an increase of emitted views, a format is needed that allows the generation of arbitrary numbers of views with the transmission bit rate being constant. Such a format is the combination of video signals and associated depth maps. The depth maps provide disparities associated with every sample of the video signal that can be used to render arbitrary numbers of additional views via view synthesis. This paper describes efficient coding methods for video and depth data. For the generation of views, synthesis methods are presented, which mitigate errors from depth estimation and coding.
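The depth-to-disparity relation underpinning this kind of view synthesis is the standard rectified-stereo formula d = f·B/Z; a minimal sketch with purely illustrative camera parameters (the coding and rendering tools of the paper are not reproduced):

```python
import numpy as np

def depth_to_disparity(depth_m, focal_px, baseline_m):
    """Horizontal disparity (in pixels) between two rectified views: d = f * B / Z."""
    return focal_px * baseline_m / depth_m

# Illustrative numbers only: 1000 px focal length, 5 cm baseline, depths in metres.
depth = np.array([2.0, 4.0, 8.0])
print(depth_to_disparity(depth, focal_px=1000.0, baseline_m=0.05))  # [25.  12.5  6.25]
```

In depth-image-based rendering, each texture sample is shifted horizontally by (a scaled version of) this disparity to form the virtual view, and disocclusion holes are filled afterwards.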
Aggregation Using the Linguistic Weighted Average and Interval Type-2 Fuzzy Sets The focus of this paper is the linguistic weighted average (LWA), where the weights are always words modeled as interval type-2 fuzzy sets (IT2 FSs), and the attributes may also (but do not have to) be words modeled as IT2 FSs; consequently, the output of the LWA is an IT2 FS. The LWA can be viewed as a generalization of the fuzzy weighted average (FWA) where the type-1 fuzzy inputs are replaced by IT2 FSs. This paper presents the theory, algorithms, and an application of the LWA. It is shown that finding the LWA can be decomposed into finding two FWAs. Since the LWA can model more uncertainties, it should have wide applications in distributed and hierarchical decision-making.
Accelerated iterative hard thresholding The iterative hard thresholding algorithm (IHT) is a powerful and versatile algorithm for compressed sensing and other sparse inverse problems. The standard IHT implementation faces several challenges when applied to practical problems. The step-size and sparsity parameters have to be chosen appropriately and, as IHT is based on a gradient descend strategy, convergence is only linear. Whilst the choice of the step-size can be done adaptively as suggested previously, this letter studies the use of acceleration methods to improve convergence speed. Based on recent suggestions in the literature, we show that a host of acceleration methods are also applicable to IHT. Importantly, we show that these modifications not only significantly increase the observed speed of the method, but also satisfy the same strong performance guarantees enjoyed by the original IHT method.
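A minimal sketch of the baseline (unaccelerated) IHT iteration that this letter starts from, assuming a fixed step size derived from the spectral norm of A; the problem sizes and random seed are illustrative, and the acceleration schemes themselves are not reproduced here.

```python
import numpy as np

def iht(A, y, s, mu, n_iters=500):
    """Plain iterative hard thresholding: gradient step, then keep the s largest entries."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        g = x + mu * A.T @ (y - A @ x)      # gradient step on 0.5 * ||y - A x||^2
        keep = np.argsort(np.abs(g))[-s:]   # support of the s largest magnitudes
        x = np.zeros_like(g)
        x[keep] = g[keep]                   # hard thresholding to sparsity s
    return x

# Small synthetic compressed-sensing instance (illustrative sizes and seed).
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256)) / np.sqrt(64)
x_true = np.zeros(256)
x_true[rng.choice(256, size=5, replace=False)] = rng.standard_normal(5)
y = A @ x_true
x_hat = iht(A, y, s=5, mu=1.0 / np.linalg.norm(A, 2) ** 2)
print(np.linalg.norm(x_hat - x_true))  # small if the sparse signal is recovered
```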
A New Methodology for Interconnect Parasitics Extraction Considering Photo-Lithography Effects Even with the wide adoption of resolution enhancement techniques in sub-wavelength lithography, the geometry of the fabricated interconnect is still quite different from the drawn one. Existing Layout Parasitic Extraction (LPE) tools assume perfect geometry, thus introducing significant error in the extracted parasitic models, which in turn causes significant error in timing verification and signal integrity analysis. Our simulation shows that the RC parasitics extracted from perfect GDS-II geometry can be as much as 20% different from those extracted from the post litho/etching simulation geometry. This paper presents a new LPE methodology and related fast algorithms for interconnect parasitic extraction under photo-lithographic effects. Our methodology is compatible with the existing design flow. Experimental results show that the proposed methods are accurate and efficient.
Thermal switching error versus delay tradeoffs in clocked QCA circuits The quantum-dot cellular automata (QCA) model offers a novel nano-domain computing architecture by mapping the intended logic onto the lowest energy configuration of a collection of QCA cells, each with two possible ground states. A four-phased clocking scheme has been suggested to keep the computations at the ground state throughout the circuit. This clocking scheme, however, induces latency or delay in the transmission of information from input to output. In this paper, we study the interplay of computing error behavior with delay or latency of computation induced by the clocking scheme. Computing errors in QCA circuits can arise due to the failure of the clocking scheme to switch portions of the circuit to the ground state with change in input. Some of these non-ground states will result in output errors and some will not. The larger the size of each clocking zone, i.e., the greater the number of cells in each zone, the more the probability of computing errors. However, larger clocking zones imply faster propagation of information from input to output, i.e., reduced delay. Current QCA simulators compute just the ground state configuration of a QCA arrangement. In this paper, we offer an efficient method to compute the N-lowest energy modes of a clocked QCA circuit. We model the QCA cell arrangement in each zone using a graph-based probabilistic model, which is then transformed into a Markov tree structure defined over subsets of QCA cells. This tree structure allows us to compute the N-lowest energy configurations in an efficient manner by local message passing. We analyze the complexity of the model and show it to be polynomial in terms of the number of cells, assuming a finite neighborhood of influence for each QCA cell, which is usually the case. The overall low-energy spectrum of multiple clocking zones is constructed by concatenating the low-energy spectra of the individual clocking zones. We demonstrate how the model can be used to study the tradeoff between switching errors and clocking zones.
1.100659
0.1
0.1
0.1
0.033333
0.00037
0.000196
0.00008
0.000017
0
0
0
0
0
Optimized Cooperative and Random Schedulings Packet Transmissions and Comparison of Their Parameters. Optimized cooperative scheduling (OCS) increases the network capacity of the wireless ad hoc network by optimizing relay node selection. This increases the capacity by locally dividing a long link into multiple hops and avoiding node failure. OCS decides the best node for the transfer of the file by evaluating its objective function and forming the interference set of the relay node. Random scheduling generalizes the randomization framework to the Signal to Interference plus Noise Ratio rate-based interference model by dealing with the power allocation problem. It develops a distributed gossip comparison mechanism with power allocation to maximize the throughput. The comparison of the wireless scheduling schemes is done in terms of transmission rate, throughput, jitter, time to schedule packets, latency, end-to-end delay and bandwidth. The performance analysis shows that OCS has a high transmission rate, low latency and uses less bandwidth. Random scheduling has high throughput, less time for scheduling of packets, low jitter and a smaller end-to-end delay.
Mining Sequential Patterns of Event Streams in a Smart Home Application.
Incremental Temporal Pattern Mining Using Efficient Batch-Free Stream Clustering. This paper addresses the problem of temporal pattern mining from multiple data streams containing temporal events. Temporal events are considered as real-world events aligned with comprehensive starting and ending timing information rather than simple integer timestamps. Predefined relations, such as "before" and "after", describe the heterogeneous relationships hidden in temporal data with limited diversity. In this work, the relationships among events are learned dynamically from the temporal information. Each event is treated as an object with a label and numerical attributes. An online-offline model is used as the primary structure for analyzing the evolving multiple streams. Different distance functions on temporal events and sequences can be applied depending on the application scenario. A prefix tree is introduced for a fast incremental pattern update. Events in the real world usually persist for some period. It is more natural to model events as intervals with temporal information rather than as points on the timeline. Based on the representation proposed in this work, our approach can also be extended to handle interval data. Experiments show how the method, with richer information and more accurate results than the state-of-the-art, processes both point-based and interval-based event streams efficiently.
Improving Bandwidth Utilization of Intermittent Links in Highly Dynamic Ad Hoc Networks. Non-uniform node densities occur and intermittent links exist in highly dynamic ad hoc networks. To fit these networks, researchers usually combine delay tolerant network (DTN) routing protocols and mobile ad hoc network (MANET) routing protocols. The DTN protocol separates end-to-end links into multiple DTN links, which consist of multi-hop MANET links. Determining how to arrange DTN links and MANET links from source to end and dealing with intermittent links are performance issues, because node density ranges from sparse to dense and MANET protocols are much lighter than DTN protocols. This paper presents HMDTN, an application-network cross-layer framework, to solve the previously mentioned issues. The application layer in HMDTN supports disrupt tolerance with a large data buffer while adjusting the routing table on the basis of the connection state of links (link is disrupted or recovered), which are collected by the network layer. As a result, HMDTN increases the bandwidth utilization of intermittent links without compromising the efficiency of the MANET protocol in a reliable network. The HMDTN prototype was implemented based on Bytewalla (a Java version of DTN2) and Netfilter-based AODV. Experiments on Android devices show that unlike AODV and Epidemic, HMDTN increases the bandwidth utilization of intermittent links with a negligible increase of network overhead. In particular, HMDTN maintains the network throughput as high as regular network conditions even if the network undergoes relatively long-term (dozens of seconds or few minutes) data link disruptions.
A Testbed for Evaluating Video Streaming Services in LTE. With the deployment of the first commercial long term evolution (LTE) networks, mobile operators need to understand how quality of service (QoS) network indicators and codec parameters affect subjective quality in video streaming services as perceived by customers. In this paper, the development of a testbed for evaluating the quality of experience (QoE) of 3D video streaming service over LTE is described. The proposed system consists of three elements: a streaming server, an internet protocol-level mobile network emulator, based on NetEm tool, and a streaming client. The main contribution of this testbed is the modification of NetEm code to model the impact of time correlation between packet arrivals on the packet delay in a video stream. In the testbed, different network conditions are configured by setting network emulator parameters based on the results obtained by a system-level LTE simulator. Results show how average network load and user position inside a cell have a strong impact on the QoS and QoE perceived by the end video user.
First-order incremental block-based statistical timing analysis Variability in digital integrated circuits makes timing verification an extremely challenging task. In this paper, a canonical first order delay model is proposed that takes into account both correlated and independent randomness. A novel linear-time block-based statistical timing algorithm is employed to propagate timing quantities like arrival times and required arrival times through the timing graph in this canonical form. At the end of the statistical timing, the sensitivities of all timing quantities to each of the sources of variation are available. Excessive sensitivities can then be targeted by manual or automatic optimization methods to improve the robustness of the design. This paper also reports the first incremental statistical timer in the literature which is suitable for use in the inner loop of physical synthesis or other optimization programs. The third novel contribution of this paper is the computation of local and global criticality probabilities. For a very small cost in CPU time, the probability of each edge or node of the timing graph being critical is computed. Numerical results are presented on industrial ASIC chips with over two million logic gates.
Estimators and tail bounds for dimension reduction in lα (0 < α ≤ 2) using stable random projections. The method of stable random projections is popular in data stream computations, data mining, information retrieval, and machine learning, for efficiently computing the lα (0 < α ≤ 2) distances using a small (memory) space, in one pass of the data. We propose algorithms based on (1) the geometric mean estimator, for all 0 <α ≤ 2, and (2) the harmonic mean estimator, only for small α (e.g., α < 0.344). Compared with the previous classical work [27], our main contributions include: • The general sample complexity bound for α ≠ 1,2. For α = 1, [27] provided a nice argument based on the inverse of Cauchy density about the median, leading to a sample complexity bound, although they did not provide the constants and their proof restricted ε to be "small enough." For general α ≠ 1, 2, however, the task becomes much more difficult. [27] provided the "conceptual promise" that the sample complexity bound similar to that for α = 1 should exist for general α, if a "non-uniform algorithm based on t-quantile" could be implemented. Such a conceptual algorithm was only for supporting the arguments in [27], not a real implementation. We consider this is one of the main problems left open in [27]. In this study, we propose a practical algorithm based on the geometric mean estimator and derive the sample complexity bound for all 0 < α ≤ 2. • The practical and optimal algorithm for α = 0+ The l0 norm is an important case. Stable random projections can provide an approximation to the l0 norm using α → 0+. We provide an algorithm based on the harmonic mean estimator, which is simple and statistically optimal. Its tail bounds are sharper than the bounds derived based on the geometric mean. We also discover a (possibly surprising) fact: in boolean data, stable random projections using α = 0+ with the harmonic mean estimator will be about twice as accurate as (l2) normal random projections. Because high-dimensional boolean data are common, we expect this fact will be practically quite useful. • The precise theoretical analysis and practical implications We provide the precise constants in the tail bounds for both the geometric mean and harmonic mean estimators. We also provide the variances (either exact or asymptotic) for the proposed estimators. These results can assist practitioners to choose sample sizes accurately.
Scale-Space and Edge Detection Using Anisotropic Diffusion A new definition of scale-space is suggested, and a class of algorithms used to realize a diffusion process is introduced. The diffusion coefficient is chosen to vary spatially in such a way as to encourage intraregion smoothing rather than interregion smoothing. It is shown that the 'no new maxima should be generated at coarse scales' property of conventional scale space is preserved. As the region boundaries in the approach remain sharp, a high-quality edge detector which successfully exploits global information is obtained. Experimental results are shown on a number of images. Parallel hardware implementations are made feasible because the algorithm involves elementary, local operations replicated over the image.
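A minimal sketch of a Perona-Malik-style diffusion step using one of the exponential edge-stopping functions; the parameter values and the periodic boundary handling via np.roll are illustrative simplifications, not the paper's exact scheme.

```python
import numpy as np

def anisotropic_diffusion(img, n_iters=20, kappa=20.0, lam=0.2):
    """Diffuse strongly inside smooth regions, weakly across strong edges."""
    u = np.asarray(img, dtype=float).copy()
    for _ in range(n_iters):
        # Differences to the four neighbours (periodic boundary via np.roll, for brevity).
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping conductance
        u = u + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Illustrative use: a noisy step edge stays sharp while the flat regions are smoothed.
step = np.hstack([np.zeros((32, 32)), 100.0 * np.ones((32, 32))])
noisy = step + np.random.default_rng(0).normal(0.0, 5.0, step.shape)
smoothed = anisotropic_diffusion(noisy)
```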
Reconstruction of a low-rank matrix in the presence of Gaussian noise. This paper addresses the problem of reconstructing a low-rank signal matrix observed with additive Gaussian noise. We first establish that, under mild assumptions, one can restrict attention to orthogonally equivariant reconstruction methods, which act only on the singular values of the observed matrix and do not affect its singular vectors. Using recent results in random matrix theory, we then propose a new reconstruction method that aims to reverse the effect of the noise on the singular value decomposition of the signal matrix. In conjunction with the proposed reconstruction method we also introduce a Kolmogorov–Smirnov based estimator of the noise variance.
Proceedings of the 41st Design Automation Conference, DAC 2004, San Diego, CA, USA, June 7-11, 2004
Compressive sampling for streaming signals with sparse frequency content Compressive sampling (CS) has emerged as a significant signal processing framework to acquire and reconstruct sparse signals at rates significantly below the Nyquist rate. However, most of the CS development to date has focused on finite-length signals and representations. In this paper we discuss a streaming CS framework and greedy reconstruction algorithm, the Streaming Greedy Pursuit (SGP), to reconstruct signals with sparse frequency content. Our proposed sampling framework and the SGP are explicitly intended for streaming applications and signals of unknown length. The measurement framework we propose is designed to be causal and implementable using existing hardware architectures. Furthermore, our reconstruction algorithm provides specific computational guarantees, which makes it appropriate for real-time system implementations. Our experimental results on very long signals demonstrate the good performance of the SGP and validate our approach.
QoE Aware Service Delivery in Distributed Environment Service delivery and customer satisfaction are strongly related items for a correct commercial management platform. Technical aspects targeting this issue relate to QoS parameters that can be handled by the platform, at least partially. Subjective psychological issues and human cognitive aspects are typically not considered, yet they directly determine the Quality of Experience (QoE). These factors finally have to be considered as key input for a successful business operation between a customer and a company. In our work, a multi-disciplinary approach is taken to propose a QoE interaction model based on theoretical results from various fields including psychology, cognitive sciences, sociology, service ecosystems and information technology. In this paper a QoE evaluator is described for assessing service delivery in a distributed and integrated environment on a per-user and per-service basis.
A Strategic Benchmarking Process For Identifying The Best Practice Collaborative Electronic Government Architecture The rapid growth of the Internet has given rise to electronic government (e-government) which enhances communication, coordination, and collaboration between government, business partners, and citizens. An increasing number of national, state, and local government agencies are realizing the benefits of e-government. The transformation of policies, procedures, and people, which is the essence of e-government, cannot happen by accident. An e-government architecture is needed to structure the system, its functions, its processes, and the environment within which it will live. When confronted by the range of e-government architectures, government agencies struggle to identify the one most appropriate to their needs. This paper proposes a novel strategic benchmarking process utilizing the simple additive weighting method (SAW), real options analysis (ROA), and fuzzy sets to benchmark the best practice collaborative e-government architectures based on three perspectives: Government-to-Citizen (G2C), Government-to-Business (G2B), and Government-to-Government (G2G). The contribution of the proposed method is fourfold: (1) it addresses the gaps in the e-government literature on the effective and efficient assessment of the e-government architectures; (2) it provides a comprehensive and systematic framework that combines ROA with SAW; (3) it considers fuzzy logic and fuzzy sets to represent ambiguous, uncertain or imprecise information; and (4) it is applicable to international, national, Regional, state/provincial, and local e-government levels.
On Fuzziness, Its Homeland and Its Neighbour
1.2
0.2
0.2
0.2
0.066667
0
0
0
0
0
0
0
0
0
Subcell resolution in simplex stochastic collocation for spatial discontinuities Subcell resolution has been used in the Finite Volume Method (FVM) to obtain accurate approximations of discontinuities in the physical space. Stochastic methods are usually based on local adaptivity for resolving discontinuities in the stochastic dimensions. However, the adaptive refinement in the probability space is ineffective in the non-intrusive uncertainty quantification framework, if the stochastic discontinuity is caused by a discontinuity in the physical space with a random location. The dependence of the discontinuity location in the probability space on the spatial coordinates then results in a staircase approximation of the statistics, which leads to first-order error convergence and an underprediction of the maximum standard deviation. To avoid these problems, we introduce subcell resolution into the Simplex Stochastic Collocation (SSC) method for obtaining a truly discontinuous representation of random spatial discontinuities in the interior of the cells discretizing the probability space. The presented SSC-SR method is based on resolving the discontinuity location in the probability space explicitly as function of the spatial coordinates and extending the stochastic response surface approximations up to the predicted discontinuity location. The applications to a linear advection problem, the inviscid Burgers' equation, a shock tube problem, and the transonic flow over the RAE 2822 airfoil show that SSC-SR resolves random spatial discontinuities with multiple stochastic and spatial dimensions accurately using a minimal number of samples.
Predictive RANS simulations via Bayesian Model-Scenario Averaging The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier-Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set which are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary-layers subject to various pressure gradients. For all considered prediction scenarios the standard-deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.
Numerical solution of the Stratonovich- and Ito-Euler equations: Application to the stochastic piston problem We consider a piston with a velocity perturbed by Brownian motion moving into a straight tube filled with a perfect gas at rest. The shock generated ahead of the piston can be located by solving the one-dimensional Euler equations driven by white noise using the Stratonovich or Ito formulations. We approximate the Brownian motion with its spectral truncation and subsequently apply stochastic collocation using either sparse grid or the quasi-Monte Carlo (QMC) method. In particular, we first transform the Euler equations with an unsteady stochastic boundary into stochastic Euler equations over a fixed domain with a time-dependent stochastic source term. We then solve the transformed equations by splitting them up into two parts, i.e., a 'deterministic part' and a 'stochastic part'. Numerical results verify the Stratonovich-Euler and Ito-Euler models against stochastic perturbation results, and demonstrate the efficiency of sparse grid and QMC for small and large random piston motions, respectively. The variance of shock location of the piston grows cubically in the case of white noise in contrast to colored noise reported in [1], where the variance of shock location grows quadratically with time for short times and linearly for longer times.
Simplex-stochastic collocation method with improved scalability The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with dimensions higher than 5. The main purpose of this paper is to identify bottlenecks, and to improve upon this bad scalability. In order to do so, we propose an alternative interpolation stencil technique based upon the Set-Covering problem, and we integrate the SSC method in the High-Dimensional Model-Reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly-distributed simplex sampling.
A Posteriori Error Analysis of Parameterized Linear Systems Using Spectral Methods We develop computable a posteriori error estimates for the pointwise evaluation of linear functionals of a solution to a parameterized linear system of equations. These error estimates are based on a variational analysis applied to polynomial spectral methods for forward and adjoint problems. We also use this error estimate to define an improved linear functional and we prove that this improved functional converges at a much faster rate than the original linear functional given a pointwise convergence assumption on the forward and adjoint solutions. The advantage of this method is that we are able to use low order spectral representations for the forward and adjoint systems to cheaply produce linear functionals with the accuracy of a higher order spectral representation. The method presented in this paper also applies to the case where only the convergence of the spectral approximation to the adjoint solution is guaranteed. We present numerical examples showing that the error in this improved functional is often orders of magnitude smaller. We also demonstrate that in higher dimensions, the computational cost required to achieve a given accuracy is much lower using the improved linear functional.
Sparse grid collocation schemes for stochastic natural convection problems In recent years, there has been an interest in analyzing and quantifying the effects of random inputs in the solution of partial differential equations that describe thermal and fluid flow problems. Spectral stochastic methods and Monte-Carlo based sampling methods are two approaches that have been used to analyze these problems. As the complexity of the problem or the number of random variables involved in describing the input uncertainties increases, these approaches become highly impractical from implementation and convergence points-of-view. This is especially true in the context of realistic thermal flow problems, where uncertainties in the topology of the boundary domain, boundary flux conditions and heterogeneous physical properties usually require high-dimensional random descriptors. The sparse grid collocation method based on the Smolyak algorithm offers a viable alternate method for solving high-dimensional stochastic partial differential equations. An extension of the collocation approach to include adaptive refinement in important stochastic dimensions is utilized to further reduce the numerical effort necessary for simulation. We show case the collocation based approach to efficiently solve natural convection problems involving large stochastic dimensions. Equilibrium jumps occurring due to surface roughness and heterogeneous porosity are captured. Comparison of the present method with the generalized polynomial chaos expansion and Monte-Carlo methods are made.
High-Order Collocation Methods for Differential Equations with Random Inputs Recently there has been a growing interest in designing efficient methods for the solution of ordinary/partial differential equations with random inputs. To this end, stochastic Galerkin methods appear to be superior to other nonsampling methods and, in many cases, to several sampling methods. However, when the governing equations take complicated forms, numerical implementations of stochastic Galerkin methods can become nontrivial and care is needed to design robust and efficient solvers for the resulting equations. On the other hand, the traditional sampling methods, e.g., Monte Carlo methods, are straightforward to implement, but they do not offer convergence as fast as stochastic Galerkin methods. In this paper, a high-order stochastic collocation approach is proposed. Similar to stochastic Galerkin methods, the collocation methods take advantage of an assumption of smoothness of the solution in random space to achieve fast convergence. However, the numerical implementation of stochastic collocation is trivial, as it requires only repetitive runs of an existing deterministic solver, similar to Monte Carlo methods. The computational cost of the collocation methods depends on the choice of the collocation points, and we present several feasible constructions. One particular choice, based on sparse grids, depends weakly on the dimensionality of the random space and is more suitable for highly accurate computations of practical applications with large dimensional random inputs. Numerical examples are presented to demonstrate the accuracy and efficiency of the stochastic collocation methods.
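The key practical point of stochastic collocation, repetitive runs of an existing deterministic solver at chosen collocation points, can be sketched in a few lines. This is a minimal one-dimensional, non-sparse illustration; the stand-in solver, the node count, and the uniform input distribution are assumptions for the example only.

```python
# Minimal sketch of stochastic collocation in one random dimension: run an
# existing deterministic solver at Gauss-Legendre collocation points and
# recover output statistics by quadrature. The "solver" below is a stand-in.
import numpy as np

def deterministic_solver(xi):
    # Placeholder for an expensive deterministic code evaluated at realization xi.
    return np.exp(0.3 * xi) + xi**2

nodes, weights = np.polynomial.legendre.leggauss(8)  # collocation points and weights
pdf_weight = 0.5                                     # density of the U(-1, 1) input

samples = np.array([deterministic_solver(x) for x in nodes])
mean = np.sum(weights * pdf_weight * samples)
second_moment = np.sum(weights * pdf_weight * samples**2)
variance = second_moment - mean**2
print(mean, variance)
```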
An algorithm for pronominal anaphora resolution This paper presents an algorithm for identifying the noun phrase antecedents of third person pronouns and lexical anaphors (reflexives and reciprocals). The algorithm applies to the syntactic representations generated by McCord's Slot Grammar parser and relies on salience measures derived from syntactic structure and a simple dynamic model of attentional state. Like the parser, the algorithm is implemented in Prolog. The authors have tested it extensively on computer manual texts and conducted a blind test on manual text containing 360 pronoun occurrences. The algorithm successfully identifies the antecedent of the pronoun for 86% of these pronoun occurrences. The relative contributions of the algorithm's components to its overall success rate in this blind test are examined. Experiments were conducted with an enhancement of the algorithm that contributes statistically modelled information concerning semantic and real-world relations to the algorithm's decision procedure. Interestingly, this enhancement only marginally improves the algorithm's performance (by 2%). The algorithm is compared with other approaches to anaphora resolution that have been proposed in the literature. In particular, the search procedure of Hobbs' algorithm was implemented in the Slot Grammar framework and applied to the sentences in the blind test set. The authors' algorithm achieves a success rate 4% higher than that of Hobbs' algorithm. The relation of the algorithm to the centering approach is discussed, as well as to models of anaphora resolution that invoke a variety of informational factors in ranking antecedent candidates.
A generic quantitative relationship between quality of experience and quality of service Quality of experience ties together user perception, experience, and expectations to application and network performance, typically expressed by quality of service parameters. Quantitative relationships between QoE and QoS are required in order to be able to build effective QoE control mechanisms onto measurable QoS parameters. Against this background, this article proposes a generic formula in which QoE and QoS parameters are connected through an exponential relationship, called the IQX hypothesis. The formula relates changes of QoE with respect to QoS to the current level of QoE, is simple to match, and its limit behaviors are straightforward to interpret. The article validates the IQX hypothesis for streaming services, where QoE in terms of Mean Opinion Scores is expressed as a function of loss and reordering ratio, the latter of which is caused by jitter. For web surfing as the second application area, matchings provided by the IQX hypothesis are shown to outperform previously published logarithmic functions. We conclude that the IQX hypothesis is a strong candidate to be taken into account when deriving relationships between QoE and QoS parameters.
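A hedged sketch of fitting an IQX-style exponential curve to measured (QoS, QoE) pairs follows. The synthetic data points and starting values are illustrative, and QoE = alpha*exp(-beta*QoS) + gamma is used as a common parameterization of the exponential relationship named in the abstract.

```python
# Sketch: fit an exponential QoE/QoS relationship to (synthetic) measurements.
import numpy as np
from scipy.optimize import curve_fit

def iqx(qos, alpha, beta, gamma):
    # Assumed parameterization of the exponential QoE/QoS relationship.
    return alpha * np.exp(-beta * qos) + gamma

# Synthetic (QoS impairment, MOS-like QoE) observations for illustration only.
qos = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
qoe = np.array([4.5, 3.9, 3.4, 2.6, 1.9, 1.4])

params, _ = curve_fit(iqx, qos, qoe, p0=(3.0, 0.5, 1.0))
print(dict(zip(("alpha", "beta", "gamma"), params)))
```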
Real-time constrained TCP-compatible rate control for video over the Internet This paper describes a rate control algorithm that captures not only the behavior of TCP's congestion control avoidance mechanism but also the delay constraints of real-time streams. Building upon the TFRC protocol, a new protocol has been designed for estimating the bandwidth prediction model parameters. Making use of RTP and RTCP, this protocol better takes into account the characteristics of multimedia flows (variable packet size, delay, etc.). Given the current channel state estimated by the above protocol, encoder and decoder buffer states as well as delay constraints of the real-time video source are translated into encoder rate constraints. This global rate control model, coupled with an H.263+ loss resilient video compression algorithm, has been extensively experimented with on various Internet links. The experiments clearly demonstrate the benefits of (1) the new protocol used for estimating the bandwidth prediction model parameters, adapted to multimedia flow characteristics, and (2) the global rate control model encompassing source buffers and end-to-end delay characteristics. The overall system significantly reduces source timeouts, and hence minimizes the expected distortion, for a comparable usage of the TCP-compatible predicted bandwidth.
Real-time compressive tracking It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. While much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist a sufficient amount of data for online algorithms to learn from at the outset. Second, online tracking algorithms often encounter the drift problem. As a result of self-taught learning, mis-aligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from the multi-scale image feature space with data-independent basis. Our appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is adopted to efficiently extract the features for the appearance model. We compress samples of foreground targets and the background using the same sparse measurement matrix. The tracking task is formulated as a binary classification via a naive Bayes classifier with online update in the compressed domain. The proposed compressive tracking algorithm runs in real-time and performs favorably against state-of-the-art algorithms on challenging sequences in terms of efficiency, accuracy and robustness.
Fuzzy spatial relationships for image processing and interpretation: a review In spatial reasoning, relationships between spatial entities play a major role. In image interpretation, computer vision and structural recognition, the management of imperfect information and of imprecision constitutes a key point. This calls for the framework of fuzzy sets, which exhibits nice features to represent spatial imprecision at different levels, imprecision in knowledge and knowledge representation, and which provides powerful tools for fusion, decision-making and reasoning. In this paper, we review the main fuzzy approaches for defining spatial relationships including topological (set relationships, adjacency) and metrical relations (distances, directional relative position).
Efficient Euclidean projections in linear time We consider the problem of computing the Euclidean projection of a vector of length n onto a closed convex set including the l1 ball and the specialized polyhedra employed in (Shalev-Shwartz & Singer, 2006). These problems have played building block roles in solving several l1-norm based sparse learning problems. Existing methods have a worst-case time complexity of O(n log n). In this paper, we propose to cast both Euclidean projections as root finding problems associated with specific auxiliary functions, which can be solved in linear time via bisection. We further make use of the special structure of the auxiliary functions, and propose an improved bisection algorithm. Empirical studies demonstrate that the proposed algorithms are much more efficient than the competing ones for computing the projections.
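The root-finding view described above can be sketched with plain bisection; the paper's contribution is an improved bisection that exploits the structure of the auxiliary function, which this sketch does not reproduce. The example vector and ball radius are illustrative.

```python
# Minimal sketch: Euclidean projection onto the l1 ball {x : ||x||_1 <= z} cast
# as a root-finding problem. Find theta with sum_i max(|v_i| - theta, 0) = z by
# bisection, then soft-threshold v at theta.
import numpy as np

def project_l1_ball(v, z, tol=1e-10):
    abs_v = np.abs(v)
    if abs_v.sum() <= z:              # already inside the ball
        return v.copy()
    lo, hi = 0.0, abs_v.max()         # the root theta lies in this interval
    while hi - lo > tol:
        theta = 0.5 * (lo + hi)
        if np.maximum(abs_v - theta, 0.0).sum() > z:
            lo = theta                # threshold too small, shrink more
        else:
            hi = theta
    theta = 0.5 * (lo + hi)
    return np.sign(v) * np.maximum(abs_v - theta, 0.0)

v = np.array([0.8, -1.5, 0.3, 2.0])
x = project_l1_ball(v, 1.0)
print(x, np.abs(x).sum())             # l1 norm of the projection is (approximately) 1
```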
Implementing Competitive Learning in a Quantum System Ideas from quantum computation are applied to the field of neural networks to produce competitive learning in a quantum system. The resulting quantum competitive learner has a prototype storage capacity that is exponentially greater than that of its classical counterpart. Further, empirical results from simulation of the quantum competitive learning system on real-world data sets demonstrate the quantum system's potential for excellent performance.
Scores: 1.105, 0.11, 0.1, 0.05, 0.02625, 0.001505, 0.000468, 0, 0, 0, 0, 0, 0, 0
Weapon selection using the AHP and TOPSIS methods under fuzzy environment The weapon selection problem is a strategic issue and has a significant impact on the efficiency of defense systems. On the other hand, selecting the optimal weapon among many alternatives is a multi-criteria decision-making (MCDM) problem. This paper develops an evaluation model based on the analytic hierarchy process (AHP) and the technique for order performance by similarity to ideal solution (TOPSIS), to help the actors in defence industries for the selection of optimal weapon in a fuzzy environment where the vagueness and subjectivity are handled with linguistic values parameterized by triangular fuzzy numbers. The AHP is used to analyze the structure of the weapon selection problem and to determine weights of the criteria, and fuzzy TOPSIS method is used to obtain final ranking. A real world application is conducted to illustrate the utilization of the model for the weapon selection problem. The application could be interpreted as demonstrating the effectiveness and feasibility of the proposed model.
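For orientation, the TOPSIS ranking step can be sketched in crisp form. This is not the paper's fuzzy AHP-TOPSIS procedure with triangular fuzzy numbers; the decision matrix, weights, and criterion types below are illustrative only.

```python
# Compact sketch of crisp TOPSIS: rank alternatives by relative closeness to the
# ideal solution. Weights would normally come from AHP; here they are assumed.
import numpy as np

def topsis(decision_matrix, weights, benefit):
    X = np.asarray(decision_matrix, dtype=float)
    norm = X / np.linalg.norm(X, axis=0)              # vector-normalize each criterion
    V = norm * weights                                # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti_ideal = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)        # distance to the ideal solution
    d_minus = np.linalg.norm(V - anti_ideal, axis=1)  # distance to the negative ideal
    return d_minus / (d_plus + d_minus)               # relative closeness, higher is better

# Three alternatives, three criteria (the last one is a cost criterion).
scores = topsis([[7, 9, 9], [8, 7, 8], [9, 6, 7]],
                weights=np.array([0.5, 0.3, 0.2]),
                benefit=np.array([True, True, False]))
print(scores.argsort()[::-1])                         # ranking, best first
```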
A fuzzy MCDM approach for evaluating banking performance based on Balanced Scorecard The paper proposed a Fuzzy Multiple Criteria Decision Making (FMCDM) approach for banking performance evaluation. Drawing on the four perspectives of a Balanced Scorecard (BSC), this research first summarized the evaluation indexes synthesized from the literature relating to banking performance. Then, for screening these indexes, 23 indexes fit for banking performance evaluation were selected through expert questionnaires. Furthermore, the relative weights of the chosen evaluation indexes were calculated by Fuzzy Analytic Hierarchy Process (FAHP). And the three MCDM analytical tools of SAW, TOPSIS, and VIKOR were respectively adopted to rank the banking performance and improve the gaps with three banks as an empirical example. The analysis results highlight the critical aspects of evaluation criteria as well as the gaps to improve banking performance for achieving aspired/desired level. It shows that the proposed FMCDM evaluation model of banking performance using the BSC framework can be a useful and effective assessment tool.
Developing global managers’ competencies using the fuzzy DEMATEL method Modern global managers are required to possess a set of competencies or multiple intelligences in order to meet pressing business challenges. Hence, expanding global managers’ competencies is becoming an important issue. Many scholars and specialists have proposed various competency models containing a list of required competencies. But it is hard for someone to master a broad set of competencies at the same time. Here arises an imperative issue on how to enrich global managers’ competencies by way of segmenting a set of competencies into some portions in order to facilitate competency development with a stepwise mode. To solve this issue involving the vagueness of human judgments, we have proposed an effective method combining fuzzy logic and Decision Making Trial and Evaluation Laboratory (DEMATEL) to segment required competencies for better promoting the competency development of global managers. Additionally, an empirical study is presented to illustrate the application of the proposed method.
Facility location selection using fuzzy topsis under group decisions This work presents a fuzzy TOPSIS model under group decisions for solving the facility location selection problem, where the ratings of various alternative locations under different subjective attributes and the importance weights of all attributes are assessed in linguistic values represented by fuzzy numbers. The objective attributes are transformed into dimensionless indices to ensure compatibility with the linguistic ratings of the subjective attributes. Furthermore, the membership function of the aggregation of the ratings and weights for each alternative location versus each attribute can be developed by interval arithmetic and α -cuts of fuzzy numbers. The ranking method of the mean of the integral values is applied to help derive the ideal and negative-ideal fuzzy solutions to complete the proposed fuzzy TOPSIS model. Finally, a numerical example demonstrates the computational process of the proposed model.
Extension of the ELECTRE method based on interval-valued fuzzy sets Decision-making is the process of finding the best option among the feasible alternatives. In classical multiple criteria decision-making (MCDM) methods, the ratings and the weights of the criteria are known precisely. However, if decision makers cannot reach an agreement on the method of defining linguistic variables based on the fuzzy sets, the interval-valued fuzzy set theory can provide a more accurate modeling. In this paper, the interval-valued fuzzy ELECTRE method is presented aiming at solving MCDM problems in which the weights of criteria are unequal, using interval-valued fuzzy set concepts. For the purpose of proving the validity of the proposed model, we present a numerical example and build a practical maintenance strategy selection problem.
Extension of fuzzy TOPSIS method based on interval-valued fuzzy sets Decision making is one of the most complex administrative processes in management. In circumstances where the members of the decision making team are uncertain in determining and defining the decision making criteria, fuzzy theory provides a proper tool to encounter with such uncertainties. However, if decision makers cannot reach an agreement on the method of defining linguistic variables based on the fuzzy sets, the interval-valued fuzzy set theory can provide a more accurate modeling. In this paper the interval-valued fuzzy TOPSIS method is presented aiming at solving MCDM problems in which the weights of criteria are unequal, using interval-valued fuzzy sets concepts.
An Interval Type-2 Fuzzy Logic System To Translate Between Emotion-Related Vocabularies This paper describes a novel experiment that demonstrates the feasibility of a fuzzy logic (FL) representation of emotion-related words used to translate between different emotional vocabularies. Type-2 fuzzy sets were encoded using input from web-based surveys that prompted users with emotional words and asked them to enter an interval using a double slider. The similarity of the encoded fuzzy sets was computed and it was shown that a reliable mapping can be made between a large vocabulary of emotional words and a smaller vocabulary of words naming seven emotion categories. Though the mapping results are comparable to Euclidean distance in the valence/activation/dominance space, the FL representation has several benefits that are discussed.
Incorporating filtering techniques in a fuzzy linguistic multi-agent model for information gathering on the web In (Computing with Words, Wiley, New York, 2001, p. 251; Soft Comput. 6 (2002) 320; Fuzzy Logic and The Internet, Physica-Verlag, Springer, Wurzburg, Berlin, 2003) we presented different fuzzy linguistic multi-agent models for helping users in their information gathering processes on the Web. In this paper we describe a new fuzzy linguistic multi-agent model that incorporates two information filtering techniques in its structure: a content-based filtering agent and a collaborative filtering agent. Both elements are introduced to increase the information filtering possibilities of multi-agent system on the Web and, in such a way, to improve its retrieval issues.
Type-2 Fuzzy Sets as Functions on Spaces For many readers and potential authors, type-2 (T2) fuzzy sets might be more readily understood if expressed by the use of standard mathematical notation and terminology. This paper, therefore, translates constructs associated with T2 fuzzy sets to the language of functions on spaces. Such translations may encourage researchers in different disciplines to investigate T2 fuzzy sets, thereby potentially broadening their application and strengthening the underlying theory.
A comparative analysis of score functions for multiple criteria decision making in intuitionistic fuzzy settings The purpose of this paper was to conduct a comparative study of score functions in multiple criteria decision analysis based on intuitionistic fuzzy sets. The concept of score functions has been conceptualized and widely applied to multi-criteria decision-making problems. There are several types of score functions that can identify the mixed results of positive and negative parts in a bi-dimensional framework of intuitionistic fuzzy sets. Considering various perspectives on score functions, the present study adopts an order of preference based on similarity to the ideal solution as the main structure to estimate the importance of different criteria and compute optimal multi-criteria decisions in intuitionistic fuzzy evaluation settings. An experimental analysis is conducted to examine the relationship between the results yielded by different score functions, considering the average Spearman correlation coefficients and contradiction rates. Furthermore, additional discussions clarify the relative differences in the ranking orders obtained from different combinations of numbers of alternatives and criteria as well as different importance conditions.
Numerical Integration using Sparse Grids We present and review algorithms for the numerical integration of multivariate functions defined over d-dimensional cubes using several variants of the sparse grid method first introduced by Smolyak [51]. In this approach, multivariate quadrature formulas are constructed using combinations of tensor products of suited one-dimensional formulas. The computing cost is almost independent of the dimension of the problem if the function under consideration has bounded mixed derivatives. We suggest...
Analysis Of Hierarchical B Pictures And MCTF In this paper, an investigation of H.264/MPEG4-AVC conforming coding with hierarchical B pictures is presented. We analyze the coding delay and memory requirements, describe details of an improved encoder control, and compare the coding efficiency for different coding delays. Additionally, the coding efficiency of hierarchical B picture coding is compared to that of MCTF-based coding by using identical coding structures and a similar degree of encoder optimization. Our simulation results show that, in comparison to the widely used IBBP... structure, coding gains of more than 1 dB can be achieved at the expense of an increased coding delay. Further experiments have shown that the coding efficiency gains obtained by using the additional update steps in MCTF coding are generally smaller than the losses resulting from the required open-loop encoder control.
Filters of residuated lattices and triangle algebras An important concept in the theory of residuated lattices and other algebraic structures used for formal fuzzy logic, is that of a filter. Filters can be used, amongst others, to define congruence relations. Specific kinds of filters include Boolean filters and prime filters. In this paper, we define several different filters of residuated lattices and triangle algebras and examine their mutual dependencies and connections. Triangle algebras characterize interval-valued residuated lattices.
Sparsity Regularization for Radon Measures In this paper we establish a regularization method for Radon measures. Motivated by sparse L1 regularization, we introduce a new regularization functional for the Radon norm, whose properties are then analyzed. Furthermore, we show well-posedness of Radon-measure-based sparsity regularization. Finally, we present numerical examples along with the underlying algorithmic and implementation details. Here, the number of iterations turns out to be of utmost importance for obtaining reliable reconstructions of sparse data with varying intensities.
Scores: 1.134566, 0.006891, 0.005927, 0.004143, 0.003057, 0.001709, 0.000494, 0.00017, 0.000075, 0.000016, 0, 0, 0, 0
Unified full implication algorithms of fuzzy reasoning This paper discusses the full implication inference of fuzzy reasoning. For all residuated implications induced by left continuous t-norms, unified α-triple I algorithms are constructed to generalize the known results. As corollaries of the main results of this paper, some special algorithms can be easily derived based on four important residuated implications. These algorithms would be beneficial to applications of fuzzy reasoning. Based on properties of residuated implications, the proofs of many conclusions are greatly simplified.
Contrast of a fuzzy relation In this paper we address a key problem in many fields: how a structured data set can be analyzed in order to take into account the neighborhood of each individual datum. We propose representing the dataset as a fuzzy relation, associating a membership degree with each element of the relation. We then introduce the concept of interval-contrast, a means of aggregating information contained in the immediate neighborhood of each element of the fuzzy relation. The interval-contrast measures the range of membership degrees present in each neighborhood. We use interval-contrasts to define the necessary properties of a contrast measure, construct several different local contrast and total contrast measures that satisfy these properties, and compare our expressions to other definitions of contrast appearing in the literature. Our theoretical results can be applied to several different fields. In an Appendix A, we apply our contrast expressions to photographic images.
Formalization of implication based fuzzy reasoning method Fuzzy reasoning includes a number of important inference methods for addressing uncertainty. This line of fuzzy reasoning forms a common logical foundation in various fields, such as fuzzy logic control and artificial intelligence. The full implication triple I method (a method based only on implication, TI method for short) for fuzzy reasoning was proposed in 1999 to improve the popular CRI method (a hybrid method based on implication and composition). The current paper delves further into the TI method, and a sound logical foundation is set for the TI method based on the monoidal t-norm based logical system MTL.
An Approach To Interval-Valued R-Implications And Automorphisms The aim of this work is to introduce an approach for interval-valued R-implications, which satisfy some analogous properties of R-implications. We show that the best interval representation of an R-implication that is obtained from a left continuous t-norm coincides with the interval-valued R-implication obtained from the best interval representation of such t-norm, whenever this is an inclusion monotonic interval function. This provides, under this condition, a nice characterization for the best interval representation of an R-implication, which is also an interval-valued R-implication. We also introduce interval-valued automorphisms as the best interval representations of automorphisms. It is shown that interval automorphisms act on interval R-implications, generating other interval R-implications.
Interval-valued Fuzzy Sets, Possibility Theory and Imprecise Probability Interval-valued fuzzy sets were proposed thirty years ago as a natural extension of fuzzy sets. Many variants of these mathematical objects exist, under various names. One popular variant proposed by Atanassov starts by the specification of membership and non-membership functions. This paper focuses on interpretations of such extensions of fuzzy sets, whereby the two membership functions that define them can be justified in the scope of some information representation paradigm. It particularly focuses on a recent proposal by Neumaier, who proposes to use interval-valued fuzzy sets under the name "clouds", as an efficient method to represent a family of probabilities. We show the connection between clouds, interval-valued fuzzy sets and possibility theory.
An approximate analogical reasoning approach based on similarity measures An approximate analogical reasoning schema (AARS) which exhibits the advantages of fuzzy set theory and analogical reasoning in expert systems development is described. The AARS avoids going through the conceptually complicated compositional rule of inference. It uses a similarity measure of fuzzy sets as well as a threshold to determine whether a rule should be fired and a modification function inferred from a similarity measure to deduce a consequent. Some numerical examples to illustrate the operation of the schema are presented. Finally, the proposed schema is compared with conventional expert systems and existing fuzzy expert systems
Preservation Of Properties Of Interval-Valued Fuzzy Relations The goal of this paper is to consider properties of the composition of interval-valued fuzzy relations, which were introduced by L.A. Zadeh in 1975. Fuzzy set theory turned out to be a useful tool to describe situations in which the data are imprecise or vague. Interval-valued fuzzy set theory is a generalization of fuzzy set theory, which was also introduced by Zadeh, in 1965. This paper generalizes some properties of interval matrices considered by Pekala (2007) to those of interval-valued fuzzy relations.
Some aspects of intuitionistic fuzzy sets We first discuss the significant role that duality plays in many aggregation operations involving intuitionistic fuzzy subsets. We then consider the extension to intuitionistic fuzzy subsets of a number of ideas from standard fuzzy subsets. In particular we look at the measure of specificity. We also look at the problem of alternative selection when decision criteria satisfaction is expressed using intuitionistic fuzzy subsets. We introduce a decision paradigm called the method of least commitment. We briefly look at the problem of defuzzification of intuitionistic fuzzy subsets.
A behavioural model for vague probability assessments I present a hierarchical uncertainty model that is able to represent vague probability assessments, and to make inferences based on them. This model can be given an interpretation in terms of the behaviour of a modeller in the face of uncertainty, and is based on Walley's theory of imprecise probabilities. It is formally closely related to Zadeh's fuzzy probabilities, but it has a different interpretation, and a different calculus. Through rationality (coherence) arguments, the hierarchical model is shown to lead to an imprecise first-order uncertainty model that can be used in decision making, and as a prior in statistical reasoning.
Fuzzy multiple criteria forestry decision making based on an integrated VIKOR and AHP approach Forestation and forest preservation in urban watersheds are issues of vital importance as forested watersheds not only preserve the water supplies of a city but also contribute to soil erosion prevention. The use of fuzzy multiple criteria decision aid (MCDA) in urban forestation has the advantage of rendering subjective and implicit decision making more objective and transparent. An additional merit of fuzzy MCDA is its ability to accommodate quantitative and qualitative data. In this paper an integrated VIKOR-AHP methodology is proposed to make a selection among the alternative forestation areas in Istanbul. In the proposed methodology, the weights of the selection criteria are determined by fuzzy pairwise comparison matrices of AHP. It is found that Omerli watershed is the most appropriate forestation district in Istanbul.
A new fuzzy connectivity class: application to structural recognition in images Fuzzy set theory constitutes a powerful tool that can lead to more robustness in problems such as image segmentation and recognition. This robustness results to some extent from the partial recovery of the continuity that is lost during digitization. Here we deal with fuzzy connectivity notions. We show that usual fuzzy connectivity definitions have some drawbacks, and we propose a new definition, based on the notion of hyperconnection, that exhibits better properties, in particular in terms of continuity. We illustrate the potential use of this definition in a recognition procedure based on connected filters. A max-tree representation is also used, in order to deal efficiently with the proposed connectivity.
Transcending Taxonomies with Generic and Agent-Based e-Hub Architectures
Knowledge Extraction From Neural Networks Using the All-Permutations Fuzzy Rule Base: The LED Display Recognition Problem A major drawback of artificial neural networks (ANNs) is their black-box character. Even when the trained network performs adequately, it is very difficult to understand its operation. In this letter, we use the mathematical equivalence between ANNs and a specific fuzzy rule base to extract the knowledge embedded in the network. We demonstrate this using a benchmark problem: the recognition of digits produced by a light emitting diode (LED) device. The method provides a symbolic and comprehensible description of the knowledge learned by the network during its training.
Overview of HEVC High-Level Syntax and Reference Picture Management The increasing proportion of video traffic in telecommunication networks puts an emphasis on efficient video compression technology. High Efficiency Video Coding (HEVC) is the forthcoming video coding standard that provides substantial bit rate reductions compared to its predecessors. In the HEVC standardization process, technologies such as picture partitioning, reference picture management, and parameter sets are categorized as “high-level syntax.” The design of the high-level syntax impacts the interface to systems and error resilience, and provides new functionalities. This paper presents an overview of the HEVC high-level syntax, including network abstraction layer unit headers, parameter sets, picture partitioning schemes, reference picture management, and supplemental enhancement information messages.
Scores: 1.063394, 0.043739, 0.04, 0.020617, 0.005496, 0.002503, 0.000649, 0.000108, 0.000013, 0.000001, 0, 0, 0, 0
The Vienna Definition Language
General formulation of formal grammars By extracting the basic properties common to the formal grammars that have appeared in the existing literature, we develop a general formulation of formal grammars. We define a pseudo grammar and derive from it the well-known probabilistic, fuzzy grammars and so on. Moreover, several interesting grammars such as ⊔∗ grammars, ⊔ ⊓ grammars, ⊔ ⊓ grammars, composite B-fuzzy grammars, and mixed fuzzy grammars, which have never appeared in any other paper before, are derived.
Artificial Paranoia
Equational Languages
Fuzzy Algorithms
A Definition of a Nonprobabilistic Entropy in the Setting of Fuzzy Sets Theory
Dynamic system modeling using a recurrent interval-valued fuzzy neural network and its hardware implementation This paper first proposes a new recurrent interval-valued fuzzy neural network (RIFNN) for dynamic system modeling. A new hardware implementation technique for the RIFNN using a field-programmable gate array (FPGA) chip is then proposed. The antecedent and consequent parts in an RIFNN use interval-valued fuzzy sets in order to increase the network noise resistance ability. A new recurrent structure is proposed in RIFNN, with the recurrent loops enabling it to handle dynamic system processing problems. An RIFNN is constructed from structure and parameter learning. For hardware implementation of the RIFNN, the pipeline technique and a new circuit for type-reduction operation are proposed to improve the chip performance. Simulations and comparisons with various feedforward and recurrent fuzzy neural networks verify the performance of the RIFNN under noisy conditions.
Development of a type-2 fuzzy proportional controller Studies have shown that PID controllers can be realized by type-1 (conventional) fuzzy logic systems (FLSs). However, the input-output mappings of such fuzzy PID controllers are fixed. The control performance would, therefore, vary if the system parameters are uncertain. This paper aims at developing a type-2 FLS to control a process whose parameters are uncertain. A method for designing type-2 triangular membership functions with the desired generalized centroid is first proposed. By using this type-2 fuzzy set to partition the output domain, a type-2 fuzzy proportional controller is obtained. It is shown that the type-2 fuzzy logic system is equivalent to a proportional controller that may assume a range of gains. Simulation results are presented to demonstrate that the performance of the proposed controller can be maintained even when the system parameters deviate from their nominal values.
A hybrid multi-criteria decision-making model for firms competence evaluation In this paper, we present a hybrid multi-criteria decision-making (MCDM) model to evaluate the competence of firms. Competence-based theory reveals that firm competencies are recognized from the exclusive and unique capabilities that each firm enjoys in the marketplace and are tightly intertwined within different business functions throughout the company. Therefore, competence in the firm is a composite of various attributes. Among them, many intangible and tangible attributes are difficult to measure. In order to overcome this issue, we bring fuzzy set theory into the measurement of performance. In this paper, we first calculate the weight of each criterion through the adaptive analytic hierarchy process (AHP) approach (A^3) method, and then we appraise the performance of firms via linguistic variables which are expressed as trapezoidal fuzzy numbers. In the next step we transform these fuzzy numbers into interval data by means of the α-cut. Then, considering different values for α, we rank the firms through the TOPSIS method with interval data. Since there are different ranks for different α values, we apply the linear assignment method to obtain the final rank for the alternatives.
Fuzzy decision making with immediate probabilities We developed a new decision-making model with probabilistic information and used the concept of the immediate probability to aggregate the information. This type of probability modifies the objective probability by introducing the attitudinal character of the decision maker. In doing so, we use the ordered weighted averaging (OWA) operator. When using this model, it is assumed that the information is given by exact numbers. However, this may not be the real situation found within the decision-making problem. Sometimes, the information is vague or imprecise and it is necessary to use another approach to assess the information, such as the use of fuzzy numbers. Then, the decision-making problem can be represented more completely because we now consider the best and worst possible scenarios, along with the possibility that some intermediate event (an internal value) will occur. We will use the fuzzy ordered weighted averaging (FOWA) operator to aggregate the information with the probabilities. As a result, we will get the Immediate Probability-FOWA (IP-FOWA) operator. We will study some of its main properties. We will apply the new approach to a decision-making problem about the selection of strategies.
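A minimal crisp sketch of aggregation with immediate probabilities follows. The paper's IP-FOWA operator acts on fuzzy numbers, so this only illustrates how OWA weights reweight the objective probabilities; all numbers are illustrative.

```python
# Sketch of crisp OWA aggregation with immediate probabilities: the OWA weight
# attached to each ordered position modifies the objective probability of the
# argument that lands in that position.
import numpy as np

def ip_owa(arguments, probabilities, owa_weights):
    order = np.argsort(arguments)[::-1]          # reorder arguments from largest to smallest
    b = np.asarray(arguments, dtype=float)[order]
    p = np.asarray(probabilities, dtype=float)[order]
    w = np.asarray(owa_weights, dtype=float)
    v = w * p / np.sum(w * p)                    # immediate probabilities
    return np.sum(v * b)

payoffs = [60, 30, 50, 20]                       # payoffs of one strategy under four states
probs = [0.3, 0.3, 0.2, 0.2]                     # objective probabilities of the states
weights = [0.1, 0.2, 0.3, 0.4]                   # OWA weights expressing a pessimistic attitude
print(ip_owa(payoffs, probs, weights))
```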
The algebra of fuzzy truth values The purpose of this paper is to give a straightforward mathematical treatment of algebras of fuzzy truth values for type-2 fuzzy sets.
Sparse Reconstruction by Separable Approximation Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularizer. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard ℓ2-ℓ1 case, our framework yields efficient solution techniques for other regularizers, such as an ℓ∞ norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard ℓ2-ℓ1 problem, as well as being efficient on problems with other separable regularization terms.
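The separable soft-thresholding subproblem at the heart of this framework can be illustrated with a plain ISTA-style iteration for the standard ℓ2-ℓ1 case. This sketch omits SpaRSA's adaptive step-size strategy; the problem sizes and regularization parameter are illustrative.

```python
# Minimal iterative-shrinkage sketch for min_x 0.5*||A x - y||_2^2 + tau*||x||_1,
# where each step solves a separable subproblem by soft-thresholding.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, tau, iters=200):
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                  # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * tau)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[3, 27, 64]] = [1.5, -2.0, 1.0]   # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(40)
print(np.flatnonzero(np.abs(ista(A, y, tau=0.1)) > 0.1))         # significant entries
```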
A fuzzy logic system for the detection and recognition of handwritten street numbers Fuzzy logic is applied to the problem of locating and reading street numbers in digital images of handwritten mail. A fuzzy rule-based system is defined that uses uncertain information provided by image processing and neural network-based character recognition modules to generate multiple hypotheses with associated confidence values for the location of the street number in an image of a handwritten address. The results of a blind test of the resultant system are presented to demonstrate the value of this new approach. The results are compared to those obtained using a neural network trained with backpropagation. The fuzzy logic system achieved higher performance rates
A possibilistic approach to the modeling and resolution of uncertain closed-loop logistics Closed-loop logistics planning is an important tactic for the achievement of sustainable development. However, the correlation among the demand, recovery, and landfilling makes the estimation of their rates uncertain and difficult. Although the fuzzy numbers can present such kinds of overlapping phenomena, the conventional method of defuzzification using level-cut methods could result in the loss of information. To retain complete information, the possibilistic approach is adopted to obtain the possibilistic mean and mean square imprecision index (MSII) of the shortage and surplus for uncertain factors. By applying the possibilistic approach, a multi-objective, closed-loop logistics model considering shortage and surplus is formulated. The two objectives are to reduce both the total cost and the root MSII. Then, a non-dominated solution can be obtained to support decisions with lower perturbation and cost. Also, the information on prediction interval can be obtained from the possibilistic mean and root MSII to support the decisions in the uncertain environment. This problem is non-deterministic polynomial-time hard, so a new algorithm based on the spanning tree-based genetic algorithm has been developed. Numerical experiments have shown that the proposed algorithm can yield comparatively efficient and accurate results.
Scores: 1.200022, 0.200022, 0.200022, 0.200022, 0.066689, 0.006263, 0.000033, 0.000026, 0.000023, 0.000019, 0.000014, 0, 0, 0
An optimal algorithm for approximate nearest neighbor searching fixed dimensions Consider a set S of n data points in real d-dimensional space, R^d, where distances are measured using any Minkowski metric. In nearest neighbor searching, we preprocess S into a data structure, so that given any query point q ∈ R^d, the closest point of S to q can be reported quickly. Given any positive real ε, a data point p is a (1+ε)-approximate nearest neighbor of q if its distance from q is within a factor of (1+ε) of the distance to the true nearest neighbor. We show that it is possible to preprocess a set of n points in R^d in O(dn log n) time and O(dn) space, so that given a query point q ∈ R^d and ε > 0, a (1+ε)-approximate nearest neighbor of q can be computed in O(c_{d,ε} log n) time, where c_{d,ε} ≤ d⌈1 + 6d/ε⌉^d is a factor depending only on dimension and ε. In general, we show that given an integer k ≥ 1, (1+ε)-approximations to the k nearest neighbors of q can be computed in additional O(kd log n) time.
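The (1+ε) guarantee can be exercised with an off-the-shelf kd-tree. The sketch below uses SciPy's cKDTree and its eps query parameter rather than the paper's data structure; the point cloud and query are synthetic.

```python
# Short illustration of (1+eps)-approximate nearest neighbor queries with SciPy.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
points = rng.random((10_000, 8))                   # n points in d = 8 dimensions
tree = cKDTree(points)

query = rng.random(8)
d_exact, i_exact = tree.query(query)               # exact nearest neighbor
d_apx, i_apx = tree.query(query, eps=0.5)          # (1 + 0.5)-approximate neighbor
assert d_apx <= (1 + 0.5) * d_exact                # the approximation guarantee
print(i_exact, i_apx, d_exact, d_apx)
```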
Multidimensional spectral hashing With the growing availability of very large image databases, there has been a surge of interest in methods based on "semantic hashing", i.e. compact binary codes of data-points so that the Hamming distance between codewords correlates with similarity. In reviewing and comparing existing methods, we show that their relative performance can change drastically depending on the definition of ground-truth neighbors. Motivated by this finding, we propose a new formulation for learning binary codes which seeks to reconstruct the affinity between datapoints, rather than their distances. We show that this criterion is intractable to solve exactly, but a spectral relaxation gives an algorithm where the bits correspond to thresholded eigenvectors of the affinity matrix, and as the number of datapoints goes to infinity these eigenvectors converge to eigenfunctions of Laplace-Beltrami operators, similar to the recently proposed Spectral Hashing (SH) method. Unlike SH, whose performance may degrade as the number of bits increases, the optimal code using our formulation is guaranteed to faithfully reproduce the affinities as the number of bits increases. We show that the number of eigenfunctions needed may increase exponentially with dimension, but introduce a "kernel trick" to allow us to compute with an exponentially large number of bits but using only memory and computation that grow linearly with dimension. Experiments show that MDSH outperforms the state of the art, especially in the challenging regime of small distance thresholds.
Latent semantic sparse hashing for cross-modal similarity search Similarity search methods based on hashing for effective and efficient cross-modal retrieval on large-scale multimedia databases with massive text and images have attracted considerable attention. The core problem of cross-modal hashing is how to effectively construct correlation between multi-modal representations which are heterogeneous intrinsically in the process of hash function learning. Analogous to Canonical Correlation Analysis (CCA), most existing cross-modal hash methods embed the heterogeneous data into a joint abstraction space by linear projections. However, these methods fail to bridge the semantic gap more effectively, and capture high-level latent semantic information which has been proved that it can lead to better performance for image retrieval. To address these challenges, in this paper, we propose a novel Latent Semantic Sparse Hashing (LSSH) to perform cross-modal similarity search by employing Sparse Coding and Matrix Factorization. In particular, LSSH uses Sparse Coding to capture the salient structures of images, and Matrix Factorization to learn the latent concepts from text. Then the learned latent semantic features are mapped to a joint abstraction space. Moreover, an iterative strategy is applied to derive optimal solutions efficiently, and it helps LSSH to explore the correlation between multi-modal representations efficiently and automatically. Finally, the unified hashcodes are generated through the high level abstraction space by quantization. Extensive experiments on three different datasets highlight the advantage of our method under cross-modal scenarios and show that LSSH significantly outperforms several state-of-the-art methods.
Sequential spectral learning to hash with multiple representations Learning to hash involves learning hash functions from a set of images for embedding high-dimensional visual descriptors into a similarity-preserving low-dimensional Hamming space. Most of existing methods resort to a single representation of images, that is, only one type of visual descriptors is used to learn a hash function to assign binary codes to images. However, images are often described by multiple different visual descriptors (such as SIFT, GIST, HOG), so it is desirable to incorporate these multiple representations into learning a hash function, leading to multi-view hashing. In this paper we present a sequential spectral learning approach to multi-view hashing where a hash function is sequentially determined by solving the successive maximization of local variances subject to decorrelation constraints. We compute multi-view local variances by α-averaging view-specific distance matrices such that the best averaged distance matrix is determined by minimizing its α-divergence from view-specific distance matrices. We also present a scalable implementation, exploiting a fast approximate k-NN graph construction method, in which α-averaged distances computed in small partitions determined by recursive spectral bisection are gradually merged in conquer steps until whole examples are used. Numerical experiments on Caltech-256, CIFAR-20, and NUS-WIDE datasets confirm the high performance of our method, in comparison to single-view spectral hashing as well as existing multi-view hashing methods.
Inter-media hashing for large-scale retrieval from heterogeneous data sources In this paper, we present a new multimedia retrieval paradigm to innovate large-scale search of heterogeneous multimedia data. It is able to return results of different media types from heterogeneous data sources, e.g., using a query image to retrieve relevant text documents or images from different data sources. This utilizes the widely available data from different sources and caters for the current users' demand of receiving a result list simultaneously containing multiple types of data to obtain a comprehensive understanding of the query's results. To enable large-scale inter-media retrieval, we propose a novel inter-media hashing (IMH) model to explore the correlations among multiple media types from different data sources and tackle the scalability issue. To this end, multimedia data from heterogeneous data sources are transformed into a common Hamming space, in which fast search can be easily implemented by XOR and bit-count operations. Furthermore, we integrate a linear regression model to learn hashing functions so that the hash codes for new data points can be efficiently generated. Experiments conducted on real-world large-scale multimedia datasets demonstrate the superiority of our proposed method compared with state-of-the-art techniques.
Restricted Isometries for Partial Random Circulant Matrices In the theory of compressed sensing, restricted isometry analysis has become a standard tool for studying how efficiently a measurement matrix acquires information about sparse and compressible signals. Many recovery algorithms are known to succeed when the restricted isometry constants of the sampling matrix are small. Many potential applications of compressed sensing involve a data-acquisition process that proceeds by convolution with a random pulse followed by (nonrandom) subsampling. At present, the theoretical analysis of this measurement technique is lacking. This paper demonstrates that the s-th order restricted isometry constant is small when the number m of samples satisfies m ≳ (s log n)^{3/2}, where n is the length of the pulse. This bound improves on previous estimates, which exhibit quadratic scaling.
Stability Results for Random Sampling of Sparse Trigonometric Polynomials Recently, it has been observed that a sparse trigonometric polynomial, i.e., having only a small number of nonzero coefficients, can be reconstructed exactly from a small number of random samples using basis pursuit (BP) or orthogonal matching pursuit (OMP). In this paper, it is shown that recovery by a BP variant is stable under perturbation of the samples values by noise. A similar partial result for OMP is provided. For BP, in addition, the stability result is extended to (nonsparse) trigonometric polynomials that can be well approximated by sparse ones. The theoretical findings are illustrated by numerical experiments.
Efficient sampling of sparse wideband analog signals Periodic nonuniform sampling is a known method to sample spectrally sparse signals below the Nyquist rate. This strategy relies on the implicit assumption that the individual samplers are exposed to the entire frequency range. This assumption becomes impractical for wideband sparse signals. The current paper proposes an alternative sampling stage that does not require a full-band front end. Instead, signals are captured with an analog front end that consists of a bank of multipliers and lowpass filters whose cutoff is much lower than the Nyquist rate. The problem of recovering the original signal from the low-rate samples can be studied within the framework of compressive sampling. An appropriate parameter selection ensures that the samples uniquely determine the analog input. Moreover, the analog input can be stably reconstructed with digital algorithms. Numerical experiments support the theoretical analysis.
A Singular Value Thresholding Algorithm for Matrix Completion This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices $\{\boldsymbol{X}^k,\boldsymbol{Y}^k\}$, and at each step mainly performs a soft-thresholding operation on the singular values of the matrix $\boldsymbol{Y}^k$. There are two remarkable features making this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates $\{\boldsymbol{X}^k\}$ is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. On the theoretical side, we provide a convergence analysis showing that the sequence of iterates converges. On the practical side, we provide numerical examples in which $1,000\times1,000$ matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are connected with the recent literature on linearized Bregman iterations for $\ell_1$ minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.
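The iteration sketched below is a compact rendering of the singular value thresholding scheme described in the abstract: soft-threshold the singular values of the current iterate, then step on the observed entries. The matrix size, sampling rate, and the choices of tau and delta are illustrative, not the paper's recommended settings.

```python
# Compact sketch of the SVT iteration for low-rank matrix completion.
import numpy as np

def svt(M, mask, tau, delta, iters=300):
    Y = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt        # soft-threshold singular values
        Y = Y + delta * mask * (M - X)                 # update only on observed entries
    return X

rng = np.random.default_rng(3)
n, r = 60, 3
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # rank-3 matrix
mask = rng.random((n, n)) < 0.4                                  # 40% of entries observed
X = svt(M * mask, mask, tau=5 * n, delta=1.2, iters=300)
print(np.linalg.norm(X - M) / np.linalg.norm(M))                 # relative recovery error
```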
An algorithm for generating interpolatory quadrature rules of the highest degree of precision with preassigned nodes for general weight functions The construction of an algorithm is described for generating interpolatory quadrature rules of the highest degree of precision with arbitrarily preassigned nodes for general constant signed weight functions. It is of very wide application in that to operate, only the definition of the 3-term recurrence relation for the orthogonal polynomials associated with the weight function need be supplied. The algorithm can be used to produce specific individual quadrature rules or sequences of rules by iterative application.
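As a point of contrast with the recurrence-based algorithm described here, interpolatory weights for preassigned nodes can also be obtained by simple moment matching when the moments of the weight function are available. The sketch below assumes the constant weight function on [-1, 1] and illustrative nodes; it does not add free nodes to maximize the degree of precision as the paper's algorithm does.

```python
# Sketch: interpolatory quadrature weights for preassigned nodes by moment matching.
import numpy as np

def interpolatory_weights(nodes):
    nodes = np.asarray(nodes, dtype=float)
    n = nodes.size
    k = np.arange(n)
    V = nodes[None, :] ** k[:, None]                    # Vandermonde-type matrix V[k, i] = x_i^k
    moments = np.where(k % 2 == 0, 2.0 / (k + 1), 0.0)  # integral of x^k over [-1, 1]
    return np.linalg.solve(V, moments)                  # weights matching the first n moments

nodes = np.array([-1.0, -0.3, 0.4, 1.0])                # arbitrary preassigned nodes
w = interpolatory_weights(nodes)
# The rule integrates every polynomial of degree <= 3 exactly, e.g. x^3 - x + 1:
print(np.sum(w * (nodes**3 - nodes + 1)), 2.0)          # both values should equal 2
```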
Principle Hessian direction based parameter reduction for interconnect networks with process variation As CMOS technology enters the nanometer regime, the increasing process variation is bringing a manifest impact on circuit performance. To accurately take account of both global and local process variations, a large number of random variables (or parameters) have to be incorporated into circuit models. This measure in turn raises the complexity of the circuit models. The current paper proposes a Principle Hessian Direction (PHD) based parameter reduction approach for interconnect networks. The proposed approach relies on each parameter's impact on circuit performance to decide whether to keep or reduce the parameter. Compared with the existing principal component analysis (PCA) method, this performance-based property provides us a significantly smaller parameter set after reduction. The experimental results also support our conclusions. In interconnect cases, the proposed method reduces 70% of parameters. In some cases (the mesh example in the current paper), the new approach leads to an 85% reduction. We also tested ISCAS benchmarks. In all cases, an average reduction of 53% is observed with less than 3% error in mean and less than 8% error in variation.
Hybrid possibilistic networks Possibilistic networks and possibilistic logic are two standard frameworks of interest for representing uncertain pieces of knowledge. Possibilistic networks exhibit relationships between variables while possibilistic logic ranks logical formulas according to their level of certainty. For multiply connected networks, it is well-known that the inference process is a hard problem. This paper studies a new representation of possibilistic networks called hybrid possibilistic networks. It results from combining the two semantically equivalent types of standard representation. We first present a propagation algorithm through hybrid possibilistic networks. This inference algorithm on hybrid networks is strictly more efficient than the standard propagation algorithm, as confirmed by experimental studies.
Vagueness and Blurry Sets. This paper presents a new theory of vagueness, which is designed to retain the virtues of the fuzzy theory, while avoiding the problem of higher-order vagueness. The theory presented here accommodates the idea that for any statement S1 to the effect that 'Bob is bald' is x true, for x in [0,1], there should be a further statement S2 which tells us how true S1 is, and so on---that is, it accommodates higher-order vagueness---without resorting to the claim that the metalanguage in which the...
A robust periodic Arnoldi shooting algorithm for efficient analysis of large-scale RF/MM ICs The verification of large radio-frequency/millimeter-wave (RF/MM) integrated circuits (ICs) has regained attention for high-performance designs beyond 90 nm and 60 GHz. The traditional time-domain verification by the standard Krylov-subspace based shooting method might not be able to deal with the newly increased verification complexity. Numerical algorithms with small computational cost yet superior convergence are highly desired to extend designers' creativity to probe those extremely challenging designs of RF/MM ICs. This paper presents a new shooting algorithm for periodic RF/MM-IC systems. Utilizing a periodic structure of the state matrix, a periodic Arnoldi shooting algorithm is developed to exploit the structured Krylov-subspace. This leads to improved efficiency and convergence. Results from several industrial examples show that the proposed periodic Arnoldi shooting method, called PAS, is 1000 times faster than the direct-LU and the explicit GMRES methods. Moreover, when compared to the existing industrial standard, a matrix-free GMRES with a non-structured Krylov-subspace, the new PAS method reduces the iteration number and runtime by 3 times with the same accuracy.
1.047609
0.04
0.04
0.022324
0.016432
0.000235
0.000025
0.000004
0
0
0
0
0
0
A provably passive and cost-efficient model for inductive interconnects To reduce the model complexity for inductive interconnects, the vector potential equivalent circuit (VPEC) model was introduced recently and a localized VPEC model was developed based on geometry integration. In this paper, the authors show that the localized VPEC model is not accurate for interconnects with nontrivial sizes. They derive an accurate VPEC model by inverting the inductance matrix under the partial element equivalent circuit (PEEC) model and prove that the effective resistance matrix under the resulting full VPEC model is passive and strictly diagonal dominant. This diagonal dominance enables truncating small-valued off-diagonal elements to obtain a sparsified VPEC model named truncated VPEC (tVPEC) model with guaranteed passivity. To avoid inverting the entire inductance matrix, the authors further present another sparsified VPEC model with preserved passivity, the windowed VPEC (wVPEC) model, based on inverting a number of inductance submatrices. Both full and sparsified VPEC models are SPICE compatible. Experiments show that the full VPEC model is as accurate as the full PEEC model but consumes less simulation time than the full PEEC model does. Moreover, the sparsified VPEC model is orders of magnitude (1000×) faster and produces a waveform with small errors (3%) compared to the full PEEC model, and wVPEC uses less (up to 90×) model building time yet is more accurate compared to the tVPEC model.
Impedance extraction for 3-D structures with multiple dielectrics using preconditioned boundary element method In this paper, we present the first BEM impedance extraction algorithm for multiple dielectrics. The effect of multiple dielectrics is significant, and efficient modeling is challenging. However, previous BEM algorithms, including FastImp and FastPep, assume a uniform dielectric, thus causing considerable errors. The new algorithm introduces a circuit formulation which makes it possible to utilize either the multilayer Green's function or the equivalent charge method to extract impedance in multiple dielectrics. The novelty of the formulation is the reduction of the number of unknowns and the application of a hierarchical data structure. The hierarchical data structure permits efficient sparsification transformation and preconditioners to accelerate the linear equation solver. Experimental results demonstrate that the new algorithm is accurate and efficient. For uniform dielectric problems, the new algorithm is one order of magnitude faster than FastImp, while its results differ from FastImp within 2%. For multiple dielectrics problems, its relative error with respect to HFSS is below 3%.
Fast Analysis of a Large-Scale Inductive Interconnect by Block-Structure-Preserved Macromodeling To efficiently analyze the large-scale interconnect dominant circuits with inductive couplings (mutual inductances), this paper introduces a new state matrix, called VNA, to stamp inverse-inductance elements by replacing inductive-branch current with flux. The state matrix under VNA is diagonal-dominant, sparse, and passive. To further explore the sparsity and hierarchy at the block level, a new matrix-stretching method is introduced to reorder coupled fluxes into a decoupled state matrix with a bordered block diagonal (BBD) structure. A corresponding block-structure-preserved model-order reduction, called BVOR, is developed to preserve the sparsity and hierarchy of the BBD matrix at the block level. This enables us to efficiently build and simulate the macromodel within a SPICE-like circuit simulator. Experiments show that our method achieves up to 7× faster modeling building time, up to 33× faster simulation time, and as much as 67× smaller waveform error compared to SAPOR [a second-order reduction based on nodal analysis (NA)] and PACT (a first-order 2×2 structured reduction based on modified NA).
A New Parallel Kernel-Independent Fast Multipole Method We present a new adaptive fast multipole algorithm and its parallel implementation. The algorithm is kernel-independent in the sense that the evaluation of pairwise interactions does not rely on any analytic expansions, but only utilizes kernel evaluations. The new method provides the enabling technology for many important problems in computational science and engineering. Examples include viscous flows, fracture mechanics and screened Coulombic interactions. Our MPI-based parallel implementation logically separates the computation and communication phases to avoid synchronization in the upward and downward computation passes, and thus allows us to fully exploit computation and communication overlapping. We measure isogranular and fixed-size scalability for a variety of kernels on the Pittsburgh Supercomputing Center's TCS-1 Alphaserver on up to 3000 processors. We have solved viscous flow problems with up to 2.1 billion unknowns and we have achieved 1.6 Tflops/s peak performance and 1.13 Tflops/s sustained performance.
A fast hierarchical algorithm for 3-D capacitance extraction We present a new algorithm for computing the capacitance of three-dimensional perfect electrical conductors of complex structures. The new algorithm is significantly faster and uses much less memory than previous best algorithms, and is kernel independent. The new algorithm is based on a hierarchical algorithm for the n-body problem, and is an acceleration of the boundary-element method for solving the integral equation associated with the capacitance extraction problem. The algorithm first adaptively subdivides the conductor surfaces into panels according to an estimation of the potential coefficients and a user-supplied error band. The algorithm stores the potential coefficient matrix in a hierarchical data structure of size O(n), although the matrix is of size n^2 if expanded explicitly, where n is the number of panels. The hierarchical data structure allows us to multiply the coefficient matrix with any vector in O(n) time. Finally, we use a generalized minimal residual algorithm to solve m linear systems each of size n × n in O(mn) time, where m is the number of conductors. The new algorithm is implemented and the performance is compared with previous best algorithms. For the k × k bus example, our algorithm is 100 to 40 times faster than FastCap, and uses 1/100 to 1/60 of the memory used by FastCap. The results computed by the new algorithm are within 2.7% of those computed by FastCap.
FastCap: a multipole accelerated 3-D capacitance extraction program A fast algorithm for computing the capacitance of a complicated three-dimensional geometry of ideal conductors in a uniform dielectric is described and its performance in the capacitance extractor FastCap is examined. The algorithm is an acceleration of the boundary-element technique for solving the integral equation associated with the multiconductor capacitance extraction problem. The authors present a generalized conjugate residual iterative algorithm with a multipole approximation to compute the iterates. This combination reduces the complexity so that accurate multiconductor capacitance calculations grow nearly as nm, where m is the number of conductors. Performance comparisons on integrated circuit bus crossing problems show that for problems with as few as 12 conductors the multipole accelerated boundary element method can be nearly 500 times faster than Gaussian-elimination-based algorithms, and five to ten times faster than the iterative method alone, depending on required accuracy
General-Purpose Nonlinear Model-Order Reduction Using Piecewise-Polynomial Representations We present algorithms for automated macromodeling of nonlinear mixed-signal system blocks. A key feature of our methods is that they automate the generation of general-purpose macromodels that are suitable for a wide range of time- and frequency-domain analyses important in mixed-signal design flows. In our approach, a nonlinear circuit or system is approximated using piecewise-polynomial (PWP) representations. Each polynomial system is reduced to a smaller one via weakly nonlinear polynomial model-reduction methods. Our approach, dubbed PWP, generalizes recent trajectory-based piecewise-linear approaches and ties them with polynomial-based model-order reduction, which inherently captures stronger nonlinearities within each region. PWP-generated macromodels not only reproduce small-signal distortion and intermodulation properties well but also retain fidelity in large-signal transient analyses. The reduced models can be used as drop-in replacements for large subsystems to achieve fast system-level simulation using a variety of time- and frequency-domain analyses (such as dc, ac, transient, harmonic balance, etc.). For the polynomial reduction step within PWP, we also present a novel technique [dubbed multiple pseudoinput (MPI)] that combines concepts from proper orthogonal decomposition with Krylov-subspace projection. We illustrate the use of PWP and MPI with several examples (including op-amps and I/O buffers) and provide important implementation details. Our experiments indicate that it is easy to obtain speedups of about an order of magnitude with push-button nonlinear macromodel-generation algorithms.
Random Sampling of Moment Graph: A Stochastic Krylov-Reduction Algorithm In this paper we introduce a new algorithm for model order reduction in the presence of parameter or process variation. Our analysis is performed using a graph interpretation of the multi-parameter moment matching approach, leading to a computational technique based on random sampling of moment graph (RSMG). Using this technique, we have developed a new algorithm that combines the best aspects of recently proposed parameterized moment-matching and approximate TBR procedures. RSMG attempts to avoid both exponential growth of computational complexity and multiple matrix factorizations, the primary drawbacks of existing methods, and illustrates good ability to tailor algorithms to apply computational effort where needed. Industry examples are used to verify our new algorithms
Efficient Reduced-Order Modeling of Frequency-Dependent Coupling Inductances associated with 3-D Interconnect Structures Since the first papers on asymptotic waveform evaluation (AWE), reduced order models have become standard for improving interconnect simulation efficiency, and very recent work has demonstrated that bi-orthogonalization algorithms can be used to robustly generate AWE-style macromodels. In this paper we describe using block Arnoldi-based orthogonalization methods to generate reduced order models from FastHenry, a multipole-accelerated three dimensional inductance extraction program. Examples are analyzed to demonstrate the efficiency and accuracy of the block Arnoldi algorithm.
Rough and ready error estimates in Gaussian integration of analytic functions Two expressions are derived for use in estimating the error in the numerical integration of analytic functions in terms of the maximum absolute value of the function in an appropriate region of regularity. These expressions are then specialized to the case of Gaussian integration rules, and the resulting error estimates are compared with those obtained by the use of tables of error coefficients.
Efficient computation of global sensitivity indices using sparse polynomial chaos expansions Global sensitivity analysis aims at quantifying the relative importance of uncertain input variables onto the response of a mathematical model of a physical system. ANOVA-based indices such as the Sobol’ indices are well-known in this context. These indices are usually computed by direct Monte Carlo or quasi-Monte Carlo simulation, which may reveal hardly applicable for computationally demanding industrial models. In the present paper, sparse polynomial chaos (PC) expansions are introduced in order to compute sensitivity indices. An adaptive algorithm allows the analyst to build up a PC-based metamodel that only contains the significant terms whereas the PC coefficients are computed by least-square regression using a computer experimental design. The accuracy of the metamodel is assessed by leave-one-out cross validation. Due to the genuine orthogonality properties of the PC basis, ANOVA-based sensitivity indices are post-processed analytically. This paper also develops a bootstrap technique which eventually yields confidence intervals on the results. The approach is illustrated on various application examples up to 21 stochastic dimensions. Accurate results are obtained at a computational cost 2–3 orders of magnitude smaller than that associated with Monte Carlo simulation.
Strategies for Mobile Broadband Growth: Traffic Segmentation for Better Customer Experience With mobile terminals becoming the primary Internet devices for most of the people in the world and smartphone users generating on average 10 times more traffic than other users, it is critical that operators adopt smarter segmentation and pricing strategies. This paper analyzes how the deployment of QoS, with traffic segmentation by users and by services, may help service providers to achieve these objectives, save costs and go beyond the existing pricing models, while taking into account the new facets of mobile Internet usage patterns. In the analyzed cases, using the proposed QoS functions, simulation results for 3GPP High Speed Packet Access (HSPA) showed a spectral efficiency gain up to circa 30%; or, equivalently, a potential sites saving of about 22%, for serving the same amount of satisfied traffic on the best effort (BE) with capacity over provisioning. Extra benefits and better user experience are expected from the deployment of 3GPP HSPA+ and 3GPP Long Term Evolution (LTE) systems.
TEMPORAL AND SPATIAL SCALING FOR STEREOSCOPIC VIDEO COMPRESSION In stereoscopic video, it is well-known that compression efficiency can be improved, without sacrificing PSNR, by predicting one view from the other. Moreover, additional gain can be achieved by subsampling one of the views, since the Human Visual System can perceive high frequency information from the other view. In this work, we propose subsampling of one of the views by scaling its temporal rate and/or spatial size at regular intervals using a real-time stereoscopic H.264/AVC codec, and assess the subjective quality of the resulting videos using DSCQS test methodology. We show that stereoscopic videos can be coded at a rate about 1.2 times that of monoscopic videos with little visual quality degradation.
Enhancing Video Accessibility and Availability Using Information-Bound References Users are often frustrated when they cannot view video links shared via blogs, social networks, and shared bookmark sites on their devices or suffer performance and usability problems when doing so. While other versions of the same content better suited to their device and network constraints may be available on other third-party hosting sites, these remain unusable because users cannot efficiently discover these and verify that these variants match the content publisher's original intent. Our vision is to enable consumers to leverage verifiable alternatives from different hosting sites that are best suited to their constraints to deliver a high quality of experience and enable content publishers to reach a wide audience with diverse operating conditions with minimal upfront costs. To this end, we make a case for information-bound references or IBRs that bind references to video content to the underlying information that a publisher wants to convey, decoupled from details such as protocols, hosts, file names, or the underlying bits. This paper addresses key challenges in the design and implementation of IBR generation and resolution mechanisms, and presents an evaluation of the benefits IBRs offer.
1.113035
0.105057
0.105057
0.035019
0.011648
0.00358
0.000695
0.000243
0.000045
0.000005
0
0
0
0
The Wiener--Askey Polynomial Chaos for Stochastic Differential Equations We present a new method for solving stochastic differential equations based on Galerkin projections and extensions of Wiener's polynomial chaos. Specifically, we represent the stochastic processes with an optimum trial basis from the Askey family of orthogonal polynomials that reduces the dimensionality of the system and leads to exponential convergence of the error. Several continuous and discrete processes are treated, and numerical examples show substantial speed-up compared to Monte Carlo simulations for low dimensional stochastic inputs.
Design of Smart MVDC Power Grid Protection Improved reliability and safety of the medium-voltage dc power distribution systems on board of all electric ships are the objectives of this paper. The authors propose the integration of the self-healing capability against faults of the measurement system in power system fault detection and protection systems. While most of previous work in the literature focuses on either one aspect independently, here, the two are integrated. On one hand, our approach addresses also the case of concurrent power system fault and measurement system fault. On the other hand, the proposed approach must be capable of distinguishing between the two types of failure. The proposed architecture is based on exchange of information between energy conversion and measurement devices. This makes the impact of communication delays critical, so its analysis is provided for the proposed case study. The impact on the performance of the measurement validation and protection systems is derived and can provide hints on the design. The protection method used as case study consists in controlling power converters to ride through the power system fault while maintaining power supply to the vital loads. To overcome failures of the measurement system, invalid data were detected and reconstructed through their expected value.
A Design Approach For Digital Controllers Using Reconfigurable Network-Based Measurements In this paper, the authors propose and analyze a network-based control architecture for power-electronics-building-block-based converters. The objective of the proposed approach is to distribute the control system to guarantee maximum flexibility in the control of power distribution. In the proposed control system, controller and controlled devices are connected through the network, which affects the measurement and control signals mostly due to delays. The main goal of this work is to outline a design methodology for controllers operating with measurements coming from a network. The approach proposed here assesses the robustness of the control system in the presence of delay and aims to design an optimal controller for robustness against network delays. This methodology is based on uncertainty analysis and assumes that the delays are the main element of uncertainty in the system. The theoretical foundations of this approach are discussed, together with the simulation and implementation of a physical laboratory prototype.
Sensitivity Analysis for Oscillating Dynamical Systems Boundary value formulations are presented for exact and efficient sensitivity analysis, with respect to model parameters and initial conditions, of different classes of oscillating systems. Methods for the computation of sensitivities of derived quantities of oscillations such as period, amplitude, and different types of phases are first developed for limit-cycle oscillators. In particular, a novel decomposition of the state sensitivities into three parts is proposed to provide an intuitive classification of the influence of parameter changes on period, amplitude, and relative phase. The importance of the choice of time reference, i.e., the phase locking condition, is demonstrated and discussed, and its influence on the sensitivity solution is quantified. The methods are then extended to other classes of oscillatory systems in a general formulation. Numerical techniques are presented to facilitate the solution of the boundary value problem and the computation of different types of sensitivities. Numerical results are verified by demonstrating consistency with finite difference approximations and are superior both in computational efficiency and in numerical precision to existing partial methods.
Epidemic models with random coefficients Mathematical models are very important in epidemiology. Many of the models are given by differential equations and most consider that the parameters are deterministic variables. But in practice, these parameters have large variability that depends on the measurement method and its inherent error, on differences in the actual population sample size used, as well as other factors that are difficult to account for. In this paper the parameters that appear in SIR and SIRS epidemic model are considered random variables with specified distributions. A stochastic spectral representation of the parameters is used, together with the polynomial chaos method, to obtain a system of differential equations, which is integrated numerically to obtain the evolution of the mean and higher-order moments with respect to time.
Numerical study of uncertainty quantification techniques for implicit stiff systems Galerkin polynomial chaos and collocation methods have been widely adopted for uncertainty quantification purposes. However, when a stiff system is involved, the computational cost can be prohibitive, since stiff numerical integration requires the solution of a nonlinear system of equations at every time step. Applying the Galerkin polynomial chaos to a stiff system will cause a computational cost increase from O(n^3) to O(S^3 n^3). This paper explores uncertainty quantification techniques for stiff chemical systems using Galerkin polynomial chaos, collocation and collocation least-square approaches. We propose a modification in the implicit time stepping process. The numerical test results show that with the modified approach, the run time of the Galerkin polynomial chaos is reduced. We also explore different methods of choosing collocation points in collocation implementations and propose a collocation least-square approach. We conclude that the collocation least-square for uncertainty quantification is at least as accurate as the Galerkin approach, and is more efficient with a well-chosen set of collocation points.
A DDDAS Framework for Volcanic Ash Propagation and Hazard Analysis In this paper we will present early work on using a DDDAS based approach to the construction of probabilistic estimates of volcanic ash transport and dispersal. Our primary modeling tools will be a combination of a plume eruption model BENT and the ash transport model PUFF. Data from satellite imagery, observation of vent parameters and windfields will drive our simulations. We will use uncertainty quantification methodology - polynomial chaos quadrature in combination with data integration to complete the DDDAS loop.
A stochastic particle-mesh scheme for uncertainty propagation in vortical flows A new mesh-particle scheme is constructed for uncertainty propagation in vortical flow. The scheme is based on the incorporation of polynomial chaos (PC) expansions into a Lagrangian particle approximation of the Navier–Stokes equations. The main idea of the method is to use a unique set of particles to transport the stochastic modes of the solution. The particles are transported by the mean velocity field, while their stochastic strengths are updated to account for diffusive and convective effects induced by the coupling between stochastic modes. An integral treatment is used for the evaluation of the coupled stochastic terms, following the framework of the particle strength exchange (PSE) methods, which yields a conservative algorithm. It is also shown that it is possible to apply solution algorithms used in deterministic setting, including particle-mesh techniques and particle remeshing. Thus, the method combines the advantages of particles discretizations with the efficiency of PC representations. Validation of the method on uncertain diffusion and convection problems is first performed. An example is then presented of natural convection of a hot patch of fluid in infinite domain, and the computations are used to illustrate the effectiveness of the approach for both large number of particles and high-order PC expansions.
Discontinuity detection in multivariate space for stochastic simulations Edge detection has traditionally been associated with detecting physical space jump discontinuities in one dimension, e.g. seismic signals, and two dimensions, e.g. digital images. Hence most of the research on edge detection algorithms is restricted to these contexts. High dimension edge detection can be of significant importance, however. For instance, stochastic variants of classical differential equations not only have variables in space/time dimensions, but additional dimensions are often introduced to the problem by the nature of the random inputs. The stochastic solutions to such problems sometimes contain discontinuities in the corresponding random space and a prior knowledge of jump locations can be very helpful in increasing the accuracy of the final solution. Traditional edge detection methods typically require uniform grid point distribution. They also often involve the computation of gradients and/or Laplacians, which can become very complicated to compute as the number of dimensions increases. The polynomial annihilation edge detection method, on the other hand, is more flexible in terms of its geometric specifications and is furthermore relatively easy to apply. This paper discusses the numerical implementation of the polynomial annihilation edge detection method to high dimensional functions that arise when solving stochastic partial differential equations.
Spectral Representation and Reduced Order Modeling of the Dynamics of Stochastic Reaction Networks via Adaptive Data Partitioning Dynamical analysis tools are well established for deterministic models. However, for many biochemical phenomena in cells the molecule count is low, leading to stochastic behavior that causes deterministic macroscale reaction models to fail. The main mathematical framework representing these phenomena is based on jump Markov processes that model the underlying stochastic reaction network. Conventional dynamical analysis tools do not readily generalize to the stochastic setting due to nondifferentiability and absence of explicit state evolution equations. We developed a reduced order methodology for dynamical analysis that relies on the Karhunen-Loève decomposition and polynomial chaos expansions. The methodology relies on adaptive data partitioning to obtain an accurate representation of the stochastic process, especially in the case of multimodal behavior. As a result, a mixture model is obtained that represents the reduced order dynamics of the system. The Schlögl model is used as a prototype bistable process that exhibits time scale separation and leads to multimodality in the reduced order model.
Robust Estimation of Timing Yield with Partial Statistical Information on Process Variations This paper illustrates the application of distributional robustness theory to compute the worst-case timing yield of a circuit. Our assumption is that the probability distribution of process variables are unknown and only the intervals of the process variables and their class of distributions are available. We consider two practical classes to group potential distributions. We then derive conditions that allow applying the results of the distributional robustness theory to efficiently and accurately estimate the worst-case timing yield for each class. Compared to other recent works, our approach can model correlations among process variables and does not require knowledge of exact function form of the joint distribution function of process variables. While our emphasis is on robust timing yield estimation, our approach is also applicable to other types of parametric yield.
A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods. Our taxonomy is designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can easily be extended to include new algorithms. We have also produced several new multi-frame stereo data sets with ground truth and are making both the code and data sets available on the Web. Finally, we include a comparative evaluation of a large set of today's best-performing stereo algorithms.
Sparse Signal Reconstruction via Iterative Support Detection We present a novel sparse signal reconstruction method, iterative support detection (ISD), aiming to achieve fast reconstruction and a reduced requirement on the number of measurements compared to the classical $\ell_1$ minimization approach. ISD addresses failed reconstructions of $\ell_1$ minimization due to insufficient measurements. It estimates a support set $I$ from a current reconstruction and obtains a new reconstruction by solving the minimization problem $\min\{\sum_{i\notin I}|x_i|:Ax=b\}$, and it iterates these two steps for a small number of times. ISD differs from the orthogonal matching pursuit method, as well as its variants, because (i) the index set $I$ in ISD is not necessarily nested or increasing, and (ii) the minimization problem above updates all the components of $x$ at the same time. We generalize the null space property to the truncated null space property and present our analysis of ISD based on the latter. We introduce an efficient implementation of ISD, called threshold-ISD, for recovering signals with fast decaying distributions of nonzeros from compressive sensing measurements. Numerical experiments show that threshold-ISD has significant advantages over the classical $\ell_1$ minimization approach, as well as two state-of-the-art algorithms: the iterative reweighted $\ell_1$ minimization algorithm (IRL1) and the iterative reweighted least-squares algorithm (IRLS). MATLAB code is available for download from http://www.caam.rice.edu/ optimization/L1/ISD/.
Compressed sensing over finite fields We develop compressed sensing results for sources drawn from finite alphabets. We apply tools from linear coding and large deviations. We establish strong connections between our results and error exponents of lossless source coding in the case of no measurement noise, and modified channel coding error exponents in the case of measurement noise. We connect to standard results on compressed sensing in the real field.
1.000492
0.001193
0.001193
0.000868
0.000804
0.000683
0.000527
0.000455
0.000398
0.000307
0.00005
0
0
0
Smolyak cubature of given polynomial degree with few nodes for increasing dimension Summary.   Some recent investigations (see e.g., Gerstner and Griebel [5], Novak and Ritter [9] and [10], Novak, Ritter and Steinbauer [11], Wasilkowski and Woźniakowski [18] or Petras [13]) show that the so-called Smolyak algorithm applied to a cubature problem on the d-dimensional cube seems to be particularly useful for smooth integrands. The problem is still that the numbers of nodes grow (polynomially but) fast for increasing dimensions. We therefore investigate how to obtain Smolyak cubature formulae with a given degree of polynomial exactness and the asymptotically minimal number of nodes for increasing dimension d and obtain their characterization for a subset of Smolyak formulae. Error bounds and numerical examples show their good behaviour for smooth integrands. A modification can be applied successfully to problems of mathematical finance as indicated by a further numerical example.
On Positivity of Polynomials: The Dilation Integral Method The focal point of this paper is the well known problem of polynomial positivity over a given domain. More specifically, we consider a multivariate polynomial f(x) with parameter vector x restricted to a hypercube X ⊂ R^n. The objective is to determine if f(x) > 0 for all x ∈ X. Motivated by NP-hardness considerations, we introduce the so-called dilation integral method. Using this method, a "softening" of this problem is described. That is, rather than insisting that f(x) be positive for all x ∈ X, we consider the notions of practical positivity and practical non-positivity. As explained in the paper, these notions involve the calculation of a quantity ε > 0 which serves as an upper bound on the percentage volume of violation in parameter space where f(x) ≤ 0. Whereas checking the polynomial positivity requirement may be computationally prohibitive, using our ε-softening and associated dilation integrals, computations are typically straightforward. One highlight of this paper is that we obtain a sequence of upper bounds ε_k which are shown to be "sharp" in the sense that they converge to zero whenever the positivity requirement is satisfied. Since for fixed n, computational difficulties generally increase with k, this paper also focuses on results which reduce the size of the required k in order to achieve an acceptable percentage volume certification level. For large classes of problems, as the dimension of parameter space n grows, the required k value for acceptable percentage volume violation may be quite low. In fact, it is often the case that low volumes of violation can be achieved with values as low as k = 2.
Exact Solution Of Uncertain Convex Optimization Problems This paper proposes a novel approach for the solution of a wide class of convex programs characterized by the presence of bounded stochastic uncertainty. The data of the problem is assumed to depend polynomially on a vector of uncertain parameters q ∈ R^d, uniformly distributed in a box, and the solution should minimize the expected value of the cost function with respect to q. The proposed methodology is based on a combination of low-order quadrature formulae, which allow for the construction of a cubature rule with high degree of exactness and low number of nodes. The algorithm is shown to depend polynomially on the problem dimension d. A specific application to uncertain least-squares problems, along with a numerical example, concludes the paper.
Numerical quadrature for high-dimensional singular integrals over parallelotopes We introduce and analyze a family of algorithms for an efficient numerical approximation of integrals of the form I = ∫_{C^(1)} ∫_{C^(2)} F(x, y, y-x) dy dx, where C^(1), C^(2) are d-dimensional parallelotopes (i.e. affine images of d-hypercubes) and F has a singularity at y-x = 0. Such integrals appear in Galerkin discretization of integral operators in R^d. We construct a family of quadrature rules Q_N with N function evaluations for a class of integrands F which may have algebraic singularities at y-x = 0 and are Gevrey-δ regular for y-x ≠ 0. The main tool is an explicit regularizing coordinate transformation, simultaneously simplifying the singular support and the domain of integration. For the full tensor product variant of the suggested quadrature family we prove that Q_N achieves the exponential convergence rate O(exp(-rN^γ)) with the exponent γ = 1/(2dδ + 1). In the special case of a singularity of the form ‖y-x‖^α with real α we prove that the improved convergence rate of γ = 1/(2dδ) is achieved if a certain modified one-dimensional Gauss-Jacobi quadrature rule is used in the singular direction. We give numerical results for various types of the quadrature rules, in particular based on tensor product rules, standard (Smolyak), optimized and adaptive sparse grid quadratures and Sobol' sequences.
Cubature formulas for symmetric measures in higher dimensions with few points We study cubature formulas for d-dimensional integrals with an arbitrary symmetric weight function of product form. We present a construction that yields a high polynomial exactness: for fixed degree l = 5 or l = 7 and large dimension d the number of knots is only slightly larger than the lower bound of Moller and much smaller compared to the known constructions. We also show, for any odd degree l = 2k + 1, that the minimal number of points is almost independent of the weight function. This is also true for the integration over the (Euclidean) sphere.
Fast calculation of coefficients in the Smolyak algorithm For many numerical problems involving smooth multivariate functions on d-cubes, the so-called Smolyak algorithm (or Boolean method, sparse grid method, etc.) has proved to be very useful. The final form of the algorithm (see equation (12) below) requires functional evaluation as well as the computation of coefficients. The latter can be done in different ways that may have considerable influence on the total cost of the algorithm. In this paper, we try to diminish this influence as far as possible. For example, we present an algorithm for the integration problem that reduces the time for the calculation and exposition of the coefficients in such a way that for increasing dimension, this time is small compared to dn, where n is the number of involved function values.
Uncertainty quantification in simulations of power systems: Multi-element polynomial chaos methods While probabilistic methods have been used extensively in simulating stationary power systems, there has not been a systematic effort in developing suitable algorithms for stochastic simulations of time-dependent and reconfiguring power systems. Here, we present several versions of polynomial chaos that lead to a very efficient approach especially in low dimensions. We consider both Galerkin and Collocation projections, and demonstrate how the multi-element decomposition of random space leads to effective resolution of stochastic discontinuous solutions. A comprehensive comparison is presented for prototype differential equations and for two electromechanical systems used in an electric ship.
Nonlinear stochastic model predictive control via regularized polynomial chaos expansions A new method to control stochastic systems in the presence of input and state constraints is presented. The method exploits a particular receding horizon algorithm, coupled with Polynomial Chaos Expansions (PCEs). It is shown that the proposed approach achieves closed loop convergence and satisfaction of state constraints in expectation. Moreover, a non-intrusive method to compute the PCEs' coefficients is proposed, exploiting ℓ2-norm regularization with a particular choice of weighting matrices. The method requires low computational effort, and it can be applied to general nonlinear systems without the need to manipulate the model. The approach is tested on a nonlinear electric circuit example.
Uncertainty quantification of limit-cycle oscillations Different computational methodologies have been developed to quantify the uncertain response of a relatively simple aeroelastic system in limit-cycle oscillation, subject to parametric variability. The aeroelastic system is that of a rigid airfoil, supported by pitch and plunge structural coupling, with nonlinearities in the component in pitch. The nonlinearities are adjusted to permit the formation of either a subcritical or supercritical branch of limit-cycle oscillations. Uncertainties are specified in the cubic coefficient of the torsional spring and in the initial pitch angle of the airfoil. Stochastic projections of the time-domain and cyclic equations governing system response are carried out, leading to both intrusive and non-intrusive computational formulations. Non-intrusive formulations are examined using stochastic projections derived from Wiener expansions involving Haar wavelet and B-spline bases, while Wiener-Hermite expansions of the cyclic equations are employed intrusively and non-intrusively. Application of the B-spline stochastic projection is extended to the treatment of aerodynamic nonlinearities, as modeled through the discrete Euler equations. The methodologies are compared in terms of computational cost, convergence properties, ease of implementation, and potential for application to complex aeroelastic systems.
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies? Suppose we are given a vector f in a class F ⊆ R^N, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|_(n) ≤ R·n^(-1/p), where R > 0 and p > 0. Suppose that we take measurements y_k = ⟨f, X_k⟩, k = 1, ..., K, where the X_k are N-dimensional Gaussian vectors with independent standard normal entries. Then for each f obeying the decay estimate above for some 0 < p < 1 and with overwhelming probability, our reconstruction f#, defined as the solution to the constraints y_k = ⟨f#, X_k⟩ with minimal ℓ1 norm, obeys ‖f - f#‖_ℓ2 ≤ C_p·R·(K/log N)^(-r), r = 1/p - 1/2. There is a sense in which this result is optimal; it is generally impossible to obtain a higher accuracy from any set of K measurements whatsoever. The methodology extends to various other random measurement ensembles; for example, we show that similar results hold if one observes a few randomly sampled Fourier coefficients of f. In fact, the results are quite general and require only two hypotheses on the measurement ensemble which are detailed.
A provably passive and cost-efficient model for inductive interconnects To reduce the model complexity for inductive interconnects, the vector potential equivalent circuit (VPEC) model was introduced recently and a localized VPEC model was developed based on geometry integration. In this paper, the authors show that the localized VPEC model is not accurate for interconnects with nontrivial sizes. They derive an accurate VPEC model by inverting the inductance matrix under the partial element equivalent circuit (PEEC) model and prove that the effective resistance matrix under the resulting full VPEC model is passive and strictly diagonal dominant. This diagonal dominance enables truncating small-valued off-diagonal elements to obtain a sparsified VPEC model named truncated VPEC (tVPEC) model with guaranteed passivity. To avoid inverting the entire inductance matrix, the authors further present another sparsified VPEC model with preserved passivity, the windowed VPEC (wVPEC) model, based on inverting a number of inductance submatrices. Both full and sparsified VPEC models are SPICE compatible. Experiments show that the full VPEC model is as accurate as the full PEEC model but consumes less simulation time than the full PEEC model does. Moreover, the sparsified VPEC model is orders of magnitude (1000×) faster and produces a waveform with small errors (3%) compared to the full PEEC model, and wVPEC uses less (up to 90×) model building time yet is more accurate compared to the tVPEC model.
Presumption and prejudice in logical inference Two nonstandard modes of inference, confirmation and denial, have been shown by Bandler and Kohout to be valid in fuzzy propositional and predicate logics. If denial is used in combination with modus ponens, the resulting inference mode (“augmented modus ponens”) yields more precise bounds on the consequent of an implication than are usually called for in approximate reasoning. Similar results hold for augmented modus tollens constructed from confirmation and conventional fuzzy modus tollens. Two simpler modes of inference, presumption and prejudice, are also valid under the same assumptions as confirmation and denial. Prejudice imposes an upper bound on the truth value of the consequent of a fuzzy implication regardless of the truth value of the antecedent; presumption imposes a lower bound on the truth value of the antecedent regardless of that of the consequent. Some of the consequences of presumption and prejudice cast doubt on the suitability of fuzzy propositional and predicate logics for use in expert systems that are designed to process real-world data. A logic based directly on fuzzy sets is explored as an alternative. Fuzzy set logic supports fuzzy modus ponens and modus tollens but does not entail the more problematic modes of confirmation, denial, presumption, and prejudice. However, some of the expressive power derivable from the diversity of fuzzy propositional logics and their derivative fuzzy predicate logics is lost.
Compressed sensing of color images This work proposes a method for color imaging via compressive sampling. Random projections from each of the color channels are acquired separately. The problem is to reconstruct the original color image from the randomly projected (sub-sampled) data. Since each of the color channels are sparse in some domain (DCT, Wavelet, etc.) one way to approach the reconstruction problem is to apply sparse optimization algorithms. We note that the color channels are highly correlated and propose an alternative reconstruction method based on group sparse optimization. Two new non-convex group sparse optimization methods are proposed in this work. Experimental results show that incorporating group sparsity into the reconstruction problem produces significant improvement (more than 1dB PSNR) over ordinary sparse algorithm.
A game-theoretic multipath routing for video-streaming services over Mobile Ad Hoc Networks The number of portable devices capable of maintaining wireless communications has increased considerably in the last decade. Such mobile nodes may form a spontaneous self-configured network connected by wireless links to constitute a Mobile Ad Hoc Network (MANET). As the number of mobile end users grows the demand of multimedia services, such as video-streaming, in such networks is envisioned to increase as well. One of the most appropriate video coding technique for MANETs is layered MPEG-2 VBR, which used with a proper multipath routing scheme improves the distribution of video streams. In this article we introduce a proposal called g-MMDSR (game theoretic-Multipath Multimedia Dynamic Source Routing), a cross-layer multipath routing protocol which includes a game theoretic approach to achieve a dynamic selection of the forwarding paths. The proposal seeks to improve the own benefits of the users whilst using the common scarce resources efficiently. It takes into account the importance of the video frames in the decoding process, which outperforms the quality of the received video. Our scheme has proved to enhance the performance of the framework and the experience of the end users. Simulations have been carried out to show the benefits of our proposal under different situations where high interfering traffic and mobility of the nodes are present.
1.015956
0.014902
0.014672
0.011765
0.004532
0.00149
0.000388
0.00009
0.000013
0
0
0
0
0
An improved MULTIMOORA approach for multi-criteria decision-making based on interdependent inputs of simplified neutrosophic linguistic information. Multi-objective optimization by ratio analysis plus the full multiplicative form (MULTIMOORA) is a useful method to apply in multi-criteria decision-making due to the flexibility and robustness it introduces into the decision process. This paper defines several simplified neutrosophic linguistic distance measures and employs a distance-based method to determine criterion weights. Then, an improved MULTIMOORA approach is presented by integrating the simplified neutrosophic linguistic normalized weighted Bonferroni mean and simplified neutrosophic linguistic normalized geometric weighted Bonferroni mean operators as well as a simplified neutrosophic linguistic distance measure. This approach ranks alternatives according to three ordering methods, and then, uses dominance theory to combine the three rankings into a single ranking. Finally, this paper presents a practical case example and conducts a comparative analysis between the proposed approach and existing methods in order to verify the feasibility and effectiveness of the developed methodology.
Interval type-2 fuzzy sets to model linguistic label perception in online services satisfaction In this paper, we propose a novel two-phase methodology based on interval type-2 fuzzy sets (T2FSs) to model the human perceptions of the linguistic terms used to describe the online services satisfaction. In the first phase, a type-1 fuzzy set (T1FS) model of an individual's perception of the terms used in rating user satisfaction is derived through a decomposition-based procedure. The analysis is carried out by using well-established metrics and results from the Social Sciences context. In the second phase, interval T2FS models of online user satisfaction are calculated using a similarity-based data mining procedure. The procedure selects an essential and informative subset of the initial T1FSs that is used to discard the outliers automatically. Resulting interval T2FSs, which are synthesized based on the selected subset of T1FSs only, exhibit reasonable shapes and interpretability.
Some new distance measures for type-2 fuzzy sets and distance measure based ranking for group decision making problems In this paper, we propose some distance measures between type-2 fuzzy sets, and also a new family of utmost distance measures are presented. Several properties of different proposed distance measures have been introduced. Also, we have introduced a new ranking method for the ordering of type-2 fuzzy sets based on the proposed distance measure. The proposed ranking method satisfies the reasonable properties for the ordering of fuzzy quantities. Some properties such as robustness, order relation have been presented. Limitations of existing ranking methods have been studied. Further for practical use, a new method for selecting the best alternative, for group decision making problems is proposed. This method is illustrated with a numerical example.
Frank Aggregation Operators for Triangular Interval Type-2 Fuzzy Set and Its Application in Multiple Attribute Group Decision Making. This paper investigates an approach to multiple attribute group decision-making (MAGDM) problems, in which the individual assessments are in the form of triangle interval type-2 fuzzy numbers (TIT2FNs). Firstly, some Frank operation laws of triangle interval type-2 fuzzy set (TIT2FS) are defined. Secondly, some Frank aggregation operators such as the triangle interval type-2 fuzzy Frank weighted averaging (TIT2FFWA) operator and the triangle interval type-2 fuzzy Frank weighted geometric (TIT2FFWG) operator are developed for aggregation TIT2FNs. Furthermore, some desirable properties of the two aggregation operators are analyzed in detail. Finally, an approach based on TIT2FFWA (or TIT2FFWG) operator to solve MAGDM is developed. An illustrative example about supplier selection is provided to illustrate the developed procedures. The results demonstrate the practicality and effectiveness of our new method.
Multi-Attribute Group Decision Making Methods With Proportional 2-Tuple Linguistic Assessments And Weights The proportional 2-tuple linguistic model provides a tool to deal with linguistic term sets that are not uniformly and symmetrically distributed. This study further develops multi-attribute group decision making methods with linguistic assessments and linguistic weights, based on the proportional 2-tuple linguistic model. Firstly, this study defines some new operations in proportional 2-tuple linguistic model, including weighted average aggregation operator with linguistic weights, ordered weighted average operator with linguistic weights and the distance between proportional linguistic 2-tuples. Then, four multi-attribute group decision making methods are presented. They are the method based on the proportional 2-tuple linguistic aggregation operator, technique for order preference by similarity to ideal solution (TOPSIS) with proportional 2-tuple linguistic information, elimination et choice translating reality (ELECTRE) with proportional 2-tuple linguistic information, preference ranking organization methods for enrichment evaluations (PROMETHEE) with proportional 2-tuple linguistic information. Finally, an example is given to illustrate the effectiveness of the proposed methods.
A comparative study of ranking methods, similarity measures and uncertainty measures for interval type-2 fuzzy sets Ranking methods, similarity measures and uncertainty measures are very important concepts for interval type-2 fuzzy sets (IT2 FSs). So far, there is only one ranking method for such sets, whereas there are many similarity and uncertainty measures. A new ranking method and a new similarity measure for IT2 FSs are proposed in this paper. All these ranking methods, similarity measures and uncertainty measures are compared based on real survey data and then the most suitable ranking method, similarity measure and uncertainty measure that can be used in the computing with words paradigm are suggested. The results are useful in understanding the uncertainties associated with linguistic terms and hence how to use them effectively in survey design and linguistic information processing.
General formulation of formal grammars By extracting the basic properties common to the formal grammars appeared in existing literatures, we develop a general formulation of formal grammars. We define a pseudo grammar and derive from it the well-known probabilistic, fuzzy grammars and so on. Moreover, several interesting grammars such as ⊔∗ grammars, ⊔ ⊓ grammars, ⊔ ⊓ grammars, composite B-fuzzy grammars, and mixed fuzzy grammars, which have never appeared in any other papers before, are derived.
Statistical timing based on incomplete probabilistic descriptions of parameter uncertainty Existing approaches to timing analysis under uncertainty are based on restrictive assumptions. Statistical STA techniques assume that the full probabilistic distribution of parameter uncertainty is available; in reality, the complete probabilistic description often cannot be obtained. In this paper, a new paradigm for parameter uncertainty description is proposed as a way to consistently and rigorously handle partially available descriptions of parameter uncertainty. The paradigm is based on a theory of interval probabilistic models that permit handling uncertainty that is described in a distribution-free mode - just via the range, the mean, and the variance. This permits effectively handling multiple real-life challenges, including imprecise and limited information about the distributions of process parameters, parameters coming from different populations, and the sources of uncertainty that are too difficult to handle via full probabilistic measures (e.g. on-chip supply voltage variation). Specifically, analytical techniques for bounding the distributions of probabilistic interval variables are proposed. Besides, a provably correct strategy for fast Monte Carlo simulation based on probabilistic interval variables is introduced. A path-based timing algorithm implementing the novel modeling paradigm, as well as handling the traditional variability descriptions, has been developed. The results indicate the proposed algorithm can improve the upper bound of the 90(th)-percentile circuit delay, on average, by 5.3% across the ISCAS'85 benchmark circuits, compared to the worst-case timing estimates that use only the interval information of the partially specified parameters.
SPARSE OPTIMIZATION WITH LEAST-SQUARES CONSTRAINTS The use of convex optimization for the recovery of sparse signals from incomplete or compressed data is now common practice. Motivated by the success of basis pursuit in recovering sparse vectors, new formulations have been proposed that take advantage of different types of sparsity. In this paper we propose an efficient algorithm for solving a general class of sparsifying formulations. For several common types of sparsity we provide applications, along with details on how to apply the algorithm, and experimental results.
Aging analysis at gate and macro cell level Aging, which can be regarded as a time-dependent variability, has until recently not received much attention in the field of electronic design automation. This is changing because increasing reliability costs threaten the continued scaling of ICs. We investigate the impact of aging effects on single combinatorial gates and present methods that help to reduce the reliability costs by accurately analyzing the performance degradation of aged circuits at gate and macro cell level.
Sensing increased image resolution using aperture masks We present a technique to construct increased-resolution images from multiple photos taken without moving the camera or the sensor. Like other super-resolution techniques, we capture and merge multiple images, but instead of moving the camera sensor by sub-pixel distances for each image, we change masks in the lens aperture and slightly defocus the lens. The resulting capture system is simpler, and tolerates modest mask registration errors well. We present a theoretical analysis of the camera and image merging method, show both simulated results and actual results from a crudely modified consumer camera, and compare its results to robust 'blind' methods that rely on uncontrolled camera displacements.
Modelling heterogeneity among experts in multi-criteria group decision making problems Heterogeneity in group decision making problems has been recently studied in the literature. Some instances of these studies include the use of heterogeneous preference representation structures, heterogeneous preference representation domains and heterogeneous importance degrees. On this last heterogeneity level, the importance degrees are associated to the experts regardless of what is being assessed by them, and these degrees are fixed through the problem. However, there are some situations in which the experts' importance degrees do not depend only on the expert. Sometimes we can find sets of heterogeneously specialized experts, that is, experts whose knowledge level is higher on some alternatives and criteria than it is on any others. Consequently, their importance degree should be established in accordance with what is being assessed. Thus, there is still a gap on heterogeneous group decision making frameworks to be studied. We propose a new fuzzy linguistic multi-criteria group decision making model which considers different importance degrees for each expert depending not only on the alternatives but also on the criterion which is taken into account to evaluate them.
Generalized Boolean Methods of Information Retrieval In most operational information retrieval systems the standard retrieval methods based on set theory and binary logic are used. These methods would be much more attractive if they could be extended to include the importance of various index terms in document representations and search request formulations, in addition to a weighting mechanism which could be applied to rank the retrieved documents. This observation has been widely recognized in the literature as such extended retrieval methods could provide the precision of a Boolean search and the advantages of a ranked output. However, a closer examination of all the reported work reveals that up to the present the only possible approach of sufficient consistency and rigorousness is that based on recently developed fuzzy set theory and fuzzy logic. As the concept of a fuzzy set is a generalization of the conventional notion of a set, the generalization of the information retrieval methods based on set theory and binary logic can be derived in a natural way. The present paper describes such generalized Boolean information retrieval methods. The presentation of each includes an outline of its advantages and disadvantages, and the relationships between each particular method and the corresponding standard information retrieval method based on set theory and binary logic are also discussed. It has been shown that these standard retrieval methods are particular cases of information retrieval methods based on the theory of fuzzy sets and fuzzy logic. The considerations concerning the information retrieval methods presented are illustrated by simple examples.
Sparse Matrix Recovery from Random Samples via 2D Orthogonal Matching Pursuit Since its emergence, compressive sensing (CS) has attracted many researchers' attention. In the CS, recovery algorithms play an important role. Basis pursuit (BP) and matching pursuit (MP) are two major classes of CS recovery algorithms. However, both BP and MP are originally designed for one-dimensional (1D) sparse signal recovery, while many practical signals are two-dimensional (2D), e.g. image, video, etc. To recover 2D sparse signals effectively, this paper develops the 2D orthogonal MP (2D-OMP) algorithm, which shares the advantages of low complexity and good performance. The 2D-OMP algorithm can be widely used in those scenarios involving 2D sparse signal processing, e.g. image/video compression, compressive imaging, etc.
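The 2D-OMP in the abstract above builds on the standard (1D) orthogonal matching pursuit loop: pick the dictionary atom most correlated with the residual, re-fit by least squares on the chosen support, and repeat. The sketch below is a minimal NumPy version of that generic 1D loop, not the authors' 2D variant; the dictionary A, sparsity level k, and the demo signal are illustrative assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Greedy Orthogonal Matching Pursuit: recover a k-sparse x from y ~= A @ x.

    This is the classical 1D OMP the abstract builds on; the paper's 2D-OMP
    applies the same greedy idea with separable row/column dictionaries.
    """
    m, n = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit the signal on the chosen support by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x

# Tiny demo: recover a 3-sparse vector from random Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 17, 63]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, k=3)
print(np.allclose(x_hat, x_true, atol=1e-8))
```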
1.12
0.024
0.02
0.006667
0.001429
0.000625
0.000015
0
0
0
0
0
0
0
An Algorithm for Intelligibility Prediction of Time–Frequency Weighted Noisy Speech In the development process of noise-reduction algorithms, an objective machine-driven intelligibility measure which shows high correlation with speech intelligibility is of great interest. Besides reducing time and costs compared to real listening experiments, an objective intelligibility measure could also help provide answers on how to improve the intelligibility of noisy unprocessed speech. In this paper, a short-time objective intelligibility measure (STOI) is presented, which shows high correlation with the intelligibility of noisy and time-frequency weighted noisy speech (e.g., resulting from noise reduction) of three different listening experiments. In general, STOI showed better correlation with speech intelligibility compared to five other reference objective intelligibility models. In contrast to other conventional intelligibility models which tend to rely on global statistics across entire sentences, STOI is based on shorter time segments (386 ms). Experiments indeed show that it is beneficial to take segment lengths of this order into account. In addition, a free Matlab implementation is provided.
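The published STOI algorithm uses a one-third-octave decomposition plus clipping and normalization steps that are not reproduced here; the toy sketch below (FFT size, hop, and segment handling are arbitrary assumptions) only illustrates the core idea the abstract describes, namely correlating clean and degraded spectral envelopes over roughly 386 ms segments.

```python
import numpy as np

def short_time_envelope_correlation(clean, degraded, sr, seg_ms=386, n_fft=512, hop=256):
    """Toy illustration of STOI's core idea: average correlation between clean
    and degraded spectral envelopes over short (~386 ms) segments.

    The real STOI uses a 1/3-octave filterbank, clipping and normalization
    steps omitted here; a reference Matlab implementation accompanies the paper.
    """
    def spectrogram(x):
        frames = np.lib.stride_tricks.sliding_window_view(x, n_fft)[::hop]
        return np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1))

    X, Y = spectrogram(clean), spectrogram(degraded)
    frames_per_seg = max(1, int(seg_ms / 1000 * sr / hop))
    scores = []
    for start in range(0, X.shape[0] - frames_per_seg + 1, frames_per_seg):
        for b in range(X.shape[1]):          # correlate per frequency band
            x = X[start:start + frames_per_seg, b]
            y = Y[start:start + frames_per_seg, b]
            if x.std() > 0 and y.std() > 0:
                scores.append(np.corrcoef(x, y)[0, 1])
    return float(np.mean(scores))

# Toy usage: a clean tone vs. a noisy copy at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(sr)
print(short_time_envelope_correlation(clean, noisy, sr))
```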
Monaural speech segregation based on fusion of source-driven with model-driven techniques In this paper, by exploiting prevalent methods in speech coding and synthesis, a new single-channel speech segregation technique is presented. The technique integrates a model-driven method with a source-driven method to take advantage of both individual approaches and reduce their pitfalls significantly. We apply harmonic modelling in which the pitch and spectrum envelope are the main components for the analysis and synthesis stages. Pitch values of two speakers are obtained by using a source-driven method. The spectrum envelope is obtained by using a new model-driven technique consisting of four components: a trained codebook of the vector quantized envelopes (VQ-based separation), a mixture-maximum approximation (MIXMAX), a minimum mean square error estimator (MMSE), and a harmonic synthesizer. In contrast with previous model-driven techniques, this approach is speaker independent and can separate out the unvoiced regions as well as suppress the crosstalk effect, both of which are drawbacks of source-driven or equivalently computational auditory scene analysis (CASA) models. We compare our fused model with both model- and source-driven techniques by conducting subjective and objective experiments. The results show that although for the speaker-dependent case model-based separation delivers the best quality, for a speaker-independent scenario the integrated model outperforms the individual approaches. This result supports the idea that the human auditory system takes on both grouping cues (e.g., pitch tracking) and a priori knowledge (e.g., trained quantized envelopes) to segregate speech signals.
CASA-Based Robust Speaker Identification Conventional speaker recognition systems perform poorly under noisy conditions. Inspired by auditory perception, computational auditory scene analysis (CASA) typically segregates speech by producing a binary time–frequency mask. We investigate CASA for robust speaker identification. We first introduce a novel speaker feature, gammatone frequency cepstral coefficient (GFCC), based on an auditory periphery model, and show that this feature captures speaker characteristics and performs substantially better than conventional speaker features under noisy conditions. To deal with noisy speech, we apply CASA separation and then either reconstruct or marginalize corrupted components indicated by a CASA mask. We find that both reconstruction and marginalization are effective. We further combine the two methods into a single system based on their complementary advantages, and this system achieves significant performance improvements over related systems under a wide range of signal-to-noise ratios.
A sparse representation approach for perceptual quality improvement of separated speech Speech separation based on time-frequency masking has been shown to improve intelligibility of speech signals corrupted by noise. A perceived weakness of binary masking is the quality of separated speech. In this paper, an approach for improving the perceptual quality of separated speech from binary masking is proposed. Our approach consists of two stages, where a binary mask is generated in the first stage that effectively performs speech separation. In the second stage, a sparse-representation approach is used to represent the separated signal by a linear combination of Short-time Fourier Transform (STFT) magnitudes that are generated from a clean speech dictionary. Overlap-and-add synthesis is then used to generate an estimate of the speech signal. The performance of the proposed approach is evaluated with the Perceptual Evaluation of Speech Quality (PESQ), which is a standard objective speech quality measure. The proposed algorithm offers considerable improvements in speech quality over binary-masked noisy speech and other reconstruction approaches.
Sparse representation for color image restoration. Sparse representations of signals have drawn considerable interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In particular, the design of well adapted dictionaries for images has been a major challenge. The K-SVD has been recently proposed for this task and shown to perform very well for various grayscale image processing tasks. In this paper, we address the problem of learning dictionaries for color images and extend the K-SVD-based grayscale image denoising algorithm that appears in. This work puts forward ways for handling nonhomogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting, as demonstrated in this paper.
A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods. Our taxonomy is designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can easily be extended to include new algorithms. We have also produced several new multi-frame stereo data sets with ground truth and are making both the code and data sets available on the Web. Finally, we include a comparative evaluation of a large set of today's best-performing stereo algorithms.
Singularity detection and processing with wavelets The mathematical characterization of singularities with Lipschitz exponents is reviewed. Theorems that estimate local Lipschitz exponents of functions from the evolution across scales of their wavelet transform are reviewed. It is then proven that the local maxima of the wavelet transform modulus detect the locations of irregular structures and provide numerical procedures to compute their Lipschitz exponents. The wavelet transform of singularities with fast oscillations has a particular behavior that is studied separately. The local frequency of such oscillations is measured from the wavelet transform modulus maxima. It has been shown numerically that one- and two-dimensional signals can be reconstructed, with a good approximation, from the local maxima of their wavelet transform modulus. As an application, an algorithm is developed that removes white noises from signals by analyzing the evolution of the wavelet transform maxima across scales. In two dimensions, the wavelet transform maxima indicate the location of edges in images.
Cubature Kalman Filters In this paper, we present a new nonlinear filter for high-dimensional state estimation, which we have named the cubature Kalman filter (CKF). The heart of the CKF is a spherical-radial cubature rule, which makes it possible to numerically compute multivariate moment integrals encountered in the nonlinear Bayesian filter. Specifically, we derive a third-degree spherical-radial cubature rule that provides a set of cubature points scaling linearly with the state-vector dimension. The CKF may therefore provide a systematic solution for high-dimensional nonlinear filtering problems. The paper also includes the derivation of a square-root version of the CKF for improved numerical stability. The CKF is tested experimentally in two nonlinear state estimation problems. In the first problem, the proposed cubature rule is used to compute the second-order statistics of a nonlinearly transformed Gaussian random variable. The second problem addresses the use of the CKF for tracking a maneuvering aircraft. The results of both experiments demonstrate the improved performance of the CKF over conventional nonlinear filters.
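The first experiment in the CKF abstract, computing second-order statistics of a nonlinearly transformed Gaussian, can be illustrated with the third-degree spherical-radial cubature rule it describes: 2n equally weighted points placed at the columns of sqrt(n)·[+I, −I] in the whitened space. The sketch below is a minimal stand-alone version of that transform (not the full filter); the polar-to-Cartesian example and its numbers are assumptions.

```python
import numpy as np

def cubature_transform(f, mean, cov):
    """Third-degree spherical-radial cubature rule: approximate the mean and
    covariance of y = f(x) for Gaussian x, using 2n equally weighted points.

    Points are mean + L @ (sqrt(n) * (+/- e_i)), with L a Cholesky factor of cov.
    """
    n = mean.size
    L = np.linalg.cholesky(cov)
    # 2n cubature points: columns of [+I, -I] scaled by sqrt(n).
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])      # shape (n, 2n)
    points = mean[:, None] + L @ xi                            # shape (n, 2n)
    Y = np.apply_along_axis(f, 0, points)                      # propagate each point
    y_mean = Y.mean(axis=1)                                    # equal weights 1/(2n)
    Yc = Y - y_mean[:, None]
    y_cov = (Yc @ Yc.T) / (2 * n)
    return y_mean, y_cov

# Example: polar-to-Cartesian conversion of a Gaussian (range, bearing) state.
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
m, P = np.array([10.0, 0.5]), np.diag([0.1, 0.01])
print(cubature_transform(f, m, P))
```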
Efficient approximation of random fields for numerical applications This article is dedicated to the rapid computation of separable expansions for the approximation of random fields. We consider approaches based on techniques from the approximation of non-local operators on the one hand and based on the pivoted Cholesky decomposition on the other hand. Especially, we provide an a posteriori error estimate for the pivoted Cholesky decomposition in terms of the trace. Numerical examples are provided to validate and quantify the presented methods. Copyright (c) 2015 John Wiley & Sons, Ltd.
Coding Algorithms for 3DTV—A Survey Research efforts on 3DTV technology have been strengthened worldwide recently, covering the whole media processing chain from capture to display. Different 3DTV systems rely on different 3D scene representations that integrate various types of data. Efficient coding of these data is crucial for the success of 3DTV. Compression of pixel-type data including stereo video, multiview video, and associated depth or disparity maps extends available principles of classical video coding. Powerful algorithms and open international standards for multiview video coding and coding of video plus depth data are available and under development, which will provide the basis for introduction of various 3DTV systems and services in the near future. Compression of 3D mesh models has also reached a high level of maturity. For static geometry, a variety of powerful algorithms are available to efficiently compress vertices and connectivity. Compression of dynamic 3D geometry is currently a more active field of research. Temporal prediction is an important mechanism to remove redundancy from animated 3D mesh sequences. Error resilience is important for transmission of data over error prone channels, and multiple description coding (MDC) is a suitable way to protect data. MDC of still images and 2D video has already been widely studied, whereas multiview video and 3D meshes have been addressed only recently. Intellectual property protection of 3D data by watermarking is a pioneering research area as well. The 3D watermarking methods in the literature are classified into three groups, considering the dimensions of the main components of scene representations and the resulting components after applying the algorithm. In general, 3DTV coding technology is maturating. Systems and services may enter the market in the near future. However, the research area is relatively young compared to coding of other types of media. Therefore, there is still a lot of room for improvement and new development of algorithms.
Compressive sampling for streaming signals with sparse frequency content Compressive sampling (CS) has emerged as a significant signal processing framework to acquire and reconstruct sparse signals at rates significantly below the Nyquist rate. However, most of the CS development to date has focused on finite-length signals and representations. In this paper we discuss a streaming CS framework and greedy reconstruction algorithm, the Streaming Greedy Pursuit (SGP), to reconstruct signals with sparse frequency content. Our proposed sampling framework and the SGP are explicitly intended for streaming applications and signals of unknown length. The measurement framework we propose is designed to be causal and implementable using existing hardware architectures. Furthermore, our reconstruction algorithm provides specific computational guarantees, which makes it appropriate for real-time system implementations. Our experimental results on very long signals demonstrate the good performance of the SGP and validate our approach.
Opposites and Measures of Extremism in Concepts and Constructs We discuss the distinction between different types of opposites, i.e. negation and antonym, in terms of their representation by fuzzy subsets. The idea of a construct in terms of Kelly's theory of personal constructs is discussed. A measure of the extremism of a group of elements with respect to a concept and its negation, and with respect to a concept and its antonym is introduced.
Fuzzy concepts and formal methods: some illustrative examples It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However, some system problems, particularly those drawn from the information systems (IS) problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper identifies the problem types of interest and argues that they are characterised by uncertainty and imprecision. It suggests fuzzy set theory as a useful formalism for modelling aspects of this imprecision. The paper illustrates how a fuzzy logic toolkit for Z can be applied to such problem domains. Several examples are presented illustrating the representation of imprecise concepts as fuzzy sets and relations, and soft pre-conditions and system requirements as a series of linguistically quantified propositions.
Subjective Quality Metric For 3d Video Services Three-dimensional (3D) video service is expected to be introduced as a next-generation television service. Stereoscopic video is composed of two 2D video signals for the left and right views, and these 2D video signals are encoded. Video quality between the left and right views is not always consistent because, for example, each view is encoded at a different bit rate. As a result, the video quality difference between the left and right views degrades the quality of stereoscopic video. However, these characteristics have not been thoroughly studied or modeled. Therefore, it is necessary to better understand how the video quality difference affects stereoscopic video quality and to model the video quality characteristics. To do that, we conducted subjective quality assessments to derive subjective video quality characteristics. The characteristics showed that 3D video quality was affected by the difference in video quality between the left and right views, and that when the difference was small, 3D video quality correlated with the highest 2D video quality of the two views. We modeled these characteristics as a subjective quality metric using a training data set. Finally, we verified the performance of our proposed model by applying it to unknown data sets.
1.071111
0.08
0.08
0.066667
0.004211
0
0
0
0
0
0
0
0
0
Order-theoretic, topological, categorical redundancies of interval-valued sets, grey sets, vague sets, interval-valued "intuitionistic" sets, "intuitionistic" fuzzy sets and topologies This paper demonstrates two meta-mathematical propositions concerning the increasingly popular "intuitionistic" (= vague) approaches to fuzzy sets and fuzzy topology, as well as the closely related interval-valued (= grey) sets and interval-valued "intuitionistic" sets: (1) the term "intuitionistic" in these contexts is historically inappropriate given the standard mathematical usage of "intuitionistic"; and (2), at every level of existence (powerset level, topological fibre level, categorical level), interval-valued sets, interval-valued "intuitionistic" sets, and "intuitionistic" fuzzy sets and fuzzy topologies are redundant and represent unnecessarily complicated, strictly special subcases of standard fixed-basis set theory and topology. It therefore follows that theoretical workers should stop working in these restrictive and complicated programs and instead turn their efforts to substantial problems in the simpler and more general fixed-basis and variable-basis set theory and topology, while applied workers should carefully document the need or appropriateness of interval-valued or "intuitionistic" notions in applications.
Intuitionistic fuzzy sets: past, present and future Remarks on history, theory, and applications of intuitionistic fuzzy sets are given. Some open problems are introduced.
Generalized rough sets based on relations Rough set theory has been proposed by Pawlak as a tool for dealing with the vagueness and granularity in information systems. The core concepts of classical rough sets are lower and upper approximations based on equivalence relations. This paper studies arbitrary binary relation based generalized rough sets. In this setting, a binary relation can generate a lower approximation operation and an upper approximation operation, but some of common properties of classical lower and upper approximation operations are no longer satisfied. We investigate conditions for a relation under which these properties hold for the relation based lower and upper approximation operations.
On Three Types of Covering-Based Rough Sets Rough set theory is a useful tool for data mining. It is based on equivalence relations and has been extended to covering-based generalized rough set. This paper studies three kinds of covering generalized rough sets for dealing with the vagueness and granularity in information systems. First, we examine the properties of approximation operations generated by a covering in comparison with those of the Pawlak's rough sets. Then, we propose concepts and conditions for two coverings to generate an identical lower approximation operation and an identical upper approximation operation. After the discussion on the interdependency of covering lower and upper approximation operations, we address the axiomization issue of covering lower and upper approximation operations. In addition, we study the relationships between the covering lower approximation and the interior operator and also the relationships between the covering upper approximation and the closure operator. Finally, this paper explores the relationships among these three types of covering rough sets.
Lattices of fuzzy sets and bipolar fuzzy sets, and mathematical morphology Mathematical morphology is based on the algebraic framework of complete lattices and adjunctions, which endows it with strong properties and allows for multiple extensions. In particular, extensions to fuzzy sets of the main morphological operators, such as dilation and erosion, can be done while preserving all properties of these operators. Another extension concerns bipolar fuzzy sets, where both positive information and negative information are handled, along with their imprecision. We detail these extensions from the point of view of the underlying lattice structure. In the case of bipolarity, its two-component nature raises the question of defining a proper partial ordering. In this paper, we consider Pareto (component-wise) and lexicographic orderings.
Possibility Theory in Constraint Satisfaction Problems: Handling Priority, Preference and Uncertainty In classical Constraint Satisfaction Problems (CSPs) knowledge is embedded in a set of hard constraints, each one restricting the possible values of a set of variables. However, constraints in real-world problems are seldom hard, and CSPs are often idealizations that do not account for the preference among feasible solutions. Moreover, some constraints may have priority over others. Lastly, constraints may involve uncertain parameters. This paper advocates the use of fuzzy sets and possibility theory as a realistic approach for the representation of these three aspects. Fuzzy constraints encompass both preference relations among possible instantiations and priorities among constraints. In a Fuzzy Constraint Satisfaction Problem (FCSP), a constraint is satisfied to a degree (rather than satisfied or not satisfied) and the acceptability of a potential solution becomes a gradual notion. Even if the FCSP is partially inconsistent, best instantiations are provided owing to the relaxation of some constraints. Fuzzy constraints are thus flexible. CSP notions of consistency and k-consistency can be extended to this framework and the classical algorithms used in CSP resolution (e.g., tree search and filtering) can be adapted without losing much of their efficiency. Most classical theoretical results remain applicable to FCSPs. In the paper, various types of constraints are modelled in the same framework. The handling of uncertain parameters is carried out in the same setting because possibility theory can account for both preference and uncertainty. The presence of uncertain parameters leads to ill-defined CSPs, where the set of constraints which defines the problem is not precisely known.
Advances in type-2 fuzzy sets and systems In this state-of-the-art paper, important advances that have been made during the past five years for both general and interval type-2 fuzzy sets and systems are described. Interest in type-2 subjects is worldwide and touches on a broad range of applications and many interesting theoretical topics. The main focus of this paper is on the theoretical topics, with descriptions of what they are, what has been accomplished, and what remains to be done.
Extended triangular norms The paper is devoted to classical t-norms extended to operations on fuzzy quantities in accordance with the generalized Zadeh extension principle. Such extended t-norms are used for calculating intersection of type-2 fuzzy sets. Analytical expressions for membership functions of some extended t-norms are derived assuming special classes of fuzzy quantities, i.e., fuzzy truth intervals or fuzzy truth numbers. The possibility of applying these results in the construction of type-2 adaptive network fuzzy inference systems is illustrated on several examples.
Entropy and similarity measure of Atanassov’s intuitionistic fuzzy sets and their application to pattern recognition based on fuzzy measures In this study, we first examine entropy and similarity measure of Atanassov’s intuitionistic fuzzy sets, and define a new entropy. Meanwhile, a construction approach to get the similarity measure of Atanassov’s intuitionistic fuzzy sets is introduced, which is based on entropy. Since the independence of elements in a set is usually violated, it is not suitable to aggregate the values for patterns by additive measures. Based on the given entropy and similarity measure, we study their application to Atanassov’s intuitionistic fuzzy pattern recognition problems under fuzzy measures, where the interactions between features are considered. To overall reflect the interactive characteristics between them, we define three Shapley-weighted similarity measures. Furthermore, if the information about the weights of features is incompletely known, models for the optimal fuzzy measure on feature set are established. Moreover, an approach to pattern recognition under Atanassov’s intuitionistic fuzzy environment is developed.
A Model Based On Fuzzy Linguistic Information To Evaluate The Quality Of Digital Libraries The Web is changing information access processes and it is one of the most important information media. Thus, developments on the Web are having a great influence over the development of other information access instruments such as digital libraries. As digital libraries are developed to satisfy user needs, user satisfaction is essential for their success. The aim of this paper is to present a model based on fuzzy linguistic information to evaluate the quality of digital libraries. The quality evaluation of digital libraries is defined using users' perceptions on the quality of digital services provided through their Websites. We assume a fuzzy linguistic modeling to represent the users' perception and apply automatic tools of fuzzy computing with words based on the LOWA and LWA operators to compute global quality evaluations of digital libraries. Additionally, we show an example of application of this model where three Spanish academic digital libraries are evaluated by fifty users.
Discovering fuzzy association rules using fuzzy partition methods Fuzzy association rules described by the natural language are well suited for the thinking of human subjects and will help to increase the flexibility for supporting users in making decisions or designing the fuzzy systems. In this paper, a new algorithm named fuzzy grids based rules mining algorithm (FGBRMA) is proposed to generate fuzzy association rules from a relational database. The proposed algorithm consists of two phases: one to generate the large fuzzy grids, and the other to generate the fuzzy association rules. A numerical example is presented to illustrate a detailed process for finding the fuzzy association rules from a specified database, demonstrating the effectiveness of the proposed algorithm.
On the fractional covering number of hypergraphs The fractional covering number r* of a hypergraph H(V, E) is defined to be the minimum
Optimization objectives and models of variation for statistical gate sizing This paper approaches statistical optimization by examining gate delay variation models and optimization objectives. Most previous work on statistical optimization has focused exclusively on the optimization algorithms without considering the effects of the variation models and objective functions. This work empirically derives a simple variation model that is then used to optimize for robustness. Optimal results from example circuits are used to study the effect of the statistical objective function on parametric yield.
Fuzzy control of technological processes in APL2 A fuzzy control system has been developed to solve problems which are difficult or impossible to control with a proportional integral differential approach. According to system constraints, the fuzzy controller changes the importance of the rules and offers suitable variable values. The fuzzy controller testbed consists of simulator code to simulate the process dynamics of a production and distribution system and the fuzzy controller itself. The results of our tests confirm that this approach successfully reflects the experience gained from skilled manual operations. The simulation and control software was developed in APL2/2 running under OS/2. Several features of this product, especially multitasking, the ability to run AP124 and AP207 windows concurrently, and the ability to run concurrent APL2 sessions and interchange data among them were used extensively in the simulation process.
1.029182
0.028827
0.028571
0.00635
0.003818
0.002428
0.000421
0.000078
0.000014
0.000003
0
0
0
0
Concepts of Net Theory
Mathematical Foundations of Computer Science 1989, MFCS'89, Porabka-Kozubnik, Poland, August 28 - September 1, 1989, Proceedings
Concurrent Behaviour: Sequences, Processes and Axioms Two ways of describing the behaviour of concurrent systems have widely been suggested: arbitrary interleaving and partial orders. Sometimes the latter has been claimed superior because concurrency is represented in a "true" way; on the other hand, some authors have claimed that the former is sufficient for all practical purposes.
On partial languages
An algorithm for pronominal anaphora resolution This paper presents an algorithm for identifying the noun phrase antecedents of third person pronouns and lexical anaphors (reflexives and reciprocals). The algorithm applies to the syntactic representations generated by McCord's Slot Grammar parser and relies on salience measures derived from syntactic structure and a simple dynamic model of attentional state. Like the parser, the algorithm is implemented in Prolog. The authors have tested it extensively on computer manual texts and conducted a blind test on manual text containing 360 pronoun occurrences. The algorithm successfully identifies the antecedent of the pronoun for 86% of these pronoun occurrences. The relative contributions of the algorithm's components to its overall success rate in this blind test are examined. Experiments were conducted with an enhancement of the algorithm that contributes statistically modelled information concerning semantic and real-world relations to the algorithm's decision procedure. Interestingly, this enhancement only marginally improves the algorithm's performance (by 2%). The algorithm is compared with other approaches to anaphora resolution that have been proposed in the literature. In particular, the search procedure of Hobbs' algorithm was implemented in the Slot Grammar framework and applied to the sentences in the blind test set. The authors' algorithm achieves a higher rate of success (4%) than Hobbs' algorithm. The relation of the algorithm to the centering approach is discussed, as well as to models of anaphora resolution that invoke a variety of informational factors in ranking antecedent candidates.
Statistical Timing Analysis Considering Spatial Correlations using a Single Pert-Like Traversal We present an efficient statistical timing analysis algorithm that predicts the probability distribution of the circuit delay while incorporating the effects of spatial correlations of intra-die parameter variations, using a method based on principal component analysis. The method uses a PERT-like circuit graph traversal, and has a run-time that is linear in the number of gates and interconnects, as well as the number of grid partitions used to model spatial correlations. On average, the mean and standard deviation values computed by our method have errors of 0.2% and 0.9%, respectively, in comparison with a Monte Carlo simulation.
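A common way such PCA-based SSTA approaches are described is via a first-order canonical delay form over independent principal components: adding delays along a path adds the sensitivity coefficients, and the standard deviation follows from the sum of squared coefficients. The sketch below shows only that "sum" operation under those assumptions; the statistical "max" needed at converging nodes (typically via moment matching) is deliberately omitted, and all numbers are illustrative.

```python
import numpy as np

class CanonicalDelay:
    """First-order canonical delay d = mean + a . p, with p independent N(0,1)
    principal components (e.g., obtained by PCA of spatially correlated
    parameters). Only the path 'sum' operation is sketched; the statistical
    'max' at converging nodes is omitted here.
    """
    def __init__(self, mean, a):
        self.mean = float(mean)
        self.a = np.asarray(a, dtype=float)

    def __add__(self, other):
        # Adding two delays adds means and PC sensitivities component-wise.
        return CanonicalDelay(self.mean + other.mean, self.a + other.a)

    @property
    def std(self):
        # PCs are independent and unit-variance, so sigma^2 = sum(a_i^2).
        return float(np.sqrt(np.sum(self.a ** 2)))

# Path delay of three gates sharing two principal components (made-up values).
g1 = CanonicalDelay(100.0, [5.0, 1.0])
g2 = CanonicalDelay(80.0, [3.0, 2.0])
g3 = CanonicalDelay(120.0, [4.0, 0.5])
path = g1 + g2 + g3
print(path.mean, path.std)
```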
Fuzzy logic in control systems: fuzzy logic controller. I.
Compressed Remote Sensing of Sparse Objects The linear inverse source and scattering problems are studied from the perspective of compressed sensing. By introducing the sensor as well as target ensembles, the maximum number of recoverable targets is proved to be at least proportional to the number of measurement data modulo a log-square factor with overwhelming probability. Important contributions include the discoveries of the threshold aperture, consistent with the classical Rayleigh criterion, and the incoherence effect induced by random antenna locations. The predictions of theorems are confirmed by numerical simulations.
A Bayesian approach to image expansion for improved definition. Accurate image expansion is important in many areas of image analysis. Common methods of expansion, such as linear and spline techniques, tend to smooth the image data at edge regions. This paper introduces a method for nonlinear image expansion which preserves the discontinuities of the original image, producing an expanded image with improved definition. The maximum a posteriori (MAP) estimation techniques that are proposed for noise-free and noisy images result in the optimization of convex functionals. The expanded images produced from these methods will be shown to be aesthetically and quantitatively superior to images expanded by the standard methods of replication, linear interpolation, and cubic B-spline expansion.
Aggregation Using the Linguistic Weighted Average and Interval Type-2 Fuzzy Sets The focus of this paper is the linguistic weighted average (LWA), where the weights are always words modeled as interval type-2 fuzzy sets (IT2 FSs), and the attributes may also (but do not have to) be words modeled as IT2 FSs; consequently, the output of the LWA is an IT2 FS. The LWA can be viewed as a generalization of the fuzzy weighted average (FWA) where the type-1 fuzzy inputs are replaced by IT2 FSs. This paper presents the theory, algorithms, and an application of the LWA. It is shown that finding the LWA can be decomposed into finding two FWAs. Since the LWA can model more uncertainties, it should have wide applications in distributed and hierarchical decision-making.
On Linear and Semidefinite Programming Relaxations for Hypergraph Matching The hypergraph matching problem is to find a largest collection of disjoint hyperedges in a hypergraph. This is a well-studied problem in combinatorial optimization and graph theory with various applications. The best known approximation algorithms for this problem are all local search algorithms. In this paper we analyze different linear and semidefinite programming relaxations for the hypergraph matching problem, and study their connections to the local search method. Our main results are the following: • We consider the standard linear programming relaxation of the problem. We provide an algorithmic proof of a result of Füredi, Kahn and Seymour, showing that the integrality gap is exactly k - 1 + 1/k for k-uniform hypergraphs, and is exactly k - 1 for k-partite hypergraphs. This yields an improved approximation algorithm for the weighted 3-dimensional matching problem. Our algorithm combines the use of the iterative rounding method and the fractional local ratio method, showing a new way to round linear programming solutions for packing problems. • We study the strengthening of the standard LP relaxation by local constraints. We show that, even after a linear number of rounds of the Sherali-Adams lift-and-project procedure on the standard LP relaxation, there are k-uniform hypergraphs with integrality gap at least k - 2. On the other hand, we prove that for every constant k, there is a strengthening of the standard LP relaxation by only a polynomial number of constraints, with integrality gap at most (k + 1)/2 for k-uniform hypergraphs. The construction uses a result in extremal combinatorics. • We consider the standard semidefinite programming relaxation of the problem. We prove that the Lovász ϑ-function provides an SDP relaxation with integrality gap at most (k + 1)/2. The proof gives an indirect way (not by a rounding algorithm) to bound the ratio between any local optimal solution and any optimal SDP solution. This shows a new connection between local search and linear and semidefinite programming relaxations.
Preferences and their application in evolutionary multiobjective optimization The paper describes a new preference method and its use in multiobjective optimization. These preferences are developed with a goal to reduce the cognitive overload associated with the relative importance of a certain criterion within a multiobjective design environment involving large numbers of objectives. Their successful integration with several genetic-algorithm-based design search and optimi...
New Type-2 Rule Ranking Indices for Designing Parsimonious Interval Type-2 Fuzzy Logic Systems In this paper, we propose two novel indices for type-2 fuzzy rule ranking to identify the most influential fuzzy rules in designing type-2 fuzzy logic systems, and name them as R-values and c-values of fuzzy rules separately. The R-values of type-2 fuzzy rules are obtained by applying QR decomposition in which there is no need to estimate a rank as required in the SVD-QR with column pivoting algorithm. The c-values of type-2 fuzzy rules are suggested to rank rules based on the effects of rule consequents. Experimental results on a signal recovery problem have shown that by using the proposed indices the most influential type-2 fuzzy rules can be effectively selected to construct parsimonious type-2 fuzzy models while the system performances are kept at a satisfied level.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.205216
0.205216
0.205216
0.068405
0
0
0
0
0
0
0
0
0
0
Proceedings of the 45th Design Automation Conference, DAC 2008, Anaheim, CA, USA, June 8-13, 2008
Estimating path delay distribution considering coupling noise Accurately estimating critical path delays is extremely important for yield optimization and for path selection in delay testing. It is well known that dynamic effects such as coupling noise can significantly affect critical path delays. In traditional static timing analysis, the coupling effect is incorporated by estimating the switching window overlaps between aggressor and victim and then assuming a constant (worst case) coupling factor if any overlap is present. However, in path-based statistical timing analysis, using a constant coupling factor can overestimate the mean delay while underestimating the delay variance. In this paper, we propose a technique to estimate the dynamic variation in path delay caused by coupling noise. We treat the effective coupling capacitance as a random variable that varies as a function of the relative signal arrival times between victim and aggressor nodes. A modeling technique to estimate the capacitance variation is shown and a framework that gives the relative signal arrival time distribution at the victim nodes is developed.
An Evaluation Method of the Number of Monte Carlo STA Trials for Statistical Path Delay Analysis We present an evaluation method for estimating the lower bound number of Monte Carlo STA trials required to obtain at least one sample which falls within top-k % of its parent population. The sample can be used to ensure that target designs are timing-error free with a predefined probability using the minimum computational cost. The lower bound number is represented as a closed-form formula which is general enough to be applied to other verifications. For validation, Monte Carlo STA was carried out on various benchmark data including ISCAS circuits. The minimum number of Monte Carlo runs determined using the proposed method successfully extracted one or more top-k % delay instances.
A unified framework for statistical timing analysis with coupling and multiple input switching As technology scales to smaller dimensions, increasing process variations, coupling induced delay variations and multiple input switching effects make timing verification extremely challenging. In this paper, we establish a theoretical framework for statistical timing analysis with coupling and multiple input switching. We prove the convergence of our proposed iterative approach and discuss implementation issues under the assumption of a Gaussian distribution for the parameters of variation. A statistical timer based on our proposed approach is developed and experimental results are presented for the ISCAS benchmarks. We juxtapose our timer with a single pass, non iterative statistical timer that does not consider the mutual dependence of coupling with timing and another statistical timer that handles coupling deterministically. Monte Carlo simulations reveal a distinct gain (up to 24%) in accuracy by our approach in comparison to the others mentioned.
IES3: a fast integral equation solver for efficient 3-dimensional extraction Integral equation techniques are often used to extract models of integrated circuit structures. This extraction involves solving a dense system of linear equations, and using direct solution methods is prohibitive for large problems. In this paper, we present IES³ (pronounced "ice cube"), a fast Integral Equation Solver for three-dimensional problems with arbitrary kernels. Extraction methods based on IES³ are substantially more efficient than existing multipole-based approaches.
Correlation-preserved non-Gaussian statistical timing analysis with quadratic timing model Recent study shows that the existing first order canonical timing model is not sufficient to represent the dependency of the gate delay on the variation sources when processing and operational variations become more and more significant. Due to the nonlinearity of the mapping from variation sources to the gate/wire delay, the distribution of the delay is no longer Gaussian even if the variation sources are normally distributed. A novel quadratic timing model is proposed to capture the non-linearity of the dependency of gate/wire delays and arrival times on the variation sources. Systematic methodology is also developed to evaluate the correlation and distribution of the quadratic timing model. Based on these, a novel statistical timing analysis algorithm is proposed which retains the complete correlation information during timing analysis and has the same computation complexity as the algorithm based on the canonical timing model. Tested on the ISCAS circuits, the proposed algorithm shows 10× accuracy improvement over the existing first order algorithm while no significant extra runtime is needed.
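The paper propagates quadratic delay forms analytically; the sketch below only illustrates, by Monte Carlo sampling, why a quadratic dependence d = d0 + aᵀx + xᵀBx on Gaussian variation sources x produces a skewed, non-Gaussian delay, which is the motivation stated in the abstract. All coefficients are made up for illustration.

```python
import numpy as np

# Quadratic delay model d = d0 + a^T x + x^T B x with x ~ N(0, I) variation
# sources. A purely linear (canonical) model would keep d Gaussian; sampling
# the quadratic model shows a non-zero skew, i.e. a non-Gaussian delay.
rng = np.random.default_rng(1)
d0 = 100.0
a = np.array([4.0, 2.0, 1.0])
B = np.array([[0.8, 0.1, 0.0],
              [0.1, 0.5, 0.2],
              [0.0, 0.2, 0.3]])
x = rng.standard_normal((200_000, 3))
d = d0 + x @ a + np.einsum('ij,jk,ik->i', x, B, x)   # per-sample x^T B x

mean, std = d.mean(), d.std()
skew = np.mean(((d - mean) / std) ** 3)
print(f"mean={mean:.2f}  std={std:.2f}  skewness={skew:.2f}  (non-zero => non-Gaussian)")
```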
ARMS - automatic residue-minimization based sampling for multi-point modeling techniques This paper describes an automatic methodology for optimizing sample point selection for using in the framework of model order reduction (MOR). The procedure, based on the maximization of the dimension of the subspace spanned by the samples, iteratively selects new samples in an efficient and automatic fashion, without computing the new vectors and with no prior assumptions on the system behavior. The scheme is general, and valid for single and multiple dimensions, with applicability on rational nominal MOR approaches, and on multi-dimensional sampling based parametric MOR methodologies. The paper also presents an integrated algorithm for multi-point MOR, with automatic sample and order selection based on the transfer function error estimation. Results on a variety of industrial examples demonstrate the accuracy and robustness of the technique.
Interval-Valued Reduced-Order Statistical Interconnect Modeling We show how advances in the handling of correlated interval representations of range uncertainty can be used to approximate the mass of a probability density function as it moves through numerical operations and, in particular, to predict the impact of statistical manufacturing variations on linear interconnect. We represent correlated statistical variations in resistance-inductance-capacitance parameters as sets of correlated intervals and show how classical model-order reduction methods - asymptotic waveform evaluation and passive reduced-order interconnect macromodeling algorithm - can be retargeted to compute interval-valued, rather than scalar-valued, reductions. By applying a simple statistical interpretation and sampling to the resulting compact interval-valued model, we can efficiently estimate the impact of variations on the original circuit. Results show that the technique can predict mean delay and standard deviation with errors between 5% and 10% for correlated parameter variations up to 35%.
The effective dimension and quasi-Monte Carlo integration Quasi-Monte Carlo (QMC) methods are successfully used for high-dimensional integrals arising in many applications. To understand this success, the notion of effective dimension has been introduced. In this paper, we analyse certain function classes commonly used in QMC methods for empirical and theoretical investigations and show that the problem of determining their effective dimension is analytically tractable. For arbitrary square integrable functions, we propose a numerical algorithm to compute their truncation dimension. We also consider some realistic problems from finance: the pricing of options. We study the special structure of the corresponding integrands by determining their effective dimension and show how large the effective dimension can be reduced and how much the accuracy of QMC estimates can be improved by using the Brownian bridge and the principal component analysis techniques. A critical discussion of the influence of these techniques on the QMC error is presented. The connection between the effective dimension and the performance of QMC methods is demonstrated by examples.
A novel criticality computation method in statistical timing analysis The impact of process variations increases as technology scales to the nanometer region. Under large process variations, the path and arc/node criticality [18] provide effective metrics in guiding circuit optimization. To facilitate the criticality computation considering the correlation, we define the critical region for the path and arc/node in a timing graph, and propose an efficient method to compute the criticality for paths and arcs/nodes simultaneously by a single breadth-first graph traversal during the backward propagation. Instead of choosing a set of paths for analysis prematurely, we develop a new property of the path criticality to prune paths with low criticality at very early stages, so that our path criticality computation method has linear complexity with respect to the number of timing edges in a timing graph. To improve the computation accuracy, cutset and path criticality properties are exploited to calibrate the computation results. The experimental results on ISCAS benchmark circuits show that our criticality computation method can achieve high accuracy with fast speed.
A training algorithm for optimal margin classifiers A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of the classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms.
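The maximum-margin classifier described in this abstract is what modern libraries expose as a support vector machine. As a minimal sketch (assuming scikit-learn is installed, and using synthetic data), the snippet below trains a linear SVC and inspects the "supporting patterns", i.e. the training points closest to the decision boundary.

```python
import numpy as np
from sklearn.svm import SVC

# Two well-separated Gaussian blobs as a toy binary classification problem.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Linear maximum-margin classifier; the support vectors are the
# "supporting patterns" of the abstract.
clf = SVC(kernel="linear", C=1.0).fit(X, y)
print("number of supporting patterns:", clf.support_vectors_.shape[0])
print("decision boundary: w =", clf.coef_[0], " b =", clf.intercept_[0])
```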
A highly adaptive recommender system based on fuzzy logic for B2C e-commerce portals Past years have witnessed a growing interest in e-commerce as a strategy for improving business. Several paradigms have arisen from the e-commerce field in recent years which try to support different business activities, such as B2C and C2C. This paper introduces a prototype of an e-commerce portal, called e-Zoco, whose main features are: (i) a catalogue service intended to arrange product categories hierarchically and describe them through sets of attributes, (ii) a product selection service able to deal with imprecise and vague search preferences which returns a set of results clustered in accordance with their potential relevance to the user, and (iii) a rule-based knowledge learning service to provide the users with knowledge about the existing relationships among the attributes that describe a given product category. The portal prototype is supported by a multi-agent infrastructure composed of a set of agents responsible for providing these and other services.
Optimality conditions for linear programming problems with fuzzy coefficients The optimality conditions for linear programming problems with fuzzy coefficients are derived in this paper. Two solution concepts are proposed by considering the orderings on the set of all fuzzy numbers. The solution concepts proposed in this paper will follow from the similar solution concept, called the nondominated solution, in the multiobjective programming problem. Under these settings, the optimality conditions will be naturally elicited.
An image super-resolution scheme based on compressive sensing with PCA sparse representation Image super-resolution (SR) reconstruction has been an important research field due to its wide applications. Although many SR methods have been proposed, some problems remain to be solved, and the quality of the reconstructed high-resolution (HR) image needs to be improved. To solve these problems, in this paper we propose an image super-resolution scheme based on compressive sensing theory with PCA sparse representation. We focus on the measurement matrix design of the CS process and the implementation of the sparse representation function for the PCA transformation. The measurement matrix design is based on the relation between the low-resolution (LR) image and the reconstructed high-resolution (HR) image, while the implementation of the PCA sparse representation function is based on the PCA transformation process. According to whether the covariance matrix of the HR image is known or not, two kinds of SR models are given. Finally, experiments comparing the proposed scheme with traditional interpolation methods and the CS scheme with DCT sparse representation are conducted. The experimental results on both smooth images and images with complex textures show that the proposed scheme is effective.
1.024401
0.03324
0.032561
0.006071
0.002615
0.001445
0.000777
0.000297
0.000098
0.000017
0
0
0
0
Compressive Acquisition of Dynamic Scenes Compressive sensing (CS) is a new approach for the acquisition and recovery of sparse signals and images that enables sampling rates significantly below the classical Nyquist rate. Despite significant progress in the theory and methods of CS, little headway has been made in compressive video acquisition and recovery. Video CS is complicated by the ephemeral nature of dynamic events, which makes direct extensions of standard CS imaging architectures and signal models infeasible. In this paper, we develop a new framework for video CS for dynamic textured scenes that models the evolution of the scene as a linear dynamical system (LDS). This reduces the video recovery problem to first estimating the model parameters of the LDS from compressive measurements, from which the image frames are then reconstructed. We exploit the low-dimensional dynamic parameters (the state sequence) and high-dimensional static parameters (the observation matrix) of the LDS to devise a novel compressive measurement strategy that measures only the dynamic part of the scene at each instant and accumulates measurements over time to estimate the static parameters. This enables us to considerably lower the compressive measurement rate. We validate our approach with a range of experiments including classification experiments that highlight the effectiveness of the proposed approach.
Compressive mechanism: utilizing sparse representation in differential privacy Differential privacy provides the first theoretical foundation with provable privacy guarantee against adversaries with arbitrary prior knowledge. The main idea to achieve differential privacy is to inject random noise into statistical query results. Besides correctness, the most important goal in the design of a differentially private mechanism is to reduce the effect of random noise, ensuring that the noisy results can still be useful. This paper proposes the compressive mechanism, a novel solution on the basis of state-of-the-art compression technique, called compressive sensing. Compressive sensing is a decent theoretical tool for compact synopsis construction, using random projections. In this paper, we show that the amount of noise is significantly reduced from O(n) to O(log(n)), when the noise insertion procedure is carried on the synopsis samples instead of the original database. As an extension, we also apply the proposed compressive mechanism to solve the problem of continual release of statistical results. Extensive experiments using real datasets justify our accuracy claims.
Tensor sparse coding for region covariances Sparse representation of signals has been the focus of much research in the recent years. A vast majority of existing algorithms deal with vectors, and higher-order data like images are usually vectorized before processing. However, the structure of the data may be lost in the process, leading to poor representation and overall performance degradation. In this paper we propose a novel approach for sparse representation of positive definite matrices, where vectorization would have destroyed the inherent structure of the data. The sparse decomposition of a positive definite matrix is formulated as a convex optimization problem, which falls under the category of determinant maximization (MAXDET) problems [1], for which efficient interior point algorithms exist. Experimental results are shown with simulated examples as well as in real-world computer vision applications, demonstrating the suitability of the new model. This forms the first step toward extending the cornucopia of sparsity-based algorithms to positive definite matrices.
Motion estimated and compensated compressed sensing dynamic magnetic resonance imaging: What we can learn from video compression techniques Compressed sensing has become an extensive research area in the MR community because of the opportunity for unprecedented high spatio-temporal resolution reconstruction. Because dynamic magnetic resonance imaging (MRI) usually has huge redundancy along the temporal direction, compressed sensing theory can be effectively used for this application. Historically, exploiting temporal redundancy has been one of the main research topics in video compression. This article compares the similarities and differences of compressed sensing dynamic MRI and video compression and discusses what MR can learn from the history of video compression research. In particular, we demonstrate that the motion estimation and compensation used in video compression can also be a powerful tool to reduce the sampling requirement in dynamic MRI. Theoretical derivation and experimental results are presented to support our view.
Nonnegative sparse coding for discriminative semi-supervised learning An informative and discriminative graph plays an important role in graph-based semi-supervised learning methods. This paper introduces a nonnegative sparse algorithm and its approximated algorithm, based on the l0-l1 equivalence theory, to compute the nonnegative sparse weights of a graph. Hence, the term sparse probability graph (SPG) is used for the proposed method. The nonnegative sparse weights in the graph naturally serve as clustering indicators, benefiting semi-supervised learning. More importantly, our approximation algorithm speeds up the computation of the nonnegative sparse coding, which remains a bottleneck in previous attempts at sparse nonnegative graph learning, and it is much more efficient than using the l1-norm sparsity technique for learning a large-scale sparse graph. Finally, for discriminative semi-supervised learning, an adaptive label propagation algorithm is also proposed to iteratively predict the labels of data on the SPG. Promising experimental results show that the nonnegative sparse coding is efficient and effective for discriminative semi-supervised learning.
Learning with dynamic group sparsity This paper investigates a new learning formulation called dynamic group sparsity. It is a natural extension of the standard sparsity concept in compressive sensing, and is motivated by the observation that in some practical sparse data the nonzero coefficients are often not random but tend to be clustered. Intuitively, better results can be achieved in these cases by reasonably utilizing both clustering and sparsity priors. Motivated by this idea, we have developed a new greedy sparse recovery algorithm, which prunes data residues in the iterative process according to both sparsity and group clustering priors rather than only sparsity as in previous methods. The proposed algorithm can recover stably sparse data with clustering trends using far fewer measurements and computations than current state-of-the-art algorithms with provable guarantees. Moreover, our algorithm can adaptively learn the dynamic group structure and the sparsity number if they are not available in the practical applications. We have applied the algorithm to sparse recovery and background subtraction in videos. Numerous experiments with improved performance over previous methods further validate our theoretical proofs and the effectiveness of the proposed algorithm.
Sparse Representation for Computer Vision and Pattern Recognition Techniques from sparse signal representation are beginning to see significant impact in computer vision, often on nontraditional applications where the goal is not just to obtain a compact high-fidelity representation of the observed signal, but also to extract semantic information. The choice of dictionary plays a key role in bridging this gap: unconventional dictionaries consisting of, or learne...
Exact Matrix Completion via Convex Optimization We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys $$m\ge C\,n^{1.2}r\log n$$ for some positive numerical constant C, then with very high probability, most n×n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.
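The convex program referred to in this abstract ("finds the matrix with minimum nuclear norm that fits the data") can be written, for the set $\Omega$ of observed entries, as

$$ \min_{X}\ \|X\|_{*} \quad \text{subject to} \quad X_{ij} = M_{ij}, \ (i,j) \in \Omega, $$

where $\|X\|_{*}$ denotes the nuclear norm, i.e. the sum of the singular values of $X$.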
Just relax: convex programming methods for identifying sparse signals in noise This paper studies a difficult and fundamental problem that arises throughout electrical engineering, applied mathematics, and statistics. Suppose that one forms a short linear combination of elementary signals drawn from a large, fixed collection. Given an observation of the linear combination that has been contaminated with additive noise, the goal is to identify which elementary signals participated and to approximate their coefficients. Although many algorithms have been proposed, there is little theory which guarantees that these algorithms can accurately and efficiently solve the problem. This paper studies a method called convex relaxation, which attempts to recover the ideal sparse signal by solving a convex program. This approach is powerful because the optimization can be completed in polynomial time with standard scientific software. The paper provides general conditions which ensure that convex relaxation succeeds. As evidence of the broad impact of these results, the paper describes how convex relaxation can be used for several concrete signal recovery problems. It also describes applications to channel coding, linear regression, and numerical analysis
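A typical instance of the convex relaxation discussed here, written in generic notation rather than the paper's exact formulation, replaces the combinatorial sparse recovery problem with an $\ell_1$ program such as

$$ \min_{x}\ \tfrac{1}{2}\|y - \Phi x\|_2^2 + \lambda \|x\|_1 \qquad \text{or} \qquad \min_{x}\ \|x\|_1 \ \ \text{s.t.}\ \ \|y - \Phi x\|_2 \le \varepsilon, $$

both of which can be solved in polynomial time with standard convex optimization software.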
Compressed Sensing for Networked Data Imagine a system with thousands or millions of independent components, all capable of generating and communicating data. A man-made system of this complexity was unthinkable a few decades ago, but today it is a reality - computers, cell phones, sensors, and actuators are all linked to the Internet, and every wired or wireless device is capable of generating and disseminating prodigious volumes of data. This system is not a single centrally-controlled device, rather it is an ever-growing patchwork of autonomous systems and components, perhaps more organic in nature than any human artifact that has come before. And we struggle to manage and understand this creation, which in many ways has taken on a life of its own. Indeed, several international conferences are dedicated to the scientific study of emergent Internet phenomena. This article considers a particularly salient aspect of this struggle that revolves around large- scale distributed sources of data and their storage, transmission, and retrieval. The task of transmitting information from one point to another is a common and well-understood exercise. But the problem of efficiently transmitting or sharing information from and among a vast number of distributed nodes remains a great challenge, primarily because we do not yet have well developed theories and tools for distributed signal processing, communications, and information theory in large-scale networked systems. The problem is illustrated by a simple example. Consider a network of n nodes, each having a piece of information or data xj, j = 1,...,n. These data could be files to be shared, or simply scalar values corresponding to node attributes or sensor measurements. Let us assume that each xj is a scalar quantity for the sake of this illustration. Collectively these data x = (x1,...,xn)T, arranged in a vector, are called networked data to emphasize both the distributed nature of the data and the fact that they may be shared over the underlying communications infrastructure of the network. The networked data vector may be very large; n may be a thousand or a million or more.
Coherence analysis of iterative thresholding algorithms There is a recent surge of interest in developing algorithms for finding sparse solutions of underdetermined systems of linear equations $y = Ax$. In many applications, extremely large problem sizes are envisioned, with at least tens of thousands of equations and hundreds of thousands of unknowns. For such problem sizes, low computational complexity is paramount. The best studied $\ell_1$ minimization algorithm is not fast enough to fulfill this need. Iterative thresholding algorithms have been proposed to address this problem. In this paper we want to analyze three of these algorithms theoretically, and give sufficient conditions under which they recover the sparsest solution. I. INTRODUCTION Finding the sparsest solution of an underdetermined system of linear equations $y = Ax_o$ is a problem of interest in signal processing, data transmission, biology and statistics, just to name a few. Unfortunately, this problem is NP-hard and in general cannot be solved by a polynomial time algorithm. Chen et al. (1) proposed the following convex optimization for recovering the sparsest solution: $(Q_1)\ \min \|x\|_1$ s.t. $Ax = y$, where the $\ell_p$-norm is defined as $\|x\|_p = \bigl(\sum_i |x_i|^p\bigr)^{1/p}$.
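For concreteness, the following is a minimal sketch of a generic iterative soft-thresholding scheme of the kind analyzed in this line of work; it is not the specific algorithms studied in the paper, and the step size and iteration count are illustrative assumptions.

```python
import numpy as np

def ista(A, y, lam=0.1, iters=500):
    """Iterative soft thresholding for min_x 0.5*||Ax - y||_2^2 + lam*||x||_1.
    A generic sketch, not the paper's exact variants."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L              # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-thresholding step
    return x
```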
Joint sizing and adaptive independent gate control for FinFET circuits operating in multiple voltage regimes using the logical effort method FinFET has been proposed as an alternative for bulk CMOS in current and future technology nodes due to more effective channel control, reduced random dopant fluctuation, high ON/OFF current ratio, lower energy consumption, etc. Key characteristics of FinFET operating in the sub/near-threshold region are very different from those in the strong-inversion region. This paper first introduces an analytical transregional FinFET model with high accuracy in both sub- and near-threshold regimes. Next, the paper extends the well-known and widely-adopted logical effort delay calculation and optimization method to FinFET circuits operating in multiple voltage (sub/near/super-threshold) regimes. More specifically, a joint optimization of gate sizing and adaptive independent gate control is presented and solved in order to minimize the delay of FinFET circuits operating in multiple voltage regimes. Experimental results on a 32nm Predictive Technology Model for FinFET demonstrate the effectiveness of the proposed logical effort-based delay optimization framework.
Properties of Atanassov's intuitionistic fuzzy relations and Atanassov's operators The goal of this paper is to consider properties of Atanassov's intuitionistic fuzzy relations which were introduced by Atanassov in 1986. Fuzzy set theory turned out to be a useful tool to describe situations in which the data are imprecise or vague. Atanassov's intuitionistic fuzzy set theory is a generalization of fuzzy set theory which was introduced by Zadeh in 1965. This paper is a continuation of examinations by Pękala [22] on the interval-valued fuzzy relations. We study standard properties of Atanassov's intuitionistic fuzzy relations in the context of Atanassov's operators.
Subjective Quality Metric For 3d Video Services Three-dimensional (3D) video service is expected to be introduced as a next-generation television service. Stereoscopic video is composed of two 2D video signals for the left and right views, and these 2D video signals are encoded. Video quality between the left and right views is not always consistent because, for example, each view is encoded at a different bit rate. As a result, the video quality difference between the left and right views degrades the quality of stereoscopic video. However, these characteristics have not been thoroughly studied or modeled. Therefore, it is necessary to better understand how the video quality difference affects stereoscopic video quality and to model the video quality characteristics. To do that, we conducted subjective quality assessments to derive subjective video quality characteristics. The characteristics showed that 3D video quality was affected by the difference in video quality between the left and right views, and that when the difference was small, 3D video quality correlated with the highest 2D video quality of the two views. We modeled these characteristics as a subjective quality metric using a training data set. Finally, we verified the performance of our proposed model by applying it to unknown data sets.
1.044145
0.049997
0.0444
0.044
0.0222
0.011109
0.00446
0.000694
0.000033
0
0
0
0
0
Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well developed methods - such as first order perturbation theory or Monte Carlo sampling - Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while at the same time it retains a similar accuracy to the original method. More importantly, the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to further reduction in computational time, since the high order grids necessary for accurately estimating the near zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover, the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems. These show consistently good performance both in terms of the accuracy of the resulting PC representation of quantities and the computational costs associated with constructing the sparse PCE. Basis adaptivity also seems to make the employment of PC techniques possible for problems with a higher number of input parameters (15-20), alleviating a well-known limitation of the traditional approach. The prospect of larger scale applicability and the simplicity of implementation makes such adaptive PC algorithms particularly appealing for the sensitivity and uncertainty analysis of complex systems and legacy codes.
Parameter Sensitivity Analysis in Medical Image Registration Algorithms Using Polynomial Chaos Expansions. Medical image registration algorithms typically involve numerous user-defined ‘tuning’ parameters, such as regularization weights, smoothing parameters, etc. Their optimal settings depend on the anatomical regions of interest, image modalities, image acquisition settings, the expected severity of deformations, and the clinical requirements. Within a particular application, the optimal settings could even vary across the image pairs to be registered. It is, therefore, crucial to develop methods that provide insight into the effect of each tuning parameter in interaction with the other tuning parameters and allow a user to efficiently identify optimal parameter settings for a given pair of images. An exhaustive search over all possible parameter settings has obvious disadvantages in terms of computational costs and quickly becomes infeasible in practice when the number of tuning parameters increases, due to the curse of dimensionality. In this study, we propose a method based on Polynomial Chaos Expansions (PCE). PCE is a method for sensitivity analysis that approximates the model of interest (in our case the registration of a given pair of images) by a polynomial expansion which can be evaluated very efficiently. PCE renders this approach feasible for a large number of input parameters, by requiring only a modest number of function evaluations for model construction. Once the PCE has been constructed, the sensitivity of the registration results to changes in the parameters can be quantified, and the user can simulate registration results for any combination of input parameters in real-time. The proposed approach is evaluated on 8 pairs of liver CT scans and the results indicate that PCE is a promising method for parameter sensitivity analysis in medical image registration.
Polynomial chaos expansion for sensitivity analysis In this paper, the computation of Sobol's sensitivity indices from the polynomial chaos expansion of a model output involving uncertain inputs is investigated. It is shown that when the model output is smooth with regards to the inputs, a spectral convergence of the computed sensitivity indices is achieved. However, even for smooth outputs the method is limited to a moderate number of inputs, say 10–20, as it becomes computationally too demanding to reach the convergence domain. Alternative methods (such as sampling strategies) are then more attractive. The method is also challenged when the output is non-smooth even when the number of inputs is limited.
Global sensitivity analysis using polynomial chaos expansions Global sensitivity analysis (SA) aims at quantifying the respective effects of input random variables (or combinations thereof) onto the variance of the response of a physical or mathematical model. Among the abundant literature on sensitivity measures, the Sobol’ indices have received much attention since they provide accurate information for most models. The paper introduces generalized polynomial chaos expansions (PCE) to build surrogate models that allow one to compute the Sobol’ indices analytically as a post-processing of the PCE coefficients. Thus the computational cost of the sensitivity indices practically reduces to that of estimating the PCE coefficients. An original non intrusive regression-based approach is proposed, together with an experimental design of minimal size. Various application examples illustrate the approach, both from the field of global SA (i.e. well-known benchmark problems) and from the field of stochastic mechanics. The proposed method gives accurate results for various examples that involve up to eight input random variables, at a computational cost which is 2–3 orders of magnitude smaller than the traditional Monte Carlo-based evaluation of the Sobol’ indices.
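The post-processing step described in this abstract can be sketched as follows, assuming an orthonormal polynomial basis (the paper's normalization may differ). If the surrogate is $Y \approx \sum_{\alpha} c_\alpha \Psi_\alpha(\xi)$, then

$$ \operatorname{Var}[Y] \approx \sum_{\alpha \neq 0} c_\alpha^2, \qquad S_i \approx \frac{\sum_{\alpha \in \mathcal{A}_i} c_\alpha^2}{\sum_{\alpha \neq 0} c_\alpha^2}, $$

where $\mathcal{A}_i$ is the set of multi-indices whose only nonzero component corresponds to input $\xi_i$; higher-order Sobol' indices are obtained analogously, so no additional model runs are needed once the PCE coefficients are known.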
Beyond Wiener-Askey Expansions: Handling Arbitrary PDFs In this paper we present a Multi-Element generalized Polynomial Chaos (ME-gPC) method to deal with stochastic inputs with arbitrary probability measures. Based on the decomposition of the random space of the stochastic inputs, we construct numerically a set of orthogonal polynomials with respect to a conditional probability density function (PDF) in each element and subsequently implement generalized Polynomial Chaos (gPC) locally. Numerical examples show that ME-gPC exhibits both p- and h-convergence for arbitrary probability measures.
Generalized spectral decomposition for stochastic nonlinear problems We present an extension of the generalized spectral decomposition method for the resolution of nonlinear stochastic problems. The method consists in the construction of a reduced basis approximation of the Galerkin solution and is independent of the stochastic discretization selected (polynomial chaos, stochastic multi-element or multi-wavelets). Two algorithms are proposed for the sequential construction of the successive generalized spectral modes. They involve decoupled resolutions of a series of deterministic and low-dimensional stochastic problems. Compared to the classical Galerkin method, the algorithms allow for significant computational savings and require minor adaptations of the deterministic codes. The methodology is detailed and tested on two model problems, the one-dimensional steady viscous Burgers equation and a two-dimensional nonlinear diffusion problem. These examples demonstrate the effectiveness of the proposed algorithms which exhibit convergence rates with the number of modes essentially dependent on the spectrum of the stochastic solution but independent of the dimension of the stochastic approximation space.
Parameterized block-based statistical timing analysis with non-Gaussian parameters, nonlinear delay functions Variability of process parameters makes prediction of digital circuit timing characteristics an important and challenging problem in modern chip design. Recently, statistical static timing analysis (statistical STA) has been proposed as a solution. Unfortunately, the existing approaches either do not consider explicit gate delay dependence on process parameters (Liou, et al., 2001), (Orshansky, et al., 2002), (Devgan, et al., 2003), (Agarwal, et al., 2003) or restrict analysis to linear Gaussian parameters only (Visweswariah, et al., 2004), (Chang, et al., 2003). Here the authors extended the capabilities of parameterized block-based statistical STA (Visweswariah, et al., 2004) to handle nonlinear function of delays and non-Gaussian parameters, while retaining maximum efficiency of processing linear Gaussian parameters. The novel technique improves accuracy in predicting circuit timing characteristics and retains such benefits of parameterized block-based statistical STA as an incremental mode of operation, computation of criticality probabilities and sensitivities to process parameter variations. The authors' technique was implemented in an industrial statistical timing analysis tool. The experiments with large digital blocks showed both efficiency and accuracy of the proposed technique.
Tensor Decompositions and Applications This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or $N$-way array. Decompositions of higher-order tensors (i.e., $N$-way arrays with $N \geq 3$) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.
Managing incomplete preference relations in decision making: A review and future trends.
Genetic tuning of fuzzy rule deep structures preserving interpretability and its interaction with fuzzy rule set reduction Tuning fuzzy rule-based systems for linguistic fuzzy modeling is an interesting and widely developed task. It involves adjusting some of the components of the knowledge base without completely redefining it. This contribution introduces a genetic tuning process for jointly fitting the fuzzy rule symbolic representations and the meaning of the involved membership functions. To adjust the former component, we propose the use of linguistic hedges to perform slight modifications keeping a good interpretability. To alter the latter component, two different approaches changing their basic parameters and using nonlinear scaling factors are proposed. As the accomplished experimental study shows, the good performance of our proposal mainly lies in the consideration of this tuning approach performed at two different levels of significance. The paper also analyzes the interaction of the proposed tuning method with a fuzzy rule set reduction process. A good interpretability-accuracy tradeoff is obtained combining both processes with a sequential scheme: first reducing the rule set and subsequently tuning the model.
A Fast Approach for Overcomplete Sparse Decomposition Based on Smoothed ℓ0 Norm In this paper, a fast algorithm for overcomplete sparse decomposition, called SL0, is proposed. The algorithm is essentially a method for obtaining sparse solutions of underdetermined systems of linear equations, and its applications include underdetermined sparse component analysis (SCA), atomic decomposition on overcomplete dictionaries, compressed sensing, and decoding real field codes. Contrar...
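A rough sketch of the SL0 idea, as far as it can be read from this abstract: approximate the ℓ0 norm by a smooth Gaussian-based surrogate, optimize it for a gradually decreasing smoothing parameter sigma, and project back onto the constraint set after each step. The parameter names and the schedule below are illustrative assumptions, not the reference implementation.

```python
import numpy as np

def sl0(A, y, sigma_decrease=0.5, mu=2.0, inner=3, sigma_min=1e-4):
    """Sketch of smoothed-l0 recovery for the underdetermined system A x = y."""
    A_pinv = A.T @ np.linalg.inv(A @ A.T)
    x = A_pinv @ y                                            # minimum-l2-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            x = x - mu * x * np.exp(-x**2 / (2 * sigma**2))   # step on the smooth l0 surrogate
            x = x - A_pinv @ (A @ x - y)                      # project back onto {x : A x = y}
        sigma *= sigma_decrease                               # tighten the approximation
    return x
```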
Rule-base structure identification in an adaptive-network-based fuzzy inference system We summarize Jang's architecture of employing an adaptive network and the Kalman filtering algorithm to identify the system parameters. Given a surface structure, the adaptively adjusted inference system performs well on a number of interpolation problems. We generalize Jang's basic model so that it can be used to solve classification problems by employing parameterized t-norms. We also enhance the model to include weights of importance so that feature selection becomes a component of the modeling scheme. Next, we discuss two ways of identifying system structures based on Jang's architecture: the top-down approach, and the bottom-up approach. We introduce a data structure, called a fuzzy binary boxtree, to organize rules so that the rule base can be matched against input signals with logarithmic efficiency. To preserve the advantage of parallel processing assumed in fuzzy rule-based inference systems, we give a parallel algorithm for pattern matching with a linear speedup. Moreover, as we consider the communication and storage cost of an interpolation model. We propose a rule combination mechanism to build a simplified version of the original rule base according to a given focus set. This scheme can be used in various situations of pattern representation or data compression, such as in image coding or in hierarchical pattern recognition
Fuzzy Power Command Enhancement in Mobile Communications Systems
Fuzzy optimization of units products in mix-product selection problem using fuzzy linear programming approach In this paper, the modified S-curve membership function methodology is used in a real-life industrial problem of mix-product selection. This problem occurs in production planning management, where a decision maker plays an important role in making decisions in an uncertain environment. As analysts, we try to find a good enough solution for the decision maker to make a final decision. An industrial application of fuzzy linear programming (FLP) through the S-curve membership function has been investigated using a set of real-life data collected from a chocolate manufacturing company. The problem of fuzzy product mix selection has been defined. The objective of this paper is to find optimal units of products with a higher level of satisfaction, with vagueness as a key factor. Since several decisions were to be taken, a table of optimal units of products with respect to vagueness and degree of satisfaction has been defined to identify the solution with a higher level of units of products and a higher degree of satisfaction. The fuzzy outcome shows that higher units of products need not lead to a higher degree of satisfaction. The findings of this work indicate that the optimal decision depends on the vagueness factor in the fuzzy system of the mix-product selection problem. Furthermore, a high level of units of products is obtained when the vagueness is low.
1.11
0.1
0.007586
0.001619
0.000519
0.000034
0
0
0
0
0
0
0
0
Fixed point and Bregman iterative methods for matrix rank minimization The linearly constrained matrix rank minimization problem is widely applicable in many fields such as control, signal processing and system identification. The tightest convex relaxation of this problem is the linearly constrained nuclear norm minimization. Although the latter can be cast as a semidefinite programming problem, such an approach is computationally expensive to solve when the matrices are large. In this paper, we propose fixed point and Bregman iterative algorithms for solving the nuclear norm minimization problem and prove convergence of the first of these algorithms. By using a homotopy approach together with an approximate singular value decomposition procedure, we get a very fast, robust and powerful algorithm, which we call FPCA (Fixed Point Continuation with Approximate SVD), that can solve very large matrix rank minimization problems (the code can be downloaded from http://www.columbia.edu/~sm2756/FPCA.htm for non-commercial use). Our numerical results on randomly generated and real matrix completion problems demonstrate that this algorithm is much faster and provides much better recoverability than semidefinite programming solvers such as SDPT3. For example, our algorithm can recover 1000 × 1000 matrices of rank 50 with a relative error of $10^{-5}$ in about 3 min by sampling only 20% of the elements. We know of no other method that achieves as good recoverability. Numerical experiments on online recommendation, DNA microarray data set and image inpainting problems demonstrate the effectiveness of our algorithms.
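To make the shrinkage idea concrete, here is a bare-bones fixed-point iteration for nuclear-norm-regularized matrix completion. It is only a sketch of the general approach: it uses a full SVD at every step and omits the continuation strategy and approximate SVD that make FPCA fast, and the parameter values are placeholders.

```python
import numpy as np

def svt(Y, thresh):
    """Singular value soft-thresholding (matrix shrinkage) operator."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - thresh, 0.0)) @ Vt

def fixed_point_completion(M_obs, mask, mu=1.0, tau=1.0, iters=300):
    """Fixed-point iteration for min_X mu*||X||_* + 0.5*||P_Omega(X - M)||_F^2,
    where mask is the 0/1 indicator of observed entries."""
    X = np.zeros_like(M_obs, dtype=float)
    for _ in range(iters):
        G = mask * (X - M_obs)            # gradient of the data-fit term on observed entries
        X = svt(X - tau * G, tau * mu)    # gradient step followed by matrix shrinkage
    return X
```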
An implementable proximal point algorithmic framework for nuclear norm minimization The nuclear norm minimization problem is to find a matrix with the minimum nuclear norm subject to linear and second order cone constraints. Such a problem often arises from the convex relaxation of a rank minimization problem with noisy data, and arises in many fields of engineering and science. In this paper, we study inexact proximal point algorithms in the primal, dual and primal-dual forms for solving the nuclear norm minimization with linear equality and second order cone constraints. We design efficient implementations of these algorithms and present comprehensive convergence results. In particular, we investigate the performance of our proposed algorithms in which the inner sub-problems are approximately solved by the gradient projection method or the accelerated proximal gradient method. Our numerical results for solving randomly generated matrix completion problems and real matrix completion problems show that our algorithms perform favorably in comparison to several recently proposed state-of-the-art algorithms. Interestingly, our proposed algorithms are connected with other algorithms that have been studied in the literature.
Performance Analysis of Sparse Recovery Based on Constrained Minimal Singular Values The stability of sparse signal reconstruction with respect to measurement noise is investigated in this paper. We design efficient algorithms to verify the sufficient condition for unique ℓ1 sparse recovery. One of our algorithms produces comparable results with the state-of-the-art technique and performs orders of magnitude faster. We show that the ℓ1 -constrained minimal singular value (ℓ1-CMSV) of the measurement matrix determines, in a very concise manner, the recovery performance of ℓ1-based algorithms such as the Basis Pursuit, the Dantzig selector, and the LASSO estimator. Compared to performance analysis involving the Restricted Isometry Constant, the arguments in this paper are much less complicated and provide more intuition on the stability of sparse signal recovery. We show also that, with high probability, the subgaussian ensemble generates measurement matrices with ℓ1-CMSVs bounded away from zero, as long as the number of measurements is relatively large. To compute the ℓ1-CMSV and its lower bound, we design two algorithms based on the interior point algorithm and the semidefinite relaxation.
Convergence of Fixed-Point Continuation Algorithms for Matrix Rank Minimization The matrix rank minimization problem has applications in many fields, such as system identification, optimal control, low-dimensional embedding, etc. As this problem is NP-hard in general, its convex relaxation, the nuclear norm minimization problem, is often solved instead. Recently, Ma, Goldfarb and Chen proposed a fixed-point continuation algorithm for solving the nuclear norm minimization problem (Math. Program., doi: 10.1007/s10107-009-0306-5, 2009). By incorporating an approximate singular value decomposition technique in this algorithm, the solution to the matrix rank minimization problem is usually obtained. In this paper, we study the convergence/recoverability properties of the fixed-point continuation algorithm and its variants for matrix rank minimization. Heuristics for determining the rank of the matrix when its true rank is not known are also proposed. Some of these algorithms are closely related to greedy algorithms in compressed sensing. Numerical results for these algorithms for solving affinely constrained matrix rank minimization problems are reported.
Restricted Eigenvalue Properties for Correlated Gaussian Designs Methods based on l1-relaxation, such as basis pursuit and the Lasso, are very popular for sparse regression in high dimensions. The conditions for success of these methods are now well-understood: (1) exact recovery in the noiseless setting is possible if and only if the design matrix X satisfies the restricted nullspace property, and (2) the squared l2-error of a Lasso estimate decays at the minimax optimal rate k log p / n, where k is the sparsity of the p-dimensional regression problem with additive Gaussian noise, whenever the design satisfies a restricted eigenvalue condition. The key issue is thus to determine when the design matrix X satisfies these desirable properties. Thus far, there have been numerous results showing that the restricted isometry property, which implies both the restricted nullspace and eigenvalue conditions, is satisfied when all entries of X are independent and identically distributed (i.i.d.), or the rows are unitary. This paper proves directly that the restricted nullspace and eigenvalue conditions hold with high probability for quite general classes of Gaussian matrices for which the predictors may be highly dependent, and hence restricted isometry conditions can be violated with high probability. In this way, our results extend the attractive theoretical guarantees on l1-relaxations to a much broader class of problems than the case of completely independent or unitary designs.
Compressed sensing with cross validation Compressed sensing (CS) decoding algorithms can efficiently recover an N-dimensional real-valued vector x to within a factor of its best k-term approximation by taking m = O(k log(N/k)) measurements y = Φx. If the sparsity or approximate sparsity level of x were known, then this theoretical guarantee would imply quality assurance of the resulting CS estimate. However, because the underlying sparsity of the signal x is unknown, the quality of a CS estimate $\hat{x}$ using m measurements is not assured. It is nevertheless shown in this paper that sharp bounds on the error $\|x - \hat{x}\|_{\ell_2^N}$ can be achieved with almost no effort. More precisely, suppose that a maximum number of measurements m is preimposed. One can reserve $10 \log p$ of these m measurements and compute a sequence of possible estimates $(\hat{x}_j)_{j=1}^{p}$ to x from the $m - 10 \log p$ remaining measurements; the errors $\|x - \hat{x}_j\|_{\ell_2^N}$ for $j = 1,\dots,p$ can then be bounded with high probability. As a consequence, numerical upper and lower bounds on the error between x and the best k-term approximation to x can be estimated for p values of k with almost no cost. This observation has applications outside CS as well.
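The reserve-and-check procedure described above can be sketched as follows; solver and the candidate list ks are placeholders for whatever CS decoder and model sequence one actually uses, and the held-out residual serves only as a proxy for the true error $\|x - \hat{x}_j\|$.

```python
import numpy as np

def cv_scores(Phi, y, solver, ks, n_cv, seed=0):
    """Hold out n_cv measurements, reconstruct from the rest for each candidate
    sparsity k, and score each estimate by its residual on the held-out rows."""
    rng = np.random.default_rng(seed)
    m = Phi.shape[0]
    cv = rng.choice(m, size=n_cv, replace=False)
    train = np.setdiff1d(np.arange(m), cv)
    scores = []
    for k in ks:
        x_hat = solver(Phi[train], y[train], k)                 # user-supplied CS decoder
        scores.append(np.linalg.norm(y[cv] - Phi[cv] @ x_hat))  # held-out residual
    return scores
```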
Tensor Rank and the Ill-Posedness of the Best Low-Rank Approximation Problem There has been continued interest in seeking a theorem describing optimal low-rank approximations to tensors of order 3 or higher that parallels the Eckart-Young theorem for matrices. In this paper, we argue that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank-$r$ approximations. The phenomenon is much more widespread than one might suspect: examples of this failure can be constructed over a wide range of dimensions, orders, and ranks, regardless of the choice of norm (or even Brègman divergence). Moreover, we show that in many instances these counterexamples have positive volume: they cannot be regarded as isolated phenomena.  In one extreme case, we exhibit a tensor space in which no rank-3 tensor has an optimal rank-2 approximation. The notable exceptions to this misbehavior are rank-1 tensors and order-2 tensors (i.e., matrices). In a more positive spirit, we propose a natural way of overcoming the ill-posedness of the low-rank approximation problem, by using weak solutions when true solutions do not exist. For this to work, it is necessary to characterize the set of weak solutions, and we do this  in the case of rank 2, order 3 (in arbitrary dimensions). In our work we emphasize the importance of closely studying concrete low-dimensional examples as a first step toward more general results. To this end, we present a detailed analysis of equivalence classes of $2 \times 2 \times 2$ tensors, and we develop methods for extending results upward to higher orders and dimensions. Finally, we link our work to existing studies of tensors from an algebraic geometric point of view. The rank of a tensor can in theory be given a semialgebraic description; in other words, it can be determined by a system of polynomial inequalities. We study some of these polynomials in cases of interest to us; in particular, we make extensive use of the hyperdeterminant $\Delta$ on $\mathbb{R}^{2\times 2 \times 2}$.
Kronecker compressive sensing. Compressive sensing (CS) is an emerging approach for the acquisition of signals having a sparse or compressible representation in some basis. While the CS literature has mostly focused on problems involving 1-D signals and 2-D images, many important applications involve multidimensional signals; the construction of sparsifying bases and measurement systems for such signals is complicated by their higher dimensionality. In this paper, we propose the use of Kronecker product matrices in CS for two purposes. First, such matrices can act as sparsifying bases that jointly model the structure present in all of the signal dimensions. Second, such matrices can represent the measurement protocols used in distributed settings. Our formulation enables the derivation of analytical bounds for the sparse approximation of multidimensional signals and CS recovery performance, as well as a means of evaluating novel distributed measurement schemes.
pFFT in FastMaxwell: a fast impedance extraction solver for 3D conductor structures over substrate In this paper we describe the acceleration algorithm implemented in FastMaxwell, a program for wideband electromagnetic extraction of complicated 3D conductor structures over substrate. FastMaxwell is based on the integral domain mixed potential integral equation (MPIE) formulation, with 3-D full-wave substrate dyadic Green's function kernel. Two dyadic Green's functions are implemented. The pre-corrected Fast Fourier Transform (pFFT) algorithm is generalized and used to accelerate the translational invariant complex domain dyadic kernel. Computational results are given for a variety of structures to validate the accuracy and efficiency of FastMaxwell. O(NlogN) computational complexity is demonstrated by our results in both time and memory.
Joint Design-Time and Post-Silicon Minimization of Parametric Yield Loss using Adjustable Robust Optimization Parametric yield loss due to variability can be effectively reduced by both design-time optimization strategies and by adjusting circuit parameters to the realizations of variable parameters. The two levels of tuning operate within a single variability budget, and because their effectiveness depends on the magnitude and the spatial structure of variability their joint co-optimization is required. In this paper we develop a formal optimization algorithm for such co-optimization and link it to the control and measurement overhead via the formal notions of measurement and control complexity. We describe an optimization strategy that unifies design-time gate-level sizing and post-silicon adaptation using adaptive body bias at the chip level. The statistical formulation utilizes adjustable robust linear programming to derive the optimal policy for assigning body bias once the uncertain variables, such as gate length and threshold voltage, are known. Computational tractability is achieved by restricting optimal body bias selection policy to be an affine function of uncertain variables. We demonstrate good run-time and show that 5-35% savings in leakage power across the benchmark circuits are possible. Dependence of results on measurement and control complexity is studied and points of diminishing returns for both metrics are identified
Quadratic Statistical Max Approximation For Parametric Yield Estimation Of Analog/Rf Integrated Circuits In this paper, we propose an efficient numerical algorithm for estimating the parametric yield of analog/RF circuits, considering large-scale process variations. Unlike many traditional approaches that assume normal performance distributions, the proposed approach is particularly developed to handle multiple correlated nonnormal performance distributions, thereby providing better accuracy than the traditional techniques. Starting from a set of quadratic performance models, the proposed parametric yield estimation conceptually maps multiple correlated performance constraints to a single auxiliary constraint by using a MAX operator. As such, the parametric yield is uniquely determined by the probability distribution of the auxiliary constraint and, therefore, can easily be computed. In addition, two novel numerical algorithms are derived from moment matching and statistical Taylor expansion, respectively, to facilitate efficient quadratic statistical MAX approximation. We prove that these two algorithms are mathematically equivalent if the performance distributions are normal. Our numerical examples demonstrate that the proposed algorithm provides an error reduction of 6.5 times compared to a normal-distribution-based method while achieving a runtime speedup of 10-20 times over the Monte Carlo analysis with 103 samples.
Evaluating window joins over unbounded streams We investigate algorithms for evaluating sliding window joins over pairs of unbounded streams. We introduce a unit-time-basis cost model to analyze the expected performance of these algorithms. Using this cost model, we propose strategies for maximizing the efficiency of processing joins in three scenarios. First, we consider the case where one stream is much faster than the other. We show that asymmetric combinations of join algorithms (e.g., hash join on one input, nested-loops join on the other) can outperform symmetric join algorithm implementations. Second, we investigate the case where system resources are insufficient to keep up with the input streams. We show that we can maximize the number of join result tuples produced in this case by properly allocating computing resources across the two input streams. Finally, we investigate strategies for maximizing the number of result tuples produced when memory is limited, and show that proper memory allocation across the two input streams can result in significantly lower resource usage and/or more result tuples produced.
VLSI hardware architecture for complex fuzzy systems This paper presents the design of a VLSI fuzzy processor, which is capable of dealing with complex fuzzy inference systems, i.e., fuzzy inferences that include rule chaining. The architecture of the processor is based on a computational model whose main features are: the capability to cope effectively with complex fuzzy inference systems; a detection phase of the rule with a positive degree of activation to reduce the number of rules to be processed per inference; parallel computation of the degree of activation of active rules; and representation of membership functions based on α-level sets. As the fuzzy inference can be divided into different processing phases, the processor is made up of a number of stages which are pipelined. In each stage several inference processing phases are performed parallelly. Its performance is in the order of 2 MFLIPS with 256 rules, eight inputs, two chained variables, and four outputs and 5.2 MFLIPS with 32 rules, three inputs, and one output with a clock frequency of 66 MHz
Fuzzy control of technological processes in APL2 A fuzzy control system has been developed to solve problems which are difficult or impossible to control with a proportional integral differential approach. According to system constraints, the fuzzy controller changes the importance of the rules and offers suitable variable values. The fuzzy controller testbed consists of simulator code to simulate the process dynamics of a production and distribution system and the fuzzy controller itself. The results of our tests confirm that this approach successfully reflects the experience gained from skilled manual operations. The simulation and control software was developed in APL2/2 running under OS/2. Several features of this product, especially multitasking, the ability to run AP124 and AP207 windows concurrently, and the ability to run concurrent APL2 sessions and interchange data among them were used extensively in the simulation process.
1.014985
0.012171
0.011765
0.008895
0.005882
0.002571
0.000838
0.000155
0.000038
0.000008
0
0
0
0
Computing discrepancies of Smolyak quadrature rules In recent years, Smolyak quadrature rules (also called quadratures on hyperbolic cross points or sparse grids) have gained interest as a possible competitor to number theoretic quadratures for high dimensional problems. A standard way of comparing the quality of multivariate quadrature formulas consists in computing their L2-discrepancy. Especially for larger dimensions, such computations are a highly complex task. In this paper we develop a fast recursive algorithm for computing the L2...
Neural networks and approximation theory
Algorithm 672: generation of interpolatory quadrature rules of the highest degree of precision with preassigned nodes for general weight functions
A generalized discrepancy and quadrature error bound An error bound for multidimensional quadrature is derived that includes the Koksma-Hlawka inequality as a special case. This error bound takes the form of a product of two terms. One term, which depends only on the integrand, is defined as a generalized variation. The other term, which depends only on the quadrature rule, is defined as a generalized discrepancy. The generalized discrepancy is a figure of merit for quadrature rules and includes as special cases the Lp-star discrepancy and $P_\alpha$ that arises in the study of lattice rules.
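Schematically, the product-form bound described here has the shape (in generic notation, with the classical Koksma-Hlawka inequality as the special case)

$$ \left| \int_{[0,1]^d} f(u)\,\mathrm{d}u \;-\; \sum_{i=1}^{n} w_i f(u_i) \right| \;\le\; D(\{(u_i, w_i)\}) \, V(f), $$

where $D$ is the generalized discrepancy of the quadrature rule and $V(f)$ is the generalized variation of the integrand.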
Explicit cost bounds of algorithms for multivariate tensor product problems We study multivariate tensor product problems in the worst case and average case settings. They are defined on functions of d variables. For arbitrary d, we provide explicit upper bounds on the costs of algorithms which compute an $\varepsilon$-approximation to the solution. The cost bounds are of the form $(c(d) + 2)\,\beta_1 \left(\beta_2 + \beta_3\,\frac{\ln 1/\varepsilon}{d-1}\right)^{\beta_4 (d-1)} \left(\frac{1}{\varepsilon}\right)^{\beta_5}$. Here c(d) is the cost of one function evaluation (or one linear functional evaluation), and the $\beta_i$'s do not...
Space-Time Approximation with Sparse Grids In this article we introduce approximation spaces, especially suited for the approximation of solutions of parabolic problems, which are based on the tensor product construction of a multiscale basis in space and a multiscale basis in time. Proper truncation then leads to so-called space-time sparse grid spaces. For a uniform discretization of the spatial space of dimension d with $O(N^d)$ degrees of freedom, these spaces involve for $d > 1$ also only $O(N^d)$ degrees of freedom for the discretization of the whole space-time problem. But they provide the same approximation rate as classical space-time finite element spaces which need $O(N^{d+1})$ degrees of freedom. This makes these approximation spaces well suited for conventional parabolic and time-dependent optimization problems. We analyze the approximation properties and the dimension of these sparse grid space-time spaces for general stable multiscale bases. We then restrict ourselves to an interpolatory multiscale basis, i.e., a hierarchical basis. Here, to be able to handle also complicated spatial domains $\Omega$, we construct the hierarchical basis from a given spatial finite element basis as follows: First we determine coarse grid points recursively over the levels by the coarsening step of the algebraic multigrid method. Then, we derive interpolatory prolongation operators between the respective coarse and fine grid points by a least squares approach. This way we obtain an algebraic hierarchical basis for the spatial domain which we then use in our space-time sparse grid approach. We give numerical results on the convergence rate of the interpolation error of these spaces for various space-time problems with two spatial dimensions. Implementational issues, data structures, and questions of adaptivity are also addressed to some extent.
Performance evaluation of generalized polynomial chaos In this paper we review some applications of generalized polynomial chaos expansion for uncertainty quantification. The mathematical framework is presented and the convergence of the method is demonstrated for model problems. In particular, we solve the first-order and second-order ordinary differential equations with random parameters, and examine the efficiency of generalized polynomial chaos compared to Monte Carlo simulations. It is shown that the generalized polynomial chaos can be orders of magnitude more efficient than Monte Carlo simulations when the dimensionality of random input is low, e.g. for correlated noise.
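As a brief reminder of the machinery being benchmarked in this abstract (generic notation; conventions vary), a generalized polynomial chaos surrogate expands the response in polynomials orthogonal with respect to the input distribution, and the first two moments follow directly from the coefficients when $\Phi_0 \equiv 1$:

$$ u(\xi) \approx \sum_{k=0}^{P} u_k \Phi_k(\xi), \qquad \mathbb{E}[u] \approx u_0, \qquad \operatorname{Var}[u] \approx \sum_{k=1}^{P} u_k^2 \,\langle \Phi_k^2 \rangle . $$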
Numerical analysis of the Burgers' equation in the presence of uncertainty The Burgers' equation with uncertain initial and boundary conditions is investigated using a polynomial chaos (PC) expansion approach where the solution is represented as a truncated series of stochastic, orthogonal polynomials. The analysis of well-posedness for the system resulting after Galerkin projection is presented and follows the pattern of the corresponding deterministic Burgers equation. The numerical discretization is based on spatial derivative operators satisfying the summation by parts property and weak boundary conditions to ensure stability. Similarly to the deterministic case, the explicit time step for the hyperbolic stochastic problem is proportional to the inverse of the largest eigenvalue of the system matrix. The time step naturally decreases compared to the deterministic case since the spectral radius of the continuous problem grows with the number of polynomial chaos coefficients. An estimate of the eigenvalues is provided. A characteristic analysis of the truncated PC system is presented and gives a qualitative description of the development of the system over time for different initial and boundary conditions. It is shown that a precise statistical characterization of the input uncertainty is required and partial information, e.g. the expected values and the variance, are not sufficient to obtain a solution. An analytical solution is derived and the coefficients of the infinite PC expansion are shown to be smooth, while the corresponding coefficients of the truncated expansion are discontinuous.
Sparse grid collocation schemes for stochastic natural convection problems In recent years, there has been an interest in analyzing and quantifying the effects of random inputs in the solution of partial differential equations that describe thermal and fluid flow problems. Spectral stochastic methods and Monte-Carlo based sampling methods are two approaches that have been used to analyze these problems. As the complexity of the problem or the number of random variables involved in describing the input uncertainties increases, these approaches become highly impractical from implementation and convergence points-of-view. This is especially true in the context of realistic thermal flow problems, where uncertainties in the topology of the boundary domain, boundary flux conditions and heterogeneous physical properties usually require high-dimensional random descriptors. The sparse grid collocation method based on the Smolyak algorithm offers a viable alternate method for solving high-dimensional stochastic partial differential equations. An extension of the collocation approach to include adaptive refinement in important stochastic dimensions is utilized to further reduce the numerical effort necessary for simulation. We show case the collocation based approach to efficiently solve natural convection problems involving large stochastic dimensions. Equilibrium jumps occurring due to surface roughness and heterogeneous porosity are captured. Comparison of the present method with the generalized polynomial chaos expansion and Monte-Carlo methods are made.
Quantics-TT Collocation Approximation of Parameter-Dependent and Stochastic Elliptic PDEs.
Statistical gate sizing for timing yield optimization Variability in the chip design process has been relatively increasing with technology scaling to smaller dimensions. Using worst case analysis for circuit optimization severely over-constrains the system and results in solutions with excessive penalties. Statistical timing analysis and optimization have consequently emerged as a refinement of the traditional static timing approach for circuit design optimization. In this paper, we propose a statistical gate sizing methodology for timing yield improvement. We build statistical models for gate delays from library characterizations at multiple process corners and operating conditions. Statistical timing analysis is performed, which drives gate sizing for timing yield optimization. Experimental results are reported for the ISCAS and MCNC benchmarks. In addition, we provide insight into statistical properties of gate delays for a given technology library which intuitively explains when and why statistical optimization improves over static timing optimization.
An isoperimetric lemma A continuous version of the following problem is solved: Let G be a multipartite graph with a given partition of its vertex set, A_1 ∪ A_2 ∪ … ∪ A_N. Find the maximum possible number of edges in G such that G has no connected component with more than t vertices.
Computation of equilibrium measures. We present a new way of computing equilibrium measures numerically, based on the Riemann–Hilbert formulation. For equilibrium measures whose support is a single interval, the simple algorithm consists of a Newton–Raphson iteration where each step only involves fast cosine transforms. The approach is then generalized for multiple intervals.
Fuzzifying images using fuzzy wavelet denoising Fuzzy connected filters were recently introduced as an extension of connected filters within the fuzzy set framework. They rely on the representation of the image gray levels by fuzzy quantities, which are suitable to represent imprecision usually contained in images. No robust construction method of these fuzzy images has been introduced so far. In this paper we propose a generic method to fuzzify a crisp image in order to explicitly take imprecision on grey levels into account. This method is based on the conversion of statistical noise present in an image, which cannot be directly represented by fuzzy sets, into a denoising imprecision. The detectability of constant gray level structures in these fuzzy images is also discussed.
1.075894
0.050176
0.050176
0.018434
0.007125
0.000141
0.000077
0.000047
0.000027
0.000011
0.000001
0
0
0
On the Passivity of Polynomial Chaos-Based Augmented Models for Stochastic Circuits. This paper addresses for the first time the issue of passivity of the circuit models produced by means of the generalized polynomial chaos technique in combination with the stochastic Galerkin method. This approach has been used in literature to obtain statistical information through the simulation of an augmented but deterministic instance of a stochastic circuit, possibly including distributed t...
Fast Variational Analysis of On-Chip Power Grids by Stochastic Extended Krylov Subspace Method This paper proposes a novel stochastic method for analyzing the voltage drop variations of on-chip power grid networks, considering lognormal leakage current variations. The new method, called StoEKS, applies Hermite polynomial chaos to represent the random variables in both power grid networks and input leakage currents. However, different from the existing orthogonal polynomial-based stochastic simulation method, the extended Krylov subspace (EKS) method is employed to compute variational responses from the augmented matrices consisting of the coefficients of Hermite polynomials. Our contribution lies in the acceleration of the spectral stochastic method using the EKS method to fast solve the variational circuit equations for the first time. By using the reduction technique, the new method partially mitigates the increased circuit-size problem associated with the augmented matrices from the Galerkin-based spectral stochastic method. Experimental results show that the proposed method is about two orders of magnitude faster than the existing Hermite PC-based simulation method and many orders of magnitude faster than Monte Carlo methods with marginal errors. StoEKS is scalable for analyzing much larger circuits than the existing Hermite PC-based methods.
Efficient Uncertainty Quantification for the Periodic Steady State of Forced and Autonomous Circuits This brief proposes an uncertainty quantification method for the periodic steady-state (PSS) analysis with both Gaussian and non-Gaussian variations. Our stochastic testing formulation for the PSS problem provides superior efficiency over both Monte Carlo methods and existing spectral methods. The numerical implementation of a stochastic shooting Newton solver is presented for both forced and autonomous circuits. Simulation results on some analog/RF circuits are reported to show the effectiveness of our proposed algorithms.
Stochastic Testing Method for Transistor-Level Uncertainty Quantification Based on Generalized Polynomial Chaos Uncertainties have become a major concern in integrated circuit design. In order to avoid the huge number of repeated simulations in conventional Monte Carlo flows, this paper presents an intrusive spectral simulator for statistical circuit analysis. Our simulator employs the recently developed generalized polynomial chaos expansion to perform uncertainty quantification of nonlinear transistor circuits with both Gaussian and non-Gaussian random parameters. We modify the nonintrusive stochastic collocation (SC) method and develop an intrusive variant called stochastic testing (ST) method. Compared with the popular intrusive stochastic Galerkin (SG) method, the coupled deterministic equations resulting from our proposed ST method can be solved in a decoupled manner at each time point. At the same time, ST requires fewer samples and allows more flexible time step size controls than directly using a nonintrusive SC solver. These two properties make ST more efficient than SG and than existing SC methods, and more suitable for time-domain circuit simulation. Simulation results of several digital, analog and RF circuits are reported. Since our algorithm is based on generic mathematical models, the proposed ST algorithm can be applied to many other engineering problems.
STAVES: Speedy Tensor-Aided Volterra-Based Electronic Simulator Volterra series is a powerful tool for blackbox macro-modeling of nonlinear devices. However, the exponential complexity growth in storing and evaluating higher order Volterra kernels has limited so far its employment on complex practical applications. On the other hand, tensors are a higher order generalization of matrices that can naturally and efficiently capture multi-dimensional data. Significant computational savings can often be achieved when the appropriate low-rank tensor decomposition is available. In this paper we exploit a strong link between tensors and frequency-domain Volterra kernels in modeling nonlinear systems. Based on such link we have developed a technique called speedy tensor-aided Volterra-based electronic simulator (STAVES) utilizing high-order Volterra transfer functions for highly accurate time-domain simulation of nonlinear systems. The main computational tools in our approach are the canonical tensor decomposition and the inverse discrete Fourier transform. Examples demonstrate the efficiency of the proposed method in simulating some practical nonlinear circuit structures.
ARMS - automatic residue-minimization based sampling for multi-point modeling techniques This paper describes an automatic methodology for optimizing sample point selection for using in the framework of model order reduction (MOR). The procedure, based on the maximization of the dimension of the subspace spanned by the samples, iteratively selects new samples in an efficient and automatic fashion, without computing the new vectors and with no prior assumptions on the system behavior. The scheme is general, and valid for single and multiple dimensions, with applicability on rational nominal MOR approaches, and on multi-dimensional sampling based parametric MOR methodologies. The paper also presents an integrated algorithm for multi-point MOR, with automatic sample and order selection based on the transfer function error estimation. Results on a variety of industrial examples demonstrate the accuracy and robustness of the technique.
Tensor Decompositions and Applications This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or $N$-way array. Decompositions of higher-order tensors (i.e., $N$-way arrays with $N \geq 3$) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.
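A small illustration of the CP model named in the survey, assuming random stand-in factor matrices rather than factors fitted to data; no CP-ALS fit or toolbox call is shown, only the sum-of-rank-one structure itself.

```python
# Sketch of the CP model: a 3-way tensor expressed as a sum of R rank-one terms,
# X = sum_r a_r (outer) b_r (outer) c_r.  Factor matrices here are random
# stand-ins; a real CP fit (e.g. alternating least squares) would estimate them.
import numpy as np

rng = np.random.default_rng(1)
I, J, K, R = 4, 5, 6, 3
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

# Assemble the full tensor from its CP factors (outer products summed over r).
X = np.einsum('ir,jr,kr->ijk', A, B, C)

# Each rank-one term contributes a_r outer b_r outer c_r; check one term explicitly.
term0 = np.einsum('i,j,k->ijk', A[:, 0], B[:, 0], C[:, 0])
rest = np.einsum('ir,jr,kr->ijk', A[:, 1:], B[:, 1:], C[:, 1:])
assert np.allclose(X, term0 + rest)
print("CP-constructed tensor shape:", X.shape, " rank (by construction) <=", R)
```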
Moment-sensitivity-based wire sizing for skew reduction in on-chip clock nets Sensitivity-based methods for wire sizing have been shown to be effective in reducing clock skew in routed nets. However, lack of efficient sensitivity computation techniques and excessive space and time requirements often limit their utility for large clock nets. Furthermore, most skew reduction approaches work in terms of the Elmore delay model and, therefore, fail to balance the signal slopes at the clocked elements. In this paper, we extend the sensitivity-based techniques to balance the delays and signal-slopes by matching several moments instead of just the Elmore delay. As sensitivity computation is crucial to our approach, we present a new path-tracing algorithm to compute moment sensitivities for RC trees. Finally, to improve the runtime statistics of sensitivity-based methods, we also present heuristics to allow for efficient handling of large nets by reducing the size of the sensitivity matrix
Quantics-TT Collocation Approximation of Parameter-Dependent and Stochastic Elliptic PDEs.
Correlation-aware statistical timing analysis with non-Gaussian delay distributions Process variations have a growing impact on circuit performance for today's integrated circuit (IC) technologies. The non-Gaussian delay distributions as well as the correlations among delays make statistical timing analysis more challenging than ever. In this paper, the authors presented an efficient block-based statistical timing analysis approach with linear complexity with respect to the circuit size, which can accurately predict non-Gaussian delay distributions from realistic nonlinear gate and interconnect delay models. This approach accounts for all correlations, from manufacturing process dependence, to re-convergent circuit paths to produce more accurate statistical timing predictions. With this approach, circuit designers can have increased confidence in the variation estimates, at a low additional computation cost.
Compressive Sampling Vs. Conventional Imaging Compressive sampling (CS), or "Compressed Sensing," has recently generated a tremendous amount of excitement in the image processing community. CS involves taking a relatively small number of non-traditional samples in the form of randomized projections that are capable of capturing the most salient information in an image. If the image being sampled is compressible in a certain basis (e.g., wavelet), then under noiseless conditions the image can be much more accurately recovered from random projections than from pixel samples. However, the performance of CS can degrade markedly in the presence of noise. In this paper, we compare CS to conventional imaging by considering a canonical class of piecewise smooth image models. Our conclusion is that CS can be advantageous in noisy imaging problems if the underlying image is highly compressible or if the SNR is sufficiently large.
Abstract processes of place/transition systems A well-known problem in Petri net theory is to formalise an appropriate causality-based concept of process or run for place/transition systems. The so-called individual token interpretation, where tokens are distinguished according to their causal history, giving rise to the processes of Goltz and Reisig, is often considered too detailed. The problem of defining a fully satisfying more abstract concept of process for general place/transition systems has so-far not been solved. In this paper, we recall the proposal of defining an abstract notion of process, here called BD-process, in terms of equivalence classes of Goltz-Reisig processes, using an equivalence proposed by Best and Devillers. It yields a fully satisfying solution for at least all one-safe nets. However, for certain nets which intuitively have different conflicting behaviours, it yields only one maximal abstract process. Here we identify a class of place/transition systems, called structural conflict nets, where conflict and concurrency due to token multiplicity are clearly separated. We show that, in the case of structural conflict nets, the equivalence proposed by Best and Devillers yields a unique maximal abstract process only for conflict-free nets. Thereby BD-processes constitute a simple and fully satisfying solution in the class of structural conflict nets.
Compressed sensing for efficient random routing in multi-hop wireless sensor networks Compressed sensing (CS) is a novel theory based on the fact that certain signals can be recovered from a relatively small number of non-adaptive linear projections, when the original signals and the compression matrix own certain properties. In virtue of these advantages, compressed sensing, as a promising technique to deal with large amount of data, is attracting ever-increasing interests in the areas of wireless sensor networks where most of the sensing data are the same besides a few deviant ones. However, the applications of traditional CS in such settings are limited by the huge transport cost caused by dense measurement. To solve this problem, we propose several ameliorated random routing methods executed with sparse measurement based CS for efficient data gathering corresponding to different networking topologies in typical wireless sensor networking environment, and analyze the relevant performances comparing with those of the existing data gathering schemes, obtaining the conclusion that the proposed schemes are effective in signal reconstruction and efficient in reducing energy consumption cost by routing. Our proposed schemes are also available in heterogeneous networks, for the data to be dealt with in CS are not necessarily homogeneous.
Analyzing parliamentary elections based on voting advice application data The main goal of this paper is to model the values of Finnish citizens and the members of the parliament. To achieve this goal, two databases are combined: voting advice application data and the results of the parliamentary elections in 2011. First, the data is converted to a high-dimension space. Then, it is projected to two principal components. The projection allows us to visualize the main differences between the parties. The value grids are produced with a kernel density estimation method without explicitly using the questions of the voting advice application. However, we find meaningful interpretations for the axes in the visualizations with the analyzed data. Subsequently, all candidate value grids are weighted by the results of the parliamentary elections. The result can be interpreted as a distribution grid for Finnish voters' values.
1.207359
0.041594
0.024839
0.010302
0.003653
0.00193
0.001406
0.000757
0.000243
0.000039
0
0
0
0
FIOWHM operator and its application to multiple attribute group decision making To study the problem of multiple attribute decision making in which the decision making information values are triangular fuzzy number, a new group decision making method is proposed. Then the calculation steps to solve it are given. As the key step, a new operator called fuzzy induced ordered weighted harmonic mean (FIOWHM) operator is proposed and a method based on the fuzzy weighted harmonic mean (FWHM) operator and FIOWHM operators for fuzzy MAGDM is presented. The priority based on possibility degree for the fuzzy multiple attribute decision making problem is proposed. At last, a numerical example is provided to illustrate the proposed method. The result shows the approach is simple, effective and easy to calculate.
Comparing approximate reasoning and probabilistic reasoning using the Dempster--Shafer framework We investigate the problem of inferring information about the value of a variable V from its relationship with another variable U and information about U. We consider two approaches, one using the fuzzy set based theory of approximate reasoning and the other using probabilistic reasoning. Both of these approaches allow the inclusion of imprecise granular type information. The inferred values from each of these methods are then represented using a Dempster-Shafer belief structure. We then compare these values and show an underling unity between these two approaches.
THE FUZZY GENERALIZED OWA OPERATOR AND ITS APPLICATION IN STRATEGIC DECISION MAKING We present the fuzzy generalized ordered weighted averaging (FGOWA) operator. It is an extension of the GOWA operator for uncertain situations where the available information is given in the form of fuzzy numbers. This generalization includes a wide range of mean operators such as the fuzzy average (FA), the fuzzy OWA (FOWA), and the fuzzy generalized mean (FGM). We also develop a further generalization by using quasi-arithmetic means that we call the quasi-FOWA operator. The article ends with an illustrative example where we apply the new approach in the selection of strategies.
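The fuzzy arithmetic itself is not reproduced here; the sketch below only shows the crisp GOWA operator that the FGOWA extension builds on, with hypothetical scores and weights. Setting lambda = 1 recovers the plain OWA and lambda = -1 an ordered weighted harmonic mean.

```python
# Crisp GOWA operator underlying the fuzzy extension described above:
# GOWA(a_1, ..., a_n) = ( sum_j w_j * b_j**lam )**(1/lam), with b_j the j-th
# largest argument.  The fuzzy version replaces the a_i by fuzzy numbers.
import numpy as np

def gowa(values, weights, lam=1.0):
    ordered = np.sort(np.asarray(values, dtype=float))[::-1]   # descending reorder
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0) and len(ordered) == len(weights)
    return float(np.sum(weights * ordered**lam) ** (1.0 / lam))

scores = [0.7, 0.3, 0.9, 0.5]          # hypothetical ratings of one strategy
w = [0.4, 0.3, 0.2, 0.1]               # weights applied to the ordered values
for lam in (1.0, 2.0, -1.0):
    print(f"lambda={lam:+.0f}  GOWA={gowa(scores, w, lam):.4f}")
```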
Using trapezoids for representing granular objects: Applications to learning and OWA aggregation We discuss the role and benefits of using trapezoidal representations of granular information. We focus on the use of level sets as a tool for implementing many operations on trapezoidal sets. We point out the simplification that the linearity of the trapezoid brings by requiring us to perform operations on only two level sets. We investigate the classic learning algorithm in the case when our observations are granule objects represented as trapezoidal fuzzy sets. An important issue that arises is the adverse effect that very uncertain observations have on the quality of our estimates. We suggest an approach to addressing this problem using the specificity of the observations to control its effect. We next consider the OWA aggregation of information represented as trapezoids. An important problem that arises here is the ordering of the trapezoidal fuzzy sets needed for the OWA aggregation. We consider three approaches to accomplish this ordering based on the location, specificity and fuzziness of the trapezoids. From these three different approaches three fundamental methods of ordering are developed. One based on the mean of the 0.5 level sets, another based on the length of the 0.5 level sets and a third based on the difference in lengths of the core and support level sets. Throughout this work particular emphasis is placed on the simplicity of working with trapezoids while still retaining a rich representational capability.
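A minimal sketch of the level-set simplification described above, assuming the usual (a, b, c, d) parameterization of a trapezoidal fuzzy set; the numbers are hypothetical and the learning and OWA-ordering machinery of the paper is not reproduced.

```python
# Level sets (alpha-cuts) of a trapezoidal fuzzy set (a, b, c, d):
# cut(alpha) = [a + alpha*(b - a), d - alpha*(d - c)].  Because the sides are
# linear, every cut is determined by the support (alpha = 0) and the core
# (alpha = 1), which is the two-level-set simplification the paper exploits.
def trapezoid_cut(a, b, c, d, alpha):
    assert a <= b <= c <= d and 0.0 <= alpha <= 1.0
    return (a + alpha * (b - a), d - alpha * (d - c))

T = (1.0, 2.0, 3.0, 5.0)                     # hypothetical granule "about 2 to 3"
for alpha in (0.0, 0.5, 1.0):
    lo, hi = trapezoid_cut(*T, alpha)
    print(f"alpha={alpha:.1f}  cut=[{lo:.2f}, {hi:.2f}]")

# Interval addition of two trapezoids reduces to adding endpoints of the two cuts:
U = (0.5, 1.0, 1.5, 2.0)
lo = [trapezoid_cut(*T, a)[0] + trapezoid_cut(*U, a)[0] for a in (0.0, 1.0)]
hi = [trapezoid_cut(*T, a)[1] + trapezoid_cut(*U, a)[1] for a in (0.0, 1.0)]
print("sum trapezoid (a, b, c, d):", (lo[0], lo[1], hi[1], hi[0]))
```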
Type-1 OWA operators for aggregating uncertain information with uncertain weights induced by type-2 linguistic quantifiers The OWA operator proposed by Yager has been widely used to aggregate experts' opinions or preferences in human decision making. Yager's traditional OWA operator focuses exclusively on the aggregation of crisp numbers. However, experts usually tend to express their opinions or preferences in a very natural way via linguistic terms. These linguistic terms can be modelled or expressed by (type-1) fuzzy sets. In this paper, we define a new type of OWA operator, the type-1 OWA operator that works as an uncertain OWA operator to aggregate type-1 fuzzy sets with type-1 fuzzy weights, which can be used to aggregate the linguistic opinions or preferences in human decision making with linguistic weights. The procedure for performing type-1 OWA operations is analysed. In order to identify the linguistic weights associated to the type-1 OWA operator, type-2 linguistic quantifiers are proposed. The problem of how to derive linguistic weights used in type-1 OWA aggregation given such type of quantifier is solved. Examples are provided to illustrate the proposed concepts.
Evaluating new product development performance by fuzzy linguistic computing New product development (NPD) is indeed the cornerstone for companies to maintain and enhance the competitive edge. However, developing new products is a complex and risky decision-making process. It involves a search of the environment for opportunities, the generation of project options, and the evaluation by different experts of multiple attributes, both qualitative and quantitative. To perceive and to measure effectively the capability of NPD are real challenging tasks for business managers. This paper presents a 2-tuple fuzzy linguistic computing approach to deal with heterogeneous information and information loss problems during the processes of subjective evaluation integration. The proposed method which is based on the group decision-making scenario to assist business managers to measure the performance of NPD manipulates the heterogeneous integration processes and avoids the information loss effectively. Finally, its feasibility is demonstrated by the result of NPD performance evaluation for a high-technology company in Taiwan.
Computing With Words for Hierarchical Decision Making Applied to Evaluating a Weapon System The perceptual computer (Per-C) is an architecture that makes subjective judgments by computing with words (CWWs). This paper applies the Per-C to hierarchical decision making, which means decision making based on comparing the performance of competing alternatives, where each alternative is first evaluated based on hierarchical criteria and subcriteria, and then, these alternatives are compared to arrive at either a single winner or a subset of winners. What can make this challenging is that the inputs to the subcriteria and criteria can be numbers, intervals, type-1 fuzzy sets, or even words modeled by interval type-2 fuzzy sets. Novel weighted averages are proposed in this paper as a CWW engine in the Per-C to aggregate these diverse inputs. A missile-evaluation problem is used to illustrate it. The main advantages of our approaches are that diverse inputs can be aggregated, and uncertainties associated with these inputs can be preserved and are propagated into the final evaluation.
Group decision-making model using fuzzy multiple attributes analysis for the evaluation of advanced manufacturing technology Selection of advanced manufacturing technology is important for improving manufacturing system competitiveness. This study builds a group decision-making model using fuzzy multiple attributes analysis to evaluate the suitability of manufacturing technology. Since numerous attributes have been considered in evaluating the manufacturing technology suitability, most information available in this stage is subjective and imprecise, and fuzzy sets theory provides a mathematical framework for modeling imprecision and vagueness. The proposed approach involved developing a fusion method of fuzzy information, which was assessed using both linguistic and numerical scales. In addition, an interactive decision analysis is developed to make a consistent decision. When evaluating the suitability of manufacturing technology, it may be necessary to improve upon the technology, and naturally advanced manufacturing technology is seen as the best direction for improvement. The flexible manufacturing system adopted in the Taiwanese bicycle industry is used in this study to illustrate the computational process of the proposed method. The results of this study are more objective and unbiased, owing to being generated by a group of decision-makers.
A Definition of a Nonprobabilistic Entropy in the Setting of Fuzzy Sets Theory
Modeling rationality in a linguistic framework In classical decision theory there exists a large class of rationality models which try to capture different kinds of behavior when individuals compare by pairs a set of alternatives. All these models assume that decision makers have dichotomous preferences. However, in real decisions individuals feel different degrees of preference. In this paper we have checked the mentioned models in a real case where different kinds of linguistic preferences are allowed. After the empirical analysis, the main conclusion is that the fulfillment of rational conditions decreases when individuals have non-extreme preferences. Based on the obtained empirical evidences, we propose some classes of transitivity conditions in the framework of linguistic preferences.
Counter braids: a novel counter architecture for per-flow measurement Fine-grained network measurement requires routers and switches to update large arrays of counters at very high link speed (e.g. 40 Gbps). A naive algorithm needs an infeasible amount of SRAM to store both the counters and a flow-to-counter association rule, so that arriving packets can update corresponding counters at link speed. This has made accurate per-flow measurement complex and expensive, and motivated approximate methods that detect and measure only the large flows. This paper revisits the problem of accurate per-flow measurement. We present a counter architecture, called Counter Braids, inspired by sparse random graph codes. In a nutshell, Counter Braids "compresses while counting". It solves the central problems (counter space and flow-to-counter association) of per-flow measurement by "braiding" a hierarchy of counters with random graphs. Braiding results in drastic space reduction by sharing counters among flows; and using random graphs generated on-the-fly with hash functions avoids the storage of flow-to-counter association. The Counter Braids architecture is optimal (albeit with a complex decoder) as it achieves the maximum compression rate asymptotically. For implementation, we present a low-complexity message passing decoding algorithm, which can recover flow sizes with essentially zero error. Evaluation on Internet traces demonstrates that almost all flow sizes are recovered exactly with only a few bits of counter space per flow.
A Representation-Oriented Taxonomy of Gradation Gradation, the presence of gradual rather than abrupt boundaries around geographic entities, is one of the many complexities of geography which is beginning to be investigated for representation and analysis in formal models. Much of the research to date has been focused on specific applications, but some are starting to look at the underlying theory behind this phenomenon, leading toward better understanding and better models. This work extends this theory with a taxonomy which describes and explains gradational situations, focusing on issues related to formal representation. This taxonomy has been beneficial in developing methods of representing this phenomenon in GIS and maps.
Interval-valued reduced order statistical interconnect modeling We show how recent advances in the handling of correlated interval representations of range uncertainty can be used to predict the impact of statistical manufacturing variations on linear interconnect. We represent correlated statistical variations in RLC parameters as sets of correlated intervals, and show how classical model order reduction methods - AWE and PRIMA - can be re-targeted to compute interval-valued, rather than scalar-valued reductions. By applying a statistical interpretation and sampling to the resulting compact interval-valued model, we can efficiently estimate the impact of variations on the original circuit. Results show the technique can predict mean delay with errors between 5-10%, for correlated RLC parameter variations up to 35%.
Increasing Depth Resolution Of Electron Microscopy Of Neural Circuits Using Sparse Tomographic Reconstruction Future progress in neuroscience hinges on reconstruction of neuronal circuits to the level of individual synapses. Because of the specifics of neuronal architecture, imaging must be done with very high resolution and throughput. While Electron Microscopy (EM) achieves the required resolution in the transverse directions, its depth resolution is a severe limitation. Computed tomography (CT) may be used in conjunction with electron microscopy to improve the depth resolution, but this severely limits the throughput since several tens or hundreds of EM images need to be acquired. Here, we exploit recent advances in signal processing to obtain high depth resolution EM images computationally. First, we show that the brain tissue can be represented as sparse linear combination of local basis functions that are thin membrane-like structures oriented in various directions. We then develop reconstruction techniques inspired by compressive sensing that can reconstruct the brain tissue from very few (typically 5) tomographic views of each section. This enables tracing of neuronal connections across layers and, hence, high throughput reconstruction of neural circuits to the level of individual synapses.
1.111766
0.12
0.031987
0.020536
0.005218
0.000344
0.000107
0.000023
0.000008
0.000002
0
0
0
0
Tolerating correlated failures in Massively Parallel Stream Processing Engines Fault-tolerance techniques for stream processing engines can be categorized into passive and active approaches. A typical passive approach periodically checkpoints a processing task's runtime states and can recover a failed task by restoring its runtime state using its latest checkpoint. On the other hand, an active approach usually employs backup nodes to run replicated tasks. Upon failure, the active replica can take over the processing of the failed task with minimal latency. However, both approaches have their own inadequacies in Massively Parallel Stream Processing Engines (MPSPE). The passive approach incurs a long recovery latency especially when a number of correlated nodes fail simultaneously, while the active approach requires extra replication resources. In this paper, we propose a new fault-tolerance framework, which is Passive and Partially Active (PPA). In a PPA scheme, the passive approach is applied to all tasks while only a selected set of tasks will be actively replicated. The number of actively replicated tasks depends on the available resources. If tasks without active replicas fail, tentative outputs will be generated before the completion of the recovery process. We also propose effective and efficient algorithms to optimize a partially active replication plan to maximize the quality of tentative outputs. We implemented PPA on top of Storm, an open-source MPSPE and conducted extensive experiments using both real and synthetic datasets to verify the effectiveness of our approach.
Minimum Backups for Stream Processing With Recovery Latency Guarantees. The stream processing model continuously processes online data in a one-pass fashion that can be more vulnerable to failures than other big-data processing schemes. Existing fault-tolerant (FT) approaches have been presented to enhance the reliability of stream processing systems. However, the fundamental tradeoff between recovery latency and FT overhead is still unclear, so these schemes cannot pr...
Task Allocation for Stream Processing with Recovery Latency Guarantee Stream processing applications continuously process large amounts of online streaming data in real-time or near real-time. They have strict latency constraints, but they are also vulnerable to failures. Failure recoveries may slow down the entire processing pipeline and break latency constraints. Upstream backup is one of the most widely applied fault-tolerant schemes for stream processing systems. It introduces complex backup dependencies to tasks, and increases the difficulty of controlling recovery latencies. Moreover, when dependent tasks are located on the same processor, they fail at the same time in processor-level failures, bringing extra recovery latencies that increase the impacts of failures. This paper presents a correlated failure effect model to describe the recovery latency of a stream topology in processor-level failures for an allocation plan. We introduce a Recovery-latency-aware Task Allocation Problem (RTAP) that seeks task allocation plans for stream topologies that will achieve guaranteed recovery latencies. We present a heuristic algorithm with a computational complexity of O(nlog^2n) to solve the problem. Extensive experiments were conducted to verify the correctness and effectiveness of our approach.
A latency and fault-tolerance optimizer for online parallel query plans We address the problem of making online, parallel query plans fault-tolerant: i.e., provide intra-query fault-tolerance without blocking. We develop an approach that not only achieves this goal but does so through the use of different fault-tolerance techniques at different operators within a query plan. Enabling each operator to use a different fault-tolerance strategy leads to a space of fault-tolerance plans amenable to cost-based optimization. We develop FTOpt, a cost-based fault-tolerance optimizer that automatically selects the best strategy for each operator in a query plan in a manner that minimizes the expected processing time with failures for the entire query. We implement our approach in a prototype parallel query-processing engine. Our experiments demonstrate that (1) there is no single best fault-tolerance strategy for all query plans, (2) often hybrid strategies that mix-and-match recovery techniques outperform any uniform strategy, and (3) our optimizer correctly identifies winning fault-tolerance configurations.
Discretized streams: fault-tolerant streaming computation at scale Many "big data" applications must act on data in real time. Running these applications at ever-larger scales requires parallel platforms that automatically handle faults and stragglers. Unfortunately, current distributed stream processing models provide fault recovery in an expensive manner, requiring hot replication or long recovery times, and do not handle stragglers. We propose a new processing model, discretized streams (D-Streams), that overcomes these challenges. D-Streams enable a parallel recovery mechanism that improves efficiency over traditional replication and backup schemes, and tolerates stragglers. We show that they support a rich set of operators while attaining high per-node throughput similar to single-node systems, linear scaling to 100 nodes, sub-second latency, and sub-second fault recovery. Finally, D-Streams can easily be composed with batch and interactive query models like MapReduce, enabling rich applications that combine these modes. We implement D-Streams in a system called Spark Streaming.
IBM infosphere streams for scalable, real-time, intelligent transportation services With the widespread adoption of location tracking technologies like GPS, the domain of intelligent transportation services has seen growing interest in the last few years. Services in this domain make use of real-time location-based data from a variety of sources, combine this data with static location-based data such as maps and points of interest databases, and provide useful information to end-users. Some of the major challenges in this domain include i) scalability, in terms of processing large volumes of real-time and static data; ii) extensibility, in terms of being able to add new kinds of analyses on the data rapidly, and iii) user interaction, in terms of being able to support different kinds of one-time and continuous queries from the end-user. In this paper, we demonstrate the use of IBM InfoSphere Streams, a scalable stream processing platform, for tackling these challenges. We describe a prototype system that generates dynamic, multi-faceted views of transportation information for the city of Stockholm, using real vehicle GPS and road-network data. The system also continuously derives current traffic statistics, and provides useful value-added information such as shortest-time routes from real-time observed and inferred traffic conditions. Our performance experiments illustrate the scalability of the system. For instance, our system can process over 120000 incoming GPS points per second, combine it with a map containing over 600,000 links, continuously generate different kinds of traffic statistics and answer user queries.
First-order incremental block-based statistical timing analysis Variability in digital integrated circuits makes timing verification an extremely challenging task. In this paper, a canonical first order delay model is proposed that takes into account both correlated and independent randomness. A novel linear-time block-based statistical timing algorithm is employed to propagate timing quantities like arrival times and required arrival times through the timing graph in this canonical form. At the end of the statistical timing, the sensitivities of all timing quantities to each of the sources of variation are available. Excessive sensitivities can then be targeted by manual or automatic optimization methods to improve the robustness of the design. This paper also reports the first incremental statistical timer in the literature which is suitable for use in the inner loop of physical synthesis or other optimization programs. The third novel contribution of this paper is the computation of local and global criticality probabilities. For a very small cost in CPU time, the probability of each edge or node of the timing graph being critical is computed. Numerical results are presented on industrial ASIC chips with over two million logic gates.
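A rough sketch of the canonical first-order form and of how sums propagate it, assuming two shared unit-normal global variation sources and one independent random term per delay; the statistical max (Clark-style approximation) and the criticality computations of the paper are deliberately omitted.

```python
# Canonical first-order delay: d = mean + sum_i a[i]*dX_i + a_r*dR, with dX_i
# shared unit-normal global sources and dR an independent unit-normal term.
# Summing two canonical delays (series edges in the timing graph) adds means and
# global sensitivities; independent parts add in quadrature.  The statistical
# max operation used in block-based propagation is not sketched here.
import numpy as np

class CanonicalDelay:
    def __init__(self, mean, a_global, a_indep):
        self.mean = float(mean)
        self.a = np.asarray(a_global, dtype=float)   # sensitivities to shared sources
        self.r = float(a_indep)                      # independently random part
    def __add__(self, other):
        return CanonicalDelay(self.mean + other.mean,
                              self.a + other.a,
                              np.hypot(self.r, other.r))
    def sigma(self):
        return float(np.sqrt(np.sum(self.a**2) + self.r**2))

# Two hypothetical gate delays (in ps) sharing two global variation sources.
d1 = CanonicalDelay(100.0, [5.0, 2.0], 3.0)
d2 = CanonicalDelay( 80.0, [4.0, 1.0], 2.0)
path = d1 + d2
print(f"path mean={path.mean:.1f} ps  sigma={path.sigma():.2f} ps  sensitivities={path.a}")
```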
Overview of the Scalable Video Coding Extension of the H.264/AVC Standard With the introduction of the H.264/AVC video coding standard, significant improvements have recently been demonstrated in video compression capability. The Joint Video Team of the ITU-T VCEG and the ISO/IEC MPEG has now also standardized a Scalable Video Coding (SVC) extension of the H.264/AVC standard. SVC enables the transmission and decoding of partial bit streams to provide video services with lower temporal or spatial resolutions or reduced fidelity while retaining a reconstruction quality that is high relative to the rate of the partial bit streams. Hence, SVC provides functionalities such as graceful degradation in lossy transmission environments as well as bit rate, format, and power adaptation. These functionalities provide enhancements to transmission and storage applications. SVC has achieved significant improvements in coding efficiency with an increased degree of supported scalability relative to the scalable profiles of prior video coding standards. This paper provides an overview of the basic concepts for extending H.264/AVC towards SVC. Moreover, the basic tools for providing temporal, spatial, and quality scalability are described in detail and experimentally analyzed regarding their efficiency and complexity.
Type-2 fuzzy ontology-based semantic knowledge for collision avoidance of autonomous underwater vehicles. The volume of obstacles encountered in the marine environment is rapidly increasing, which makes the development of collision avoidance systems more challenging. Several fuzzy ontology-based simulators have been proposed to provide a virtual platform for the analysis of maritime missions. However, due to the simulators’ limitations, ontology-based knowledge cannot be utilized to evaluate maritime robot algorithms and to avoid collisions. The existing simulators must be equipped with smart semantic domain knowledge to provide an efficient framework for the decision-making system of AUVs. This article presents type-2 fuzzy ontology-based semantic knowledge (T2FOBSK) and a simulator for marine users that will reduce experimental time and the cost of marine robots and will evaluate algorithms intelligently. The system reformulates the user’s query to extract the positions of AUVs and obstacles and convert them to a proper format for the simulator. The simulator uses semantic knowledge to calculate the degree of collision risk and to avoid obstacles. The available type-1 fuzzy ontology-based approach cannot extract intensively blurred data from the hazy marine environment to offer actual solutions. Therefore, we propose a type-2 fuzzy ontology to provide accurate information about collision risk and the marine environment during real-time marine operations. Moreover, the type-2 fuzzy ontology is designed using Protégé OWL-2 tools. The DL query and SPARQL query are used to evaluate the ontology. The distance to closest point of approach (DCPA), time to closest point of approach (TCPA) and variation of compass degree (VCD) are used to calculate the degree of collision risk between AUVs and obstacles. The experimental and simulation results show that the proposed architecture is highly efficient and highly productive for marine missions and the real-time decision-making system of AUVs.
Is there a need for fuzzy logic? "Is there a need for fuzzy logic?" is an issue which is associated with a long history of spirited discussions and debate. There are many misconceptions about fuzzy logic. Fuzzy logic is not fuzzy. Basically, fuzzy logic is a precise logic of imprecision and approximate reasoning. More specifically, fuzzy logic may be viewed as an attempt at formalization/mechanization of two remarkable human capabilities. First, the capability to converse, reason and make rational decisions in an environment of imprecision, uncertainty, incompleteness of information, conflicting information, partiality of truth and partiality of possibility - in short, in an environment of imperfect information. And second, the capability to perform a wide variety of physical and mental tasks without any measurements and any computations [L.A. Zadeh, From computing with numbers to computing with words - from manipulation of measurements to manipulation of perceptions, IEEE Transactions on Circuits and Systems 45 (1999) 105-119; L.A. Zadeh, A new direction in AI - toward a computational theory of perceptions, AI Magazine 22 (1) (2001) 73-84]. In fact, one of the principal contributions of fuzzy logic - a contribution which is widely unrecognized - is its high power of precisiation. Fuzzy logic is much more than a logical system. It has many facets. The principal facets are: logical, fuzzy-set-theoretic, epistemic and relational. Most of the practical applications of fuzzy logic are associated with its relational facet. In this paper, fuzzy logic is viewed in a nonstandard perspective. In this perspective, the cornerstones of fuzzy logic - and its principal distinguishing features - are: graduation, granulation, precisiation and the concept of a generalized constraint. A concept which has a position of centrality in the nontraditional view of fuzzy logic is that of precisiation. Informally, precisiation is an operation which transforms an object, p, into an object, p^*, which in some specified sense is defined more precisely than p. The object of precisiation and the result of precisiation are referred to as precisiend and precisiand, respectively. In fuzzy logic, a differentiation is made between two meanings of precision - precision of value, v-precision, and precision of meaning, m-precision. Furthermore, in the case of m-precisiation a differentiation is made between mh-precisiation, which is human-oriented (nonmathematical), and mm-precisiation, which is machine-oriented (mathematical). A dictionary definition is a form of mh-precisiation, with the definiens and definiendum playing the roles of precisiend and precisiand, respectively. Cointension is a qualitative measure of the proximity of meanings of the precisiend and precisiand. A precisiand is cointensive if its meaning is close to the meaning of the precisiend. A concept which plays a key role in the nontraditional view of fuzzy logic is that of a generalized constraint. If X is a variable then a generalized constraint on X, GC(X), is expressed as X isr R, where R is the constraining relation and r is an indexical variable which defines the modality of the constraint, that is, its semantics. The primary constraints are: possibilistic (r=blank), probabilistic (r=p) and veristic (r=v). The standard constraints are: bivalent possibilistic, probabilistic and bivalent veristic. In large measure, science is based on standard constraints. Generalized constraints may be combined, qualified, projected, propagated and counterpropagated. The set of all generalized constraints, together with the rules which govern generation of generalized constraints, is referred to as the generalized constraint language, GCL. The standard constraint language, SCL, is a subset of GCL. In fuzzy logic, propositions, predicates and other semantic entities are precisiated through translation into GCL. Equivalently, a semantic entity, p, may be precisiated by representing its meaning as a generalized constraint. By construction, fuzzy logic has a much higher level of generality than bivalent logic. It is the generality of fuzzy logic that underlies much of what fuzzy logic has to offer. Among the important contributions of fuzzy logic are the following: 1. FL-generalization. Any bivalent-logic-based theory, T, may be FL-generalized, and hence upgraded, through addition to T of concepts and techniques drawn from fuzzy logic. Examples: fuzzy control, fuzzy linear programming, fuzzy probability theory and fuzzy topology. 2. Linguistic variables and fuzzy if-then rules. The formalism of linguistic variables and fuzzy if-then rules is, in effect, a powerful modeling language which is widely used in applications of fuzzy logic. Basically, the formalism serves as a means of summarization and information compression through the use of granulation. 3. Cointensive precisiation. Fuzzy logic has a high power of cointensive precisiation. This power is needed for a formulation of cointensive definitions of scientific concepts and cointensive formalization of human-centric fields such as economics, linguistics, law, conflict resolution, psychology and medicine. 4. NL-Computation (computing with words). Fuzzy logic serves as a basis for NL-Computation, that is, computation with information described in natural language. NL-Computation is of direct relevance to mechanization of natural language understanding and computation with imprecise probabilities. More generally, NL-Computation is needed for dealing with second-order uncertainty, that is, uncertainty about uncertainty, or uncertainty^2 for short. In summary, progression from bivalent logic to fuzzy logic is a significant positive step in the evolution of science. In large measure, the real-world is a fuzzy world. To deal with fuzzy reality what is needed is fuzzy logic. In coming years, fuzzy logic is likely to grow in visibility, importance and acceptance.
A Fast Approach for Overcomplete Sparse Decomposition Based on Smoothed ell 0 Norm In this paper, a fast algorithm for overcomplete sparse decomposition, called SL0, is proposed. The algorithm is essentially a method for obtaining sparse solutions of underdetermined systems of linear equations, and its applications include underdetermined sparse component analysis (SCA), atomic decomposition on overcomplete dictionaries, compressed sensing, and decoding real field codes. Contrar...
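A hedged sketch of the smoothed-ℓ0 idea as summarized above: a Gaussian surrogate for the ℓ0 count is improved by a few ascent steps per value of a decreasing σ, with a projection back onto {x : Ax = y} after each step. Step sizes, schedules and problem sizes below are illustrative choices, not the paper's tuned settings.

```python
# Sketch of smoothed-l0 (SL0): maximize sum_i exp(-x_i^2 / (2 sigma^2)) over
# {x : A x = y} while gradually decreasing sigma, so the surrogate approaches
# a count of zero entries.  All constants below are illustrative.
import numpy as np

def sl0(A, y, sigma_decrease=0.5, inner_iters=3, mu=2.0, sigma_min=1e-4):
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                            # minimum-l2-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            step = x * np.exp(-x**2 / (2.0 * sigma**2))   # sigma^2-scaled ascent direction
            x = x - mu * step
            x = x - A_pinv @ (A @ x - y)                  # project back onto A x = y
        sigma *= sigma_decrease
    return x

rng = np.random.default_rng(0)
n, m, k = 100, 40, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = sl0(A, A @ x_true)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```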
Objective Video Quality Assessment Methods: A Classification, Review, and Performance Comparison With the increasing demand for video-based applications, the reliable prediction of video quality has increased in importance. Numerous video quality assessment methods and metrics have been proposed over the past years with varying computational complexity and accuracy. In this paper, we introduce a classification scheme for full-reference and reduced-reference media-layer objective video quality assessment methods. Our classification scheme first classifies a method according to whether natural visual characteristics or perceptual (human visual system) characteristics are considered. We further subclassify natural visual characteristics methods into methods based on natural visual statistics or natural visual features. We subclassify perceptual characteristics methods into frequency or pixel-domain methods. According to our classification scheme, we comprehensively review and compare the media-layer objective video quality models for both standard resolution and high definition video. We find that the natural visual statistics based MultiScale-Structural SIMilarity index (MS-SSIM), the natural visual feature based Video Quality Metric (VQM), and the perceptual spatio-temporal frequency-domain based MOtion-based Video Integrity Evaluation (MOVIE) index give the best performance for the LIVE Video Quality Database.
Spectral Methods for Parameterized Matrix Equations. We apply polynomial approximation methods-known in the numerical PDEs context as spectral methods-to approximate the vector-valued function that satisfies a linear system of equations where the matrix and the right-hand side depend on a parameter. We derive both an interpolatory pseudospectral method and a residual-minimizing Galerkin method, and we show how each can be interpreted as solving a truncated infinite system of equations; the difference between the two methods lies in where the truncation occurs. Using classical theory, we derive asymptotic error estimates related to the region of analyticity of the solution, and we present a practical residual error estimate. We verify the results with two numerical examples.
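A small sketch of the interpolatory pseudospectral route on a toy 2x2 system with an assumed affine parameter dependence A(s) = A0 + s*A1, s uniform on [-1, 1]: solve at Gauss-Legendre nodes and recover Legendre coefficients by discrete projection. The residual-minimizing Galerkin variant and the error estimates from the paper are not reproduced.

```python
# Interpolatory pseudospectral sketch for A(s) x(s) = b on s in [-1, 1]:
# solve at Gauss-Legendre nodes and project each component of x(s) onto
# Legendre polynomials.  The 2x2 affine dependence below is hypothetical.
import numpy as np
from numpy.polynomial import legendre as L

A0 = np.array([[4.0, 1.0], [1.0, 3.0]])
A1 = np.array([[1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 2.0])
A = lambda s: A0 + s * A1                       # invertible for all s in [-1, 1]

deg, nq = 6, 10
nodes, weights = L.leggauss(nq)
X = np.array([np.linalg.solve(A(s), b) for s in nodes])       # (nq, 2) samples

# c_k = (2k+1)/2 * sum_i w_i P_k(s_i) x(s_i), from Legendre orthogonality.
P = np.stack([L.legval(nodes, np.eye(deg + 1)[k]) for k in range(deg + 1)])
coeffs = 0.5 * (2 * np.arange(deg + 1) + 1)[:, None] * (P * weights) @ X

s_test = 0.37
x_spec = L.legval(s_test, coeffs)               # evaluate the expansion componentwise
print("spectral:", x_spec, " direct:", np.linalg.solve(A(s_test), b))
```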
Soft computing based on interval valued fuzzy ANP-A novel methodology Analytic Network Process (ANP) is a multi-criteria decision making (MCDM) tool which takes into account complex relationships among parameters. In this paper, we develop the interval-valued fuzzy ANP (IVF-ANP) to solve MCDM problems since it allows interdependent influences specified in the model and generalizes the supermatrix approach. Furthermore, performance rating values as well as the weights of criteria are linguistic terms which can be expressed in IVF numbers (IVFN). Moreover, we present a novel methodology proposed for solving MCDM problems. In the proposed methodology, the weights of the criteria are determined by applying the IVF-ANP method. Then, we appraise the performance of alternatives against criteria via linguistic variables which are expressed as triangular interval-valued fuzzy numbers. Afterward, the final ranking of the alternatives is obtained by utilizing the IVF weights from IVF-ANP and applying the IVF-TOPSIS and IVF-VIKOR methods. Additionally, to demonstrate the procedural implementation of the proposed model and its effectiveness, we apply it to a case study assessing the performance of property responsibility insurance companies.
1.026594
0.02875
0.025
0.018126
0.008928
0.001344
0
0
0
0
0
0
0
0
Improved bounds for a deterministic sublinear-time Sparse Fourier Algorithm Abstract—This paper improves on the best-known runtime and measurement bounds for a recently proposed Deterministic sublinear-time Sparse Fourier Transform algorithm (hereafter called DSFT). In [1], [2], it is shown that DSFT can exactly reconstruct the Fourier transform (FT) of an N-bandwidth signal f, consisting of B ≪ N non-zero frequencies, using O(B · polylog(N)) f-samples. DSFT works by taking advantage of natural aliasing phenomena to hash a frequency-sparse signal's FT information modulo O(B · polylog(N)) pairwise coprime numbers via O(B · polylog(N)) small Discrete Fourier Transforms. Number theoretic arguments then guarantee the original DFT frequencies/coefficients can be recovered via the Chinese Remainder Theorem. DSFT's usage of primes makes its runtime and signal sample requirements highly dependent on the sizes of sums and products of small primes. Our new bounds utilize analytic number theoretic techniques to generate improved (asymptotic) bounds for DSFT. As a result, we provide better bounds for the sampling complexity/number of low-rate analog-to-digital converters (ADCs) required to deterministically recover frequency-sparse wideband signals via DSFT in signal processing applications [3], [4]. Index Terms—Fourier transforms, Discrete Fourier transforms, Algorithms, Number theory, Signal processing
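A minimal illustration of the Chinese Remainder Theorem step the abstract relies on, with made-up moduli and a made-up frequency index rather than the prime sets analyzed in the paper.

```python
# Chinese Remainder Theorem step from the abstract: a frequency index below the
# product of pairwise coprime moduli is uniquely determined by its residues,
# which is how aliasing information from the small DFTs is stitched together.
# Moduli and the frequency value are illustrative, not the paper's choices.
from math import prod

def crt(residues, moduli):
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)       # pow(a, -1, m): modular inverse (Python 3.8+)
    return x % M

moduli = [101, 103, 107, 109]              # pairwise coprime, product exceeds the bandwidth
freq = 9_876_543                           # hypothetical true frequency index
residues = [freq % m for m in moduli]      # what the small aliased DFTs would reveal
assert crt(residues, moduli) == freq
print("recovered frequency:", crt(residues, moduli), "from residues", residues)
```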
Near-Optimal Sparse Recovery in the L1 Norm We consider the *approximate sparse recovery problem*, where the goal is to (approximately) recover a high-dimensional vector x ∈ R^n from its lower-dimensional *sketch* Ax ∈ R^m. Specifically, we focus on the sparse recovery problem in the L1 norm: for a parameter k, given the sketch Ax, compute an approximation x' of x such that the L1 approximation error ||x - x'||_1 is close to the minimum of ||x - x*||_1 over all vectors x* with at most k terms. The sparse recovery problem has been subject to extensive research over the last few years. Many solutions to this problem have been discovered, achieving different trade-offs between various attributes, such as the sketch length, encoding and recovery times. In this paper we provide a sparse recovery scheme which achieves close to optimal performance on virtually all attributes. In particular, this is the first recovery scheme that guarantees O(k log(n/k)) sketch length and near-linear O(n log(n/k)) recovery time *simultaneously*. It also features low encoding and update times, and is noise-resilient.
Explicit constructions for compressed sensing of sparse signals Over the recent years, a new approach for obtaining a succinct approximate representation of n-dimensional vectors (or signals) has been discovered. For any signal x, the succinct representation of x is equal to Ax, where A is a carefully chosen R x n real matrix, R ≪ n. Often, A is chosen at random from some distribution over R x n matrices. The vector Ax is often referred to as the measurement vector or a sketch of x. Although the dimension of Ax is much smaller than that of x, it contains plenty of useful information about x.
One sketch for all: fast algorithms for compressed sensing Compressed Sensing is a new paradigm for acquiring the compressible signals that arise in many applications. These signals can be approximated using an amount of information much smaller than the nominal dimension of the signal. Traditional approaches acquire the entire signal and process it to extract the information. The new approach acquires a small number of nonadaptive linear measurements of the signal and uses sophisticated algorithms to determine its information content. Emerging technologies can compute these general linear measurements of a signal at unit cost per measurement. This paper exhibits a randomized measurement ensemble and a signal reconstruction algorithm that satisfy four requirements: 1. The measurement ensemble succeeds for all signals, with high probability over the random choices in its construction. 2. The number of measurements of the signal is optimal, except for a factor polylogarithmic in the signal length. 3. The running time of the algorithm is polynomial in the amount of information in the signal and polylogarithmic in the signal length. 4. The recovery algorithm offers the strongest possible type of error guarantee. Moreover, it is a fully polynomial approximation scheme with respect to this type of error bound. Emerging applications demand this level of performance. Yet no otheralgorithm in the literature simultaneously achieves all four of these desiderata.
CoSaMP: Iterative signal recovery from incomplete and inaccurate samples Compressive sampling offers a new paradigm for acquiring signals that are compressible with respect to an orthonormal basis. The major algorithmic challenge in compressive sampling is to approximate a compressible signal from noisy samples. This paper describes a new iterative recovery algorithm called CoSaMP that delivers the same guarantees as the best optimization-based approaches. Moreover, this algorithm offers rigorous bounds on computational cost and storage. It is likely to be extremely efficient for practical problems because it requires only matrix-vector multiplies with the sampling matrix. For compressible signals, the running time is just O(N log^2 N), where N is the length of the signal. In applications, most signals of interest contain scant information relative to their ambient dimension, but the classical approach to signal acquisition ignores this fact. We usually collect a complete representation of the target signal and process this representation to sieve out the actionable information. Then we discard the rest. Contemplating this ugly inefficiency, one might ask if it is possible instead to acquire compressive samples. In other words, is there some type of measurement that automatically winnows out the information from a signal? Incredibly, the answer is sometimes yes. Compressive sampling refers to the idea that, for certain types of signals, a small number of nonadaptive samples carries sufficient information to approximate the signal well. Research in this area has two major components: Sampling: How many samples are necessary to reconstruct signals to a specified precision? What type of samples? How can these sampling schemes be implemented in practice? Reconstruction: Given the compressive samples, what algorithms can efficiently construct a signal approximation?
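A compact, hedged sketch of the CoSaMP iteration as described: form the signal proxy, merge the 2k largest proxy locations with the current support, solve a least-squares problem on the merged support, prune to k terms, and update the residual. The stopping rule is simplified to a fixed iteration count and the problem sizes are arbitrary.

```python
# Compact CoSaMP sketch: proxy = A^T r, merge the 2k largest-proxy indices with
# the current support, least-squares fit on the merged support, keep the k
# largest entries, update the residual.  Fixed iteration count for simplicity.
import numpy as np

def cosamp(A, y, k, iters=20):
    n = A.shape[1]
    x = np.zeros(n)
    r = y.copy()
    for _ in range(iters):
        proxy = A.T @ r
        omega = np.argsort(np.abs(proxy))[-2 * k:]               # 2k largest proxy entries
        T = np.union1d(omega, np.flatnonzero(x))                 # merge with current support
        b = np.zeros(n)
        b[T], *_ = np.linalg.lstsq(A[:, T], y, rcond=None)       # LS fit on merged support
        x = np.zeros(n)
        keep = np.argsort(np.abs(b))[-k:]
        x[keep] = b[keep]                                        # prune to k terms
        r = y - A @ x
    return x

rng = np.random.default_rng(3)
m, n, k = 60, 200, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = cosamp(A, A @ x_true, k)
print("support recovered:", set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x_true)))
```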
Simple and practical algorithm for sparse Fourier transform We consider the sparse Fourier transform problem: given a complex vector x of length n, and a parameter k, estimate the k largest (in magnitude) coefficients of the Fourier transform of x. The problem is of key interest in several areas, including signal processing, audio/image/video compression, and learning theory. We propose a new algorithm for this problem. The algorithm leverages techniques from digital signal processing, notably Gaussian and Dolph-Chebyshev filters. Unlike the typical approach to this problem, our algorithm is not iterative. That is, instead of estimating "large" coefficients, subtracting them and recursing on the remainder, it identifies and estimates the k largest coefficients in "one shot", in a manner akin to sketching/streaming algorithms. The resulting algorithm is structurally simpler than its predecessors. As a consequence, we are able to extend considerably the range of sparsity, k, for which the algorithm is faster than FFT, both in theory and practice.
Information-Theoretic Bounds On Sparsity Recovery In The High-Dimensional And Noisy Setting The problem of recovering the sparsity pattern of a fixed but unknown vector $\beta^* \in \mathbb{R}^p$ based on a set of n noisy observations arises in a variety of settings, including subset selection in regression, graphical model selection, signal denoising, compressive sensing, and constructive approximation. Of interest are conditions on the model dimension p, the sparsity index s (number of non-zero entries in $\beta^*$), and the number of observations n that are necessary and/or sufficient to ensure asymptotically perfect recovery of the sparsity pattern. This paper focuses on the information-theoretic limits of sparsity recovery: in particular, for a noisy linear observation model based on measurement vectors drawn from the standard Gaussian ensemble, we derive both a set of sufficient conditions for asymptotically perfect recovery using the optimal decoder, as well as a set of necessary conditions that any decoder must satisfy for perfect recovery. This analysis of optimal decoding limits complements our previous work [19] on thresholds for the behavior of $\ell_1$-constrained quadratic programming for Gaussian measurement ensembles.
Iterative Hard Thresholding for Compressed Sensing Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper): it gives near-optimal error guarantees; it is robust to observation noise; it succeeds with a minimum number of observations; it can be used with any sampling operator for which the operator and its adjoint can be computed; the memory requirement is linear in the problem size; its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint; it requires a fixed number of iterations depending only on the logarithm of a form of signal-to-noise ratio of the signal; and its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.
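The update analyzed in this abstract is short enough to state directly in code. The sketch below assumes a measurement matrix with roughly unit-norm columns and uses a fixed step size and iteration count, which are illustrative choices rather than the paper's tuned settings.

```python
import numpy as np

def iht(A, y, k, n_iters=100, step=1.0):
    """Iterative hard thresholding sketch: x <- H_k(x + step * A^T (y - A x))."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        g = x + step * (A.T @ (y - A @ x))   # gradient step on ||y - A x||^2 / 2
        x = np.zeros_like(g)
        top = np.argsort(np.abs(g))[-k:]     # hard threshold: keep the k largest magnitudes
        x[top] = g[top]
    return x
```

With measurement matrices whose entries are i.i.d. Gaussian scaled by 1/sqrt(m), a unit step size is a common default for a sketch like this; per-iteration work is one application of the operator and one of its adjoint, matching the complexity claim in the abstract.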
Random Projections for Manifold Learning We propose a novel method for linear dimensionality reduction of manifold modeled data. First, we show that with a small number M of random projections of sample points in R^N belonging to an unknown K-dimensional Euclidean manifold, the intrinsic dimension (ID) of the sample set can be estimated to high accuracy. Second, we rigorously prove that using only this set of random projections, we can estimate the structure of the underlying manifold. In both cases, the number of random projections required is linear in K and logarithmic in N, meaning that K < M ≪ N. To handle practical situations, we develop a greedy algorithm to estimate the smallest size of the projection space required to perform manifold learning. Our method is particularly relevant in distributed sensing systems and leads to significant potential savings in data acquisition, storage and transmission costs.
Computing with words in decision making: foundations, trends and prospects Computing with Words (CW) methodology has been used in several different environments to narrow the differences between human reasoning and computing. As Decision Making is a typical human mental process, it seems natural to apply the CW methodology in order to create and enrich decision models in which the information that is provided and manipulated has a qualitative nature. In this paper we make a review of the developments of CW in decision making. We begin with an overview of the CW methodology and we explore different linguistic computational models that have been applied to the decision making field. Then we present an historical perspective of CW in decision making by examining the pioneer papers in the field along with its most recent applications. Finally, some current trends, open questions and prospects in the topic are pointed out.
Experimental study of intelligent controllers under uncertainty using type-1 and type-2 fuzzy logic Uncertainty is an inherent part in control systems used in real world applications. The use of new methods for handling incomplete information is of fundamental importance. Type-1 fuzzy sets used in conventional fuzzy systems cannot fully handle the uncertainties present in control systems. Type-2 fuzzy sets that are used in type-2 fuzzy systems can handle such uncertainties in a better way because they provide us with more parameters and more design degrees of freedom. This paper deals with the design of control systems using type-2 fuzzy logic for minimizing the effects of uncertainty produced by the instrumentation elements, environmental noise, etc. The experimental results are divided in two classes, in the first class, simulations of a feedback control system for a non-linear plant using type-1 and type-2 fuzzy logic controllers are presented; a comparative analysis of the systems' response in both cases was performed, with and without the presence of uncertainty. For the second class, a non-linear identification problem for time-series prediction is presented. Based on the experimental results the conclusion is that the best results are obtained using type-2 fuzzy systems.
Design for Variability in DSM Technologies Process-induced parameter variations cause performance fluctuations and are an important consideration in the design of high performance digital ICs. Until recently, it was sufficient to model die-to-die shifts in device (active) and wire (passive) parameters, leading to a natural worst-case design methodology [1, 2]. In the deep-submicron era, however, within-die variations in these same device and wire parameters become just as important. In fact, current integrated circuits are large enough that variations within the die are as large as variations from die-to-die. Furthermore, while die-to-die shifts are substantially independent of the design, within-die variations are profoundly influenced by the detailed physical implementation of the IC. This changes the fundamental view of process variability from something that is imposed on the design by the fabrication process to something that is co-generated between the design and the process. This paper starts by examining the sources and historical trends in device and wire variability, distinguishing between inter-die and intra-die variations, and proposes techniques for design for variability (DOV) in the presence of both types of variations.
Automatic discovery of algorithms for multi-agent systems Automatic algorithm generation for large-scale distributed systems is one of the holy grails of artificial intelligence and agent-based modeling. It has direct applicability in future engineered (embedded) systems, such as mesh networks of sensors and actuators where there is a high need to harness their capabilities via algorithms that have good scalability characteristics. NetLogo has been extensively used as a teaching and research tool by computer scientists, for example for exploring distributed algorithms. Inventing such an algorithm usually involves a tedious reasoning process for each individual idea. In this paper, we report preliminary results in our effort to push the boundary of the discovery process even further, by replacing the classical approach with a guided search strategy that makes use of genetic programming targeting the NetLogo simulator. The effort moves from a manual model implementation to an automated discovery process. The only activity that is required is the implementation of primitives and the configuration of the tool-chain. In this paper, we explore the capabilities of our framework by re-inventing five well-known distributed algorithms.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.112121
0.01916
0.012464
0.007898
0.003105
0.000333
0.000063
0.000004
0
0
0
0
0
0
Algebraic multigrid for stationary and time-dependent partial differential equations with stochastic coefficients We consider the numerical solution of time-dependent partial differential equations (PDEs) with random coefficients. A spectral approach, called stochastic finite element method, is used to compute the statistical characteristics of the solution. This method transforms a stochastic PDE into a coupled system of deterministic equations by means of a Galerkin projection onto a generalized polynomial chaos. An algebraic multigrid (AMG) method is presented to solve the algebraic systems that result after discretization of this coupled system. High-order time integration schemes of an implicit Runge-Kutta type and spatial discretization on unstructured finite element meshes are considered. The convergence properties of the AMG method are demonstrated by a convergence analysis and by numerical tests. Copyright (c) 2008 John Wiley & Sons, Ltd.
Multigrid and sparse-grid schemes for elliptic control problems with random coefficients. A multigrid and sparse-grid computational approach to solving nonlinear elliptic optimal control problems with random coefficients is presented. The proposed scheme combines multigrid methods with sparse-grids collocation techniques. Within this framework the influence of randomness of problem’s coefficients on the control provided by the optimal control theory is investigated. Numerical results of computation of stochastic optimal control solutions and formulation of mean control functions are presented.
Preconditioning Stochastic Galerkin Saddle Point Systems Mixed finite element discretizations of deterministic second-order elliptic PDEs lead to saddle point systems for which the study of iterative solvers and preconditioners is mature. Galerkin approximation of solutions of stochastic second-order elliptic PDEs, which couple standard mixed finite element discretizations in physical space with global polynomial approximation on a probability space, also give rise to linear systems with familiar saddle point structure. For stochastically nonlinear problems, the solution of such systems presents a serious computational challenge. The blocks are sums of Kronecker products of pairs of matrices associated with two distinct discretizations, and the systems are large, reflecting the curse of dimensionality inherent in most stochastic approximation schemes. Moreover, for the problems considered herein, the leading blocks of the saddle point matrices are block-dense, and the cost of a matrix vector product is nontrivial. We implement a stochastic Galerkin discretization for the steady-state diffusion problem written as a mixed first-order system. The diffusion coefficient is assumed to be a lognormal random field, approximated via a nonlinear function of a finite number of Gaussian random variables. We study the resulting saddle point systems and investigate the efficiency of block-diagonal preconditioners of Schur complement and augmented type for use with the minimal residual method (MINRES). By introducing so-called Kronecker product preconditioners, we improve the robustness of cheap, mean-based preconditioners with respect to the statistical properties of the stochastically nonlinear diffusion coefficients.
Stochastic Galerkin Matrices We investigate the structural, spectral, and sparsity properties of Stochastic Galerkin matrices as they arise in the discretization of linear differential equations with random coefficient functions. These matrices are characterized as the Galerkin representation of polynomial multiplication operators. In particular, it is shown that the global Galerkin matrix associated with complete polynomials cannot be diagonalized in the stochastically linear case.
Spectral Methods for Parameterized Matrix Equations. We apply polynomial approximation methods (known in the numerical PDEs context as spectral methods) to approximate the vector-valued function that satisfies a linear system of equations where the matrix and the right-hand side depend on a parameter. We derive both an interpolatory pseudospectral method and a residual-minimizing Galerkin method, and we show how each can be interpreted as solving a truncated infinite system of equations; the difference between the two methods lies in where the truncation occurs. Using classical theory, we derive asymptotic error estimates related to the region of analyticity of the solution, and we present a practical residual error estimate. We verify the results with two numerical examples.
Multi-level Monte Carlo Finite Element method for elliptic PDEs with stochastic coefficients In Monte Carlo methods quadrupling the sample size halves the error. In simulations of stochastic partial differential equations (SPDEs), the total work is the sample size times the solution cost of an instance of the partial differential equation. A Multi-level Monte Carlo method is introduced which allows, in certain cases, to reduce the overall work to that of the discretization of one instance of the deterministic PDE. The model problem is an elliptic equation with stochastic coefficients. Multi-level Monte Carlo errors and work estimates are given both for the mean of the solutions and for higher moments. The overall complexity of computing mean fields as well as k-point correlations of the random solution is proved to be of log-linear complexity in the number of unknowns of a single Multi-level solve of the deterministic elliptic problem. Numerical examples complete the theoretical analysis.
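The multilevel idea, writing E[P_L] as E[P_0] plus a sum of level-by-level corrections E[P_l - P_{l-1}] and spending fewer samples on the finer levels, can be illustrated with a scalar toy problem. The placeholder sample_P below stands in for a PDE solve on mesh level l and, like the chosen sample sizes, is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_P(level, omega):
    """Placeholder level-l approximation P_l(omega) of a quantity of interest.
    The bias decays with the level; in the paper this would be a finite element
    solve of the stochastic elliptic problem on mesh level l."""
    return np.sin(omega) + 2.0 ** (-level) * np.cos(3 * omega)

def mlmc_estimate(L, N):
    """Multilevel Monte Carlo: E[P_L] ~= E[P_0] + sum_l E[P_l - P_{l-1}]."""
    est = 0.0
    for level in range(L + 1):
        omegas = rng.standard_normal(N[level])            # random inputs for this level
        fine = sample_P(level, omegas)
        coarse = sample_P(level - 1, omegas) if level > 0 else 0.0
        est += np.mean(fine - coarse)                     # correction term at this level
    return est

# Geometrically decreasing sample sizes across levels, as in standard MLMC practice.
print(mlmc_estimate(L=4, N=[4000, 2000, 1000, 500, 250]))
```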
Solving PDEs with Intrepid Intrepid is a Trilinos package for advanced discretizations of Partial Differential Equations (PDEs). The package provides a comprehensive set of tools for local, cell-based construction of a wide range of numerical methods for PDEs. This paper describes the mathematical ideas and software design principles incorporated in the package. We also provide representative examples showcasing the use of Intrepid both in the context of numerical PDEs and the more general context of data analysis.
The exponent of discrepancy of sparse grids is at least 2.1933 We study bounds on the exponents of sparse grids for $L_2$-discrepancy and average-case d-dimensional integration with respect to the Wiener sheet measure. Our main result is that the minimal exponent of sparse grids for these problems is bounded from below by 2.1933. This shows that sparse grids provide a rather poor exponent since, due to Wasilkowski and Woźniakowski [16], the minimal exponent of $L_2$-discrepancy of arbitrary point sets is at most 1.4778. The proof of the latter, however, is non-constructive. The best known constructive upper bound is still obtained by a particular sparse grid and equal to 2.4526...
Low-Rank Tensor Krylov Subspace Methods for Parametrized Linear Systems We consider linear systems $A(\alpha) x(\alpha) = b(\alpha)$ depending on possibly many parameters $\alpha = (\alpha_1,\ldots,\alpha_p)$. Solving these systems simultaneously for a standard discretization of the parameter range would require a computational effort growing drastically with the number of parameters. We show that a much lower computational effort can be achieved for sufficiently smooth parameter dependencies. For this purpose, computational methods are developed that benefit from the fact that $x(\alpha)$ can be well approximated by a tensor of low rank. In particular, low-rank tensor variants of short-recurrence Krylov subspace methods are presented. Numerical experiments for deterministic PDEs with parametrized coefficients and stochastic elliptic PDEs demonstrate the effectiveness of our approach.
Counter braids: a novel counter architecture for per-flow measurement Fine-grained network measurement requires routers and switches to update large arrays of counters at very high link speed (e.g. 40 Gbps). A naive algorithm needs an infeasible amount of SRAM to store both the counters and a flow-to-counter association rule, so that arriving packets can update corresponding counters at link speed. This has made accurate per-flow measurement complex and expensive, and motivated approximate methods that detect and measure only the large flows. This paper revisits the problem of accurate per-flow measurement. We present a counter architecture, called Counter Braids, inspired by sparse random graph codes. In a nutshell, Counter Braids "compresses while counting". It solves the central problems (counter space and flow-to-counter association) of per-flow measurement by "braiding" a hierarchy of counters with random graphs. Braiding results in drastic space reduction by sharing counters among flows; and using random graphs generated on-the-fly with hash functions avoids the storage of flow-to-counter association. The Counter Braids architecture is optimal (albeit with a complex decoder) as it achieves the maximum compression rate asymptotically. For implementation, we present a low-complexity message passing decoding algorithm, which can recover flow sizes with essentially zero error. Evaluation on Internet traces demonstrates that almost all flow sizes are recovered exactly with only a few bits of counter space per flow.
Efficient Iterative Time Preconditioners for Harmonic Balance RF Circuit Simulation Efficient iterative time preconditioners for Krylov-based harmonic balance circuit simulators are proposed. Some numerical experiments assess their performance relative to the well-known block-diagonal frequency preconditioner and the previously proposed time preconditioner.
Fuzzy Logic From The Logical Point of View Fuzzy logic is analyzed from the point of view of formal logic; the underlying calculi and their properties are surveyed and applications are discussed.
Learning parameters of linear models in compressed parameter space We present a novel method of reducing the training time by learning the parameters of the model at hand in compressed parameter space. In compressed parameter space the parameters of the model are represented by fewer parameters, and hence training can be faster. After training, the parameters of the model can be generated from the parameters in compressed parameter space. We show that for supervised learning, learning the parameters of a model in compressed parameter space is equivalent to learning parameters of the model in compressed input space. We have applied our method to a supervised learning domain and show that a solution can be obtained at a much faster speed than learning in uncompressed parameter space. For reinforcement learning, we show empirically that directly searching the parameters of a policy in compressed parameter space accelerates learning.
An Interval-Valued Intuitionistic Fuzzy Rough Set Model Given the widespread interest in rough sets as applied to various tasks of data analysis, it is not surprising at all that we have witnessed a wave of further generalizations and algorithmic enhancements of this original concept. This paper proposes an interval-valued intuitionistic fuzzy rough model by means of integrating the classical Pawlak rough set theory with the interval-valued intuitionistic fuzzy set theory. Firstly, some concepts and properties of interval-valued intuitionistic fuzzy sets and interval-valued intuitionistic fuzzy relations are introduced. Secondly, a pair of lower and upper interval-valued intuitionistic fuzzy rough approximation operators induced from an interval-valued intuitionistic fuzzy relation is defined, and some properties of the approximation operators are investigated in detail. Furthermore, by introducing cut sets of interval-valued intuitionistic fuzzy sets, classical representations of interval-valued intuitionistic fuzzy rough approximation operators are presented. Finally, the connections between special interval-valued intuitionistic fuzzy relations and interval-valued intuitionistic fuzzy rough approximation operators are constructed, and the relationships between this model and other rough set models are also examined.
1.043206
0.014809
0.0122
0.010075
0.008004
0.001543
0.000222
0.000051
0.000002
0
0
0
0
0
A decision support model for group decision making with hesitant multiplicative preference relations The hesitant fuzzy preference relation (HFPR) was recently introduced by Zhu and Xu to allow the decision makers (DMs) to offer several possible preference values over two alternatives. In this paper, we use an asymmetrical scale (Saaty's 1-9 scale) to express the decision makers' preference information instead of the symmetrical scale that is found in a HFPR, and we introduce a new preference structure that is known as the hesitant multiplicative preference relation (HMPR). Each element of the HMPR is characterized by several possible preference values from the closed interval [1/9, 9]; thus, it can model decision makers' hesitation more accurately and reflect people's intuitions more objectively. Furthermore, we develop a consistency- and consensus-based decision support model for group decision making (GDM) with hesitant multiplicative preference relations (HMPRs). In this model, an individual consistency index is defined to measure the degree of deviation between an HMPR and its consistent HMPR, and a consistency improving process is designed to convert an unacceptably consistent HMPR to an acceptably consistent HMPR. Additionally, a group consensus index is introduced to measure the degree of deviation between the individual HMPRs and the group HMPR, and a consensus-reaching process is provided to help the individual HMPRs achieve a predefined consensus level. Finally, a numerical example is provided to demonstrate the practicality and effectiveness of the developed model.
A decision support model for group decision making with hesitant fuzzy preference relations In this paper, we develop a decision support model that simultaneously addresses the consistency and consensus for group decision making based on hesitant fuzzy preference relations. The concepts of a consistency index and a consensus index are introduced. Two convergent algorithms are proposed in the developed support model. The first algorithm is used to convert an unacceptable hesitant fuzzy preference relation to an acceptable one. The second algorithm is utilized to help the group reach a predefined consensus level. The main characteristic of the developed model is that it makes each hesitant fuzzy preference relation maintain acceptable consistency when the predefined consensus level is achieved. Several illustrative examples are given to illustrate the effectiveness and practicality of the developed model.
Group decision making with 2-tuple intuitionistic fuzzy linguistic preference relations The aim of this paper is to propose a new type of preference relation, the intuitionistic fuzzy linguistic preference relation (IFLPR). Taking as base the 2-tuple fuzzy linguistic representation model, we introduce the definition of the IFLPR, and its transitivity properties. We present an approach to group decision making based on IFLPRs and incomplete-IFLPRs, respectively. The score function and accuracy function are applied to the ranking and selection of alternatives. Finally, we give an example of IFLPRs in group decision making, and a comparative of the exploitation of the IFLPR with the exploitation of the traditional fuzzy linguistic preference relations.
On multi-granular fuzzy linguistic modeling in group decision making problems: A systematic review and future trends. The multi-granular fuzzy linguistic modeling allows the use of several linguistic term sets in fuzzy linguistic modeling. This is quite useful when the problem involves several people with different knowledge levels since they could describe each item with different precision and they could need more than one linguistic term set. Multi-granular fuzzy linguistic modeling has been frequently used in group decision making field due to its capability of allowing each expert to express his/her preferences using his/her own linguistic term set. The aim of this research is to provide insights about the evolution of multi-granular fuzzy linguistic modeling approaches during the last years and discuss their drawbacks and advantages. A systematic literature review is proposed to achieve this goal. Additionally, some possible approaches that could improve the current multi-granular linguistic methodologies are presented.
The concept of a linguistic variable and its application to approximate reasoning-III By a linguistic variable we mean a variable whose values are words or sentences in a natural or artificial language. For example, Age is a linguistic variable if its values are linguistic rather than numerical, i.e., young, not young, very young, quite young, old, not very old and not very young, etc., rather than 20, 21, 22, 23, ... In more specific terms, a linguistic variable is characterized by a quintuple (𝒳, T(𝒳), U, G, M) in which 𝒳 is the name of the variable; T(𝒳) is the term-set of 𝒳, that is, the collection of its linguistic values; U is a universe of discourse; G is a syntactic rule which generates the terms in T(𝒳); and M is a semantic rule which associates with each linguistic value X its meaning, M(X), where M(X) denotes a fuzzy subset of U. The meaning of a linguistic value X is characterized by a compatibility function, c : U → [0, 1], which associates with each u in U its compatibility with X. Thus, the compatibility of age 27 with young might be 0.7, while that of 35 might be 0.2. The function of the semantic rule is to relate the compatibilities of the so-called primary terms in a composite linguistic value, e.g., young and old in not very young and not very old, to the compatibility of the composite value. To this end, the hedges such as very, quite, extremely, etc., as well as the connectives and and or are treated as nonlinear operators which modify the meaning of their operands in a specified fashion.
Statistical timing based on incomplete probabilistic descriptions of parameter uncertainty Existing approaches to timing analysis under uncertainty are based on restrictive assumptions. Statistical STA techniques assume that the full probabilistic distribution of parameter uncertainty is available; in reality, the complete probabilistic description often cannot be obtained. In this paper, a new paradigm for parameter uncertainty description is proposed as a way to consistently and rigorously handle partially available descriptions of parameter uncertainty. The paradigm is based on a theory of interval probabilistic models that permit handling uncertainty that is described in a distribution-free mode - just via the range, the mean, and the variance. This permits effectively handling multiple real-life challenges, including imprecise and limited information about the distributions of process parameters, parameters coming from different populations, and the sources of uncertainty that are too difficult to handle via full probabilistic measures (e.g. on-chip supply voltage variation). Specifically, analytical techniques for bounding the distributions of probabilistic interval variables are proposed. Besides, a provably correct strategy for fast Monte Carlo simulation based on probabilistic interval variables is introduced. A path-based timing algorithm implementing the novel modeling paradigm, as well as handling the traditional variability descriptions, has been developed. The results indicate the proposed algorithm can improve the upper bound of the 90(th)-percentile circuit delay, on average, by 5.3% across the ISCAS'85 benchmark circuits, compared to the worst-case timing estimates that use only the interval information of the partially specified parameters.
Fuzzy set methods for qualitative and natural language oriented simulation The author discusses the approach of using fuzzy set theory to create a formal way of viewing the qualitative simulation of models whose states, inputs, outputs, and parameters are uncertain. Simulation was performed using detailed and accurate models, and it was shown how input and output trajectories could reflect linguistic (or qualitative) changes in a system. Uncertain variables are encoded using triangular fuzzy numbers, and three distinct fuzzy simulation approaches (Monte Carlo, correlated and uncorrelated) are defined. The methods discussed are also valid for discrete event simulation; experiments have been performed on the fuzzy simulation of a single server queuing model. In addition, an existing C-based simulation toolkit, SimPack, was augmented to include the capabilities for modeling using fuzzy arithmetic and linguistic association, and a C++ class definition was coded for fuzzy number types
A training algorithm for optimal margin classifiers A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of the classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms.
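Margin-maximizing training of the kind described in this abstract is available in standard libraries; the sketch below uses scikit-learn's SVC with a polynomial kernel on synthetic data and inspects the supporting patterns (support vectors). The data, kernel degree and regularization constant are arbitrary illustrative choices, not the paper's experimental setup.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)   # nonlinearly separable labels

clf = SVC(kernel="poly", degree=3, C=10.0)            # margin-maximizing classifier, polynomial kernel
clf.fit(X, y)

# The decision function is a combination of the supporting patterns closest to the boundary.
print("support vectors per class:", clf.n_support_)
print("training accuracy:", clf.score(X, y))
```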
A Bayesian approach to image expansion for improved definition. Accurate image expansion is important in many areas of image analysis. Common methods of expansion, such as linear and spline techniques, tend to smooth the image data at edge regions. This paper introduces a method for nonlinear image expansion which preserves the discontinuities of the original image, producing an expanded image with improved definition. The maximum a posteriori (MAP) estimation techniques that are proposed for noise-free and noisy images result in the optimization of convex functionals. The expanded images produced from these methods will be shown to be aesthetically and quantitatively superior to images expanded by the standard methods of replication, linear interpolation, and cubic B-spline expansion.
A simple Cooperative diversity method based on network path selection Cooperative diversity has been recently proposed as a way to form virtual antenna arrays that provide dramatic gains in slow fading wireless environments. However, most of the proposed solutions require distributed space-time coding algorithms, the careful design of which is left for future investigation if there is more than one cooperative relay. We propose a novel scheme that alleviates these problems and provides diversity gains on the order of the number of relays in the network. Our scheme first selects the best relay from a set of M available relays and then uses this "best" relay for cooperation between the source and the destination. We develop and analyze a distributed method to select the best relay that requires no topology information and is based on local measurements of the instantaneous channel conditions. This method also requires no explicit communication among the relays. The success (or failure) to select the best available path depends on the statistics of the wireless channel, and a methodology to evaluate performance for any kind of wireless channel statistics, is provided. Information theoretic analysis of outage probability shows that our scheme achieves the same diversity-multiplexing tradeoff as achieved by more complex protocols, where coordination and distributed space-time coding for M relay nodes is required, such as those proposed by Laneman and Wornell (2003). The simplicity of the technique allows for immediate implementation in existing radio hardware and its adoption could provide for improved flexibility, reliability, and efficiency in future 4G wireless systems.
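The selection rule at the heart of this abstract, choosing the relay whose weaker hop is strongest, is easy to sketch. The Rayleigh-fading channel draws (exponentially distributed power gains) are an illustrative assumption, and the min-based bottleneck metric below is one of the selection criteria discussed in this line of work.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 8                                    # number of candidate relays

# Instantaneous channel power gains for the source->relay and relay->destination hops
# (unit-mean Rayleigh fading, i.e., exponentially distributed power gains).
g_sr = rng.exponential(scale=1.0, size=M)
g_rd = rng.exponential(scale=1.0, size=M)

bottleneck = np.minimum(g_sr, g_rd)      # each relay is limited by its weaker hop
best_relay = int(np.argmax(bottleneck))  # opportunistic selection of the single best relay
print(best_relay, bottleneck[best_relay])
```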
Is Gauss Quadrature Better than Clenshaw-Curtis? We compare the convergence behavior of Gauss quadrature with that of its younger brother, Clenshaw-Curtis. Seven-line MATLAB codes are presented that implement both methods, and experiments show that the supposed factor-of-2 advantage of Gauss quadrature is rarely realized. Theorems are given to explain this effect. First, following O'Hara and Smith in the 1960s, the phenomenon is explained as a consequence of aliasing of coefficients in Chebyshev expansions. Then another explanation is offered based on the interpretation of a quadrature formula as a rational approximation of $\log((z+1)/(z-1))$ in the complex plane. Gauss quadrature corresponds to Padé approximation at $z=\infty$. Clenshaw-Curtis quadrature corresponds to an approximation whose order of accuracy at $z=\infty$ is only half as high, but which is nevertheless equally accurate near $[-1,1]$.
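The comparison described in this abstract is easy to reproduce numerically. The sketch below builds Clenshaw-Curtis nodes and weights with the standard spectral construction (a Python port of the well-known clencurt recipe) and compares them against NumPy's Gauss-Legendre rule; the test integrand exp(x) and the node counts are arbitrary illustrative choices.

```python
import numpy as np

def clencurt(n):
    """Clenshaw-Curtis nodes and weights on [-1, 1], n >= 2 (port of the clencurt recipe)."""
    theta = np.pi * np.arange(n + 1) / n
    x = np.cos(theta)
    w = np.zeros(n + 1)
    ii = np.arange(1, n)                 # interior node indices
    v = np.ones(n - 1)
    if n % 2 == 0:
        w[0] = w[n] = 1.0 / (n**2 - 1)
        for k in range(1, n // 2):
            v -= 2.0 * np.cos(2 * k * theta[ii]) / (4 * k**2 - 1)
        v -= np.cos(n * theta[ii]) / (n**2 - 1)
    else:
        w[0] = w[n] = 1.0 / n**2
        for k in range(1, (n - 1) // 2 + 1):
            v -= 2.0 * np.cos(2 * k * theta[ii]) / (4 * k**2 - 1)
    w[ii] = 2.0 * v / n
    return x, w

f = lambda t: np.exp(t)                  # test integrand; exact integral on [-1, 1] is e - 1/e
exact = np.e - 1.0 / np.e
for n in (4, 8, 16):
    xg, wg = np.polynomial.legendre.leggauss(n)   # n-point Gauss-Legendre
    xc, wc = clencurt(n)                          # (n+1)-point Clenshaw-Curtis
    print(n, abs(wg @ f(xg) - exact), abs(wc @ f(xc) - exact))
```

Printing the two errors side by side makes it easy to check how often the supposed factor-of-2 advantage of Gauss quadrature actually materializes for a given integrand.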
Directional relative position between objects in image processing: a comparison between fuzzy approaches The importance of describing relationships between objects has been highlighted in works in very different areas, including image understanding. Among these relationships, directional relative position relations are important since they provide an important information about the spatial arrangement of objects in the scene. Such concepts are rather ambiguous, they defy precise definitions, but human beings have a rather intuitive and common way of understanding and interpreting them. Therefore in this context, fuzzy methods are appropriate to provide consistent definitions that integrate both quantitative and qualitative knowledge, thus providing a computational representation and interpretation of imprecise spatial relations, expressed in a linguistic way, and including quantitative knowledge. Several fuzzy approaches have been developed in the literature, and the aim of this paper is to review and compare them according to their properties and according to the types of questions they seek to answer.
Fuzzy modeling of system behavior for risk and reliability analysis The main objective of the article is to permit reliability analysts/engineers/managers/practitioners to analyze the failure behavior of a system in a more consistent and logical manner. To this effect, the authors propose a methodological and structured framework, which makes use of both qualitative and quantitative techniques for risk and reliability analysis of the system. The framework has been applied to model and analyze a complex industrial system from a paper mill. In the quantitative framework, after developing the Petri net model of the system, the fuzzy synthesis of failure and repair data (using fuzzy arithmetic operations) has been done. Various system parameters of managerial importance such as repair time, failure rate, mean time between failures, availability, and expected number of failures are computed to quantify the behavior in terms of fuzzy, crisp and defuzzified values. Further, to improve upon the reliability and maintainability characteristics of the system, an in-depth qualitative analysis of the system is carried out using failure mode and effect analysis (FMEA) by listing out all possible failure modes, their causes and effect on system performance. To address the limitations of the traditional FMEA method based on the risk priority number score, a risk ranking approach based on fuzzy and Grey relational analysis is proposed to prioritize failure causes.
Implementing Competitive Learning in a Quantum System Ideas from quantum computation are applied to the field of neural networks to produce competitive learning in a quantum system. The resulting quantum competitive learner has a prototype storage capacity that is exponentially greater than that of its classical counterpart. Further, empirical results from simulation of the quantum competitive learning system on real-world data sets demonstrate the quantum system's potential for excellent performance.
1.2
0.2
0.05
0.028571
0.000075
0
0
0
0
0
0
0
0
0
A multi-dimensional view of QoE: the ARCU model Understanding and modeling the wide range of influence factors that impact end user Quality of Experience (QoE) and go beyond traditional Quality of Service (QoS) parameters has become an important issue for service and network providers, in particular for new and emerging services. In this paper we present a generic ARCU (Application-Resource-Context-User) Model which categorizes influence factors into four multi-dimensional spaces. The model further maps points from these spaces to a multi-dimensional QoE space, representing both qualitative and quantitative QoE metrics. We discuss examples of applying the ARCU Model in practice, and identify key challenges.
OTT-ISP joint service management: A Customer Lifetime Value based approach. In this work, we propose a QoE-aware collaboration approach between Over-The-Top providers (OTT) and Internet Service Providers (ISP) based on the maximization of profit by considering the user churn of Most Profitable Customers (MPCs), which are classified in terms of the Customer Lifetime Value (CLV). The contribution of this work is multifold. Firstly, we investigate the different perspectives of ISPs and OTTs regarding QoE management and why they should collaborate. Secondly, we investigate the current ongoing collaboration scenarios in the multimedia industry. Thirdly, we propose the QoE-aware collaboration framework based on the CLV, which includes the interfaces for information sharing between OTTs and ISPs and the use of Content Delivery Networks (CDN) and surrogate servers. Finally, we provide simulation results aiming at demonstrating that a higher profit is achieved when collaboration is introduced, by engaging more MPCs than current solutions do.
Qualia: A Multilayer Solution For QoE Passive Monitoring At The User Terminal This paper focuses on passive Quality of Experience (QoE) monitoring at user end devices as a necessary activity of the ISP (Internet Service Provider) for an effective quality-based service delivery. The contribution of the work is threefold. Firstly, we highlight the opportunities and challenges for the QoE monitoring of Over-The-Top (OTT) applications while investigating the available interfaces for monitoring the deployed applications at the end device. Secondly, we propose a multilayer passive QoE monitor for OTT applications at the user terminal from the ISP's perspective. Five layers are considered: user profile, context, resource, application and network layers. Thirdly, we consider YouTube as a case study for OTT video streaming applications in our experiments for analyzing the impact of the monitoring cycle on the user end device resources, such as the battery, RAM and CPU utilization at the end user device.
A survey on QoE-oriented wireless resources scheduling Future wireless systems are expected to provide a wide range of services to more and more users. Advanced scheduling strategies thus arise not only to perform efficient radio resource management, but also to provide fairness among the users. On the other hand, the users’ perceived quality, i.e., Quality of Experience (QoE), is becoming one of the main drivers within the schedulers design. In this context, this paper starts by providing a comprehension of what is QoE and an overview of the evolution of wireless scheduling techniques. Afterwards, a survey on the most recent QoE-based scheduling strategies for wireless systems is presented, highlighting the application/service of the different approaches reported in the literature, as well as the parameters that were taken into account for QoE optimization. Therefore, this paper aims at helping readers interested in learning the basic concepts of QoE-oriented wireless resources scheduling, as well as getting in touch with its current research frontier.
QoE-driven mobility management — Integrating the users' quality perception into network-level decision making One of the most interesting applications of single-sided Quality of Experience (QoE) metrics is their use in improving the quality of the service, as perceived by the user. This can be done either at the application level (by, for example, changing the encoding in use, or the level of error correction applied) or at the network level, for instance by choosing a different DiffServ marking strategy, or changing the access network in use. To this end, the QoE metric used needs to be fast and accurate, and the context in which the application will be used needs to provide the opportunity for performing some sort of control operation. In this paper we describe the application of QoE estimations for VoIP to improve existing network-level mobility management solutions.
QoE-aware resource allocation for adaptive device-to-device video streaming The continuing advances in the storage and transmission abilities of user equipment have made it possible to share videos through device-to-device communications, which may be an efficient way to enhance the capacity of cellular networks to provide wireless video services. In adaptive D2D video streaming, user experience is greatly influenced by the quality and fluency of the video, which is affected by the D2D link's quality. Additionally, the quality of D2D links relies on the resource allocation scheme for D2D pairs. To improve the quality of experience in D2D video streaming, we propose a QoE-aware resource allocation scheme for adaptive D2D video streaming. The QoE-aware resource allocation scheme has the ability to cater to the user experience in adaptive video streaming while considering the co-channel interference derived from frequency reuse in D2D communications. Specifically, a dynamic network scheduling problem is formulated and solved, with the objective of maximizing the video quality while maintaining the long-term stable performance of fluency during video playback. Extensive numerical results demonstrate that the proposed QoE-aware resource allocation scheme outperforms the QoE-oblivious resource allocation scheme.
Traffic monitoring and analysis for the optimization of a 3G network Recent years have recorded a surge of research activities on IP traffic monitoring, enabled by the availability of monitoring hardware and large-scale storage at accessible costs. More recently, passive monitoring has been applied to operational 3G networks. The passive observation of network traffic, coupled with advanced traffic-analysis methods, can be a powerful and cost-effective means to infer the network status and localize points of performance degradation without requiring complete access to all network elements. Furthermore, the availability of high-quality traces can be exploited to predict the load of the network under hypothetical conditions, variations of the actual network configuration at the capturing time. Both approaches can be useful for some engineering and reoptimization tasks that are commonly encountered in the lifetime of an operational 3G network. In abstract terms, the availability of high-quality traces can greatly empower the measurement-based optimization cycle, with human experts in the loop, thus driving an already operational 3G network toward improved performances. In this article we discuss the contribution that traffic monitoring and analysis (TMA) can provide to the optimization of an operational 3G network
Techniques for measuring quality of experience Quality of Experience (QoE) relates to how users perceive the quality of an application. To capture such a subjective measure, either by subjective tests or via objective tools, is an art on its own. Given the importance of measuring users’ satisfaction to service providers, research on QoE took flight in recent years. In this paper we present an overview of various techniques for measuring QoE, thereby mostly focusing on freely available tools and methodologies.
Overview of the Scalable Video Coding Extension of the H.264/AVC Standard With the introduction of the H.264/AVC video coding standard, significant improvements have recently been demonstrated in video compression capability. The Joint Video Team of the ITU-T VCEG and the ISO/IEC MPEG has now also standardized a Scalable Video Coding (SVC) extension of the H.264/AVC standard. SVC enables the transmission and decoding of partial bit streams to provide video services with lower temporal or spatial resolutions or reduced fidelity while retaining a reconstruction quality that is high relative to the rate of the partial bit streams. Hence, SVC provides functionalities such as graceful degradation in lossy transmission environments as well as bit rate, format, and power adaptation. These functionalities provide enhancements to transmission and storage applications. SVC has achieved significant improvements in coding efficiency with an increased degree of supported scalability relative to the scalable profiles of prior video coding standards. This paper provides an overview of the basic concepts for extending H.264/AVC towards SVC. Moreover, the basic tools for providing temporal, spatial, and quality scalability are described in detail and experimentally analyzed regarding their efficiency and complexity.
Wireless Communication
Semantics of concurrent systems: a modular fixed-point trace approach A method for finding the set of processes generated by a concurrent system (the behaviour of a system) in a modular way is presented. A system is decomposed into modules with behaviours assumed to be known, and then the behaviours are successively put together, finally giving the initial system behaviour. It is shown that there is much freedom in the choice of modules; in the extreme case, atoms of a system, i.e. subsystems containing only one resource, can be taken as modules; each atom has its behaviour defined a priori. The basic operation used for composing behaviours is the synchronization operation defined in the paper. The fixed-point method of describing sets of processes is extensively applied, with processes regarded as traces rather than strings of actions.
Data mining with sparse grids using simplicial basis functions Recently we presented a new approach [18] to the classification problem arising in data mining. It is based on the regularization network approach but, in contrast to other methods which employ ansatz functions associated to data points, we use a grid in the usually high-dimensional feature space for the minimization process. To cope with the curse of dimensionality, we employ sparse grids [49]. Thus, only $O(h_n^{-1} n^{d-1})$ instead of $O(h_n^{-d})$ grid points and unknowns are involved. Here d denotes the dimension of the feature space and $h_n = 2^{-n}$ gives the mesh size. We use the sparse grid combination technique [28] where the classification problem is discretized and solved on a sequence of conventional grids with uniform mesh sizes in each dimension. The sparse grid solution is then obtained by linear combination. In contrast to our former work, where d-linear functions were used, we now apply linear basis functions based on a simplicial discretization. This allows us to handle more dimensions and the algorithm needs fewer operations per data point. We describe the sparse grid combination technique for the classification problem, give implementational details and discuss the complexity of the algorithm. It turns out that the method scales linearly with the number of given data points. Finally we report on the quality of the classifier built by our new method on data sets with up to 10 dimensions. It turns out that our new method achieves correctness rates which are competitive with those of the best existing methods.
Fuzzy aesthetic semantics description and extraction for art image retrieval More and more digitized art images are accumulated and expanded in our daily life, and techniques need to be established for how to organize and retrieve them. Though content-based image retrieval (CBIR) has made great progress, the current low-level visual information based retrieval technology in CBIR does not allow users to search images by high-level semantics for art image retrieval. We propose a fuzzy approach to describe and to extract the fuzzy aesthetic semantic feature of art images. Aiming to deal with the subjectivity and vagueness of human aesthetic perception, we utilize the linguistic variable to describe the image aesthetic semantics, so it becomes possible to depict images in linguistic expressions such as 'very action'. Furthermore, we apply a neural network approach to model the process of human aesthetic perception and to extract the fuzzy aesthetic semantic feature vector. The art image retrieval system based on the fuzzy aesthetic semantic feature allows users to more naturally search for desired images by linguistic expression. We report extensive empirical studies based on a 5000-image set, and experimental results demonstrate that the proposed approach achieves excellent performance in terms of retrieval accuracy.
A Comparative Analysis Of Symbolic Linguistic Computational Models There are many situations in which problems deal with vague and imprecise information. In such cases, the information could be modelled by means of numbers; however, it does not seem logical to model imprecise information in a precise way. Therefore, linguistic modelling has been used with successful results in these problems. The use of linguistic information involves the need to carry out processes which operate with words, so-called Computing with Words (CW). In the literature there exist different linguistic approaches and different computational models. In this contribution we focus on the use of the fuzzy linguistic approach (FLA) to model vague and imprecise information, and more specifically on its computational models, paying particular attention to the different symbolic computational models that have been defined to deal with linguistic information. We review their main features and make a comparative analysis among them.
1.021035
0.022631
0.022631
0.022
0.020396
0.012261
0.006885
0.002933
0.000317
0.000002
0
0
0
0
Application of fuzzy logic to approximate reasoning using linguistic synthesis This paper describes an application of fuzzy logic in designing controllers for industrial plants. Fuzzy logic is used to synthesise the linguistic control protocol of a skilled operator. The method has been applied to pilot scale plants as well as in a practical industrial situation. The merits of this method and its usefulness to control engineering are discussed. This work also illustrates the potential for using fuzzy logic in modelling and decision making. An avenue for further work in this area is described, where the need is to go beyond a purely descriptive approach and explore means by which a prescriptive system may be implemented.
Fuzzy logic control with genetic membership function parameters optimization for the output regulation of a servomechanism with nonlinear backlash The paper presents a hybrid architecture, which combines Type-1 or Type-2 fuzzy logic system (FLS) and genetic algorithms (GAs) for the optimization of the membership function (MF) parameters of FLS, in order to solve to the output regulation problem of a servomechanism with nonlinear backlash. In this approach, the fuzzy rule base is predesigned by experts of this problem. The proposed method is far from trivial because of nonminimum phase properties of the system. The simulation results illustrate the effectiveness of the optimized closed-loop system.
Building fuzzy inference systems with a new interval type-2 fuzzy logic toolbox This paper presents the development and design of a graphical user interface and a command line programming Toolbox for the construction, edition and simulation of Interval Type-2 Fuzzy Inference Systems. The Interval Type-2 Fuzzy Logic System (IT2FLS) Toolbox is an environment for interval type-2 fuzzy logic inference system development. Tools that cover the different phases of the fuzzy system design process, from the initial description phase to the final implementation phase, constitute the Toolbox. The Toolbox's best qualities are the capacity to develop complex systems and the flexibility that allows the user to extend the availability of functions for working with type-2 fuzzy operators, linguistic variables, interval type-2 membership functions, defuzzification methods and the evaluation of Interval Type-2 Fuzzy Inference Systems.
Contrast of a fuzzy relation In this paper we address a key problem in many fields: how a structured data set can be analyzed in order to take into account the neighborhood of each individual datum. We propose representing the dataset as a fuzzy relation, associating a membership degree with each element of the relation. We then introduce the concept of interval-contrast, a means of aggregating information contained in the immediate neighborhood of each element of the fuzzy relation. The interval-contrast measures the range of membership degrees present in each neighborhood. We use interval-contrasts to define the necessary properties of a contrast measure, construct several different local contrast and total contrast measures that satisfy these properties, and compare our expressions to other definitions of contrast appearing in the literature. Our theoretical results can be applied to several different fields. In Appendix A, we apply our contrast expressions to photographic images.
Looking for a good fuzzy system interpretability index: An experimental approach Interpretability is acknowledged as the main advantage of fuzzy systems and it should be given a main role in fuzzy modeling. Classical systems are viewed as black boxes because mathematical formulas set the mapping between inputs and outputs. On the contrary, fuzzy systems (if they are built regarding some constraints) can be seen as gray boxes in the sense that every element of the whole system can be checked and understood by a human being. Interpretability is essential for those applications with high human interaction, for instance decision support systems in fields like medicine, economics, etc. Since interpretability is not guaranteed by definition, a huge effort has been done to find out the basic constraints to be superimposed during the fuzzy modeling process. People talk a lot about interpretability but the real meaning is not clear. Understanding of fuzzy systems is a subjective task which strongly depends on the background (experience, preferences, and knowledge) of the person who makes the assessment. As a consequence, although there have been a few attempts to define interpretability indices, there is still not a universal index widely accepted. As part of this work, with the aim of evaluating the most used indices, an experimental analysis (in the form of a web poll) was carried out yielding some useful clues to keep in mind regarding interpretability assessment. Results extracted from the poll show the inherent subjectivity of the measure because we collected a huge diversity of answers completely different at first glance. However, it was possible to find out some interesting user profiles after comparing carefully all the answers. It can be concluded that defining a numerical index is not enough to get a widely accepted index. Moreover, it is necessary to define a fuzzy index easily adaptable to the context of each problem as well as to the user quality criteria.
Genetic tuning of fuzzy rule deep structures preserving interpretability and its interaction with fuzzy rule set reduction Tuning fuzzy rule-based systems for linguistic fuzzy modeling is an interesting and widely developed task. It involves adjusting some of the components of the knowledge base without completely redefining it. This contribution introduces a genetic tuning process for jointly fitting the fuzzy rule symbolic representations and the meaning of the involved membership functions. To adjust the former component, we propose the use of linguistic hedges to perform slight modifications keeping a good interpretability. To alter the latter component, two different approaches changing their basic parameters and using nonlinear scaling factors are proposed. As the accomplished experimental study shows, the good performance of our proposal mainly lies in the consideration of this tuning approach performed at two different levels of significance. The paper also analyzes the interaction of the proposed tuning method with a fuzzy rule set reduction process. A good interpretability-accuracy tradeoff is obtained combining both processes with a sequential scheme: first reducing the rule set and subsequently tuning the model.
Simulation of the bird age-structured population growth based on an interval type-2 fuzzy cellular structure In this paper an age-structured population growth model, based on a fuzzy cellular structure, is proposed. An age-structured population growth model enables a better description of population dynamics. In this paper, the dynamics of a particular bird species is considered. The dynamics is governed by the variation of natality, mortality and emigration rates, which in this work are evaluated using an interval type-2 fuzzy logic system. The use of type-2 fuzzy logic enables handling the effects caused by environmental heterogeneity on the population. A set of fuzzy rules about population growth is derived from the interpretation of the ecological laws and the bird life cycle. The proposed model is formulated using discrete mathematics within the framework of a fuzzy cellular structure. The fuzzy cellular structure allows us to visualize the evolution of the population's spatial dynamics. The spatial distribution of the population has a deep effect on its dynamics. Moreover, the model enables us not only to estimate the percentage of occupation of the cellular space when the species reaches its stable equilibrium level, but also to observe the occupation patterns.
A 2uFunction representation for non-uniform type-2 fuzzy sets: Theory and design The theoretical and computational complexities involved in non-uniform type-2 fuzzy sets (T2 FSs) are the main obstacles to applying these sets to the modeling of high-order uncertainties. To reduce the complexities, this paper introduces a 2uFunction representation for T2 FSs. This representation draws on ideas from probability theory. By using this representation, any non-uniform T2 FS can be represented by a function of two uniform T2 FSs. In addition, any non-uniform T2 fuzzy logic system (FLS) can be indirectly designed by means of two uniform T2 FLSs. In particular, a 2uFunction-based trapezoid T2 FLS is designed. Then, it is applied to the problem of forecasting Mackey-Glass time series corrupted by two kinds of noise sources: (1) stationary and (2) non-stationary additive noises. Finally, the performance of the proposed FLS is compared with (1) other types of FLS: a T1 FLS and a uniform T2 FLS, and (2) other studies: ANFIS [54], IT2FNN-1 [54], T2SFLS [3] and Q-T2FLS [35]. Comparative results show that the proposed design achieves a low prediction error and is suitable for online applications.
Type-2 Fuzzy Logic: Theory and Applications Type-2 fuzzy sets are used for modeling uncertainty and imprecision in a more expressive way than ordinary (type-1) fuzzy sets. These type-2 fuzzy sets were originally presented by Zadeh in 1975 and are essentially "fuzzy fuzzy" sets, where the degree of membership is itself a type-1 fuzzy set. The concepts later introduced by Mendel and Liang allow a type-2 fuzzy set to be characterized by an upper membership function and a lower membership function, each of which can be represented by a type-1 fuzzy set membership function. The region between these two functions is the footprint of uncertainty (FOU), which is used to characterize a type-2 fuzzy set.
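The interval type-2 construction described above is easy to make concrete. The following minimal sketch is not taken from any particular toolbox; the Gaussian shape and the two spread parameters are illustrative assumptions. It builds an interval type-2 membership function from an upper and a lower type-1 Gaussian and prints the resulting membership intervals, whose union over x forms the footprint of uncertainty.

```python
# A minimal sketch of an interval type-2 Gaussian membership function with a
# fixed mean and an uncertain standard deviation in [sigma_lo, sigma_hi].
# The footprint of uncertainty is the region between the lower and upper
# membership functions computed below.
import numpy as np

def it2_gaussian(x, mean, sigma_lo, sigma_hi):
    """Return (lower, upper) membership degrees of x."""
    narrow = np.exp(-0.5 * ((x - mean) / sigma_lo) ** 2)   # narrower Gaussian
    wide = np.exp(-0.5 * ((x - mean) / sigma_hi) ** 2)     # wider Gaussian
    return np.minimum(narrow, wide), np.maximum(narrow, wide)

x = np.linspace(-4.0, 4.0, 9)
lo, hi = it2_gaussian(x, mean=0.0, sigma_lo=0.7, sigma_hi=1.3)
for xi, l, u in zip(x, lo, hi):
    print(f"x={xi:+.1f}  membership in [{l:.3f}, {u:.3f}]")
```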
Approximate Reasoning in the Modeling of Consensus in Group Decisions In this paper we propose an approach to consensus reaching based on linguistically expressed individual opinions and on so-called opinion changing aversion. We operate within this basic context: there is a group of experts who must choose a preferred alternative from a finite set of admissible ones according to several criteria. Each expert is called upon to evaluate the alternatives linguistically in terms of their performance with respect to each criterion. The task of the experts is to reach some agreement during a consensus reaching process directed by a third person called the moderator. The experts are expected subsequently to change their testimonies until sufficient agreement (consensus) has been reached. The measure of consensus depends on a function estimated for each expert according to his/her aversion to opinion change.
Comparative analysis of SAW and TOPSIS based on interval-valued fuzzy sets: Discussions on score functions and weight constraints Interval-valued fuzzy sets involve more uncertainties than ordinary fuzzy sets and can be used to capture imprecise or uncertain decision information in fields that require multiple-criteria decision analysis (MCDA). This paper takes the simple additive weighting (SAW) method and the technique for order preference by similarity to an ideal solution (TOPSIS) as the main structure to deal with interval-valued fuzzy evaluation information. Using an interval-valued fuzzy framework, this paper presents SAW-based and TOPSIS-based MCDA methods and conducts a comparative study through computational experiments. Comprehensive discussions have been made on the influence of score functions and weight constraints, where the score function represents an aggregated effect of positive and negative evaluations in performance ratings and the weight constraint consists of the unbiased condition, positivity bias, and negativity bias. The correlations and contradiction rates obtained in the experiments suggest that evident similarities exist between the interval-valued fuzzy SAW and TOPSIS rankings.
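To make the ranking mechanics concrete, the sketch below runs a plain crisp TOPSIS on a tiny made-up decision matrix; the interval-valued fuzzy variants discussed above replace the scalar ratings and distances with interval-valued fuzzy counterparts, but the overall pipeline (normalize, weight, measure distances to the ideal and anti-ideal solutions, rank by closeness) is the same. The matrix, the weights and the assumption that all criteria are benefit criteria are illustrative, not taken from the paper.

```python
# A minimal crisp TOPSIS sketch with three alternatives and three benefit
# criteria.  Ratings and weights are illustrative.
import numpy as np

ratings = np.array([[7.0, 9.0, 8.0],      # alternative A1
                    [8.0, 7.0, 6.0],      # alternative A2
                    [9.0, 6.0, 7.0]])     # alternative A3
weights = np.array([0.5, 0.3, 0.2])

norm = ratings / np.linalg.norm(ratings, axis=0)   # vector normalization
weighted = norm * weights

ideal_best = weighted.max(axis=0)    # all criteria treated as benefit criteria
ideal_worst = weighted.min(axis=0)

d_best = np.linalg.norm(weighted - ideal_best, axis=1)
d_worst = np.linalg.norm(weighted - ideal_worst, axis=1)
closeness = d_worst / (d_best + d_worst)

for i, c in enumerate(closeness, start=1):
    print(f"A{i}: closeness to ideal = {c:.3f}")
```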
The Logarithmic Nature of QoE and the Role of the Weber-Fechner Law in QoE Assessment The Weber-Fechner Law (WFL) is an important principle in psychophysics which describes the relationship between the magnitude of a physical stimulus and its perceived intensity. With the sensory system of the human body, in many cases this dependency turns out to be of logarithmic nature. Recent quantitative QoE research shows that in several different scenarios a similar logarithmic relationship can be observed between the size of a certain QoS parameter of the communication system and the resulting QoE on the user side as observed during appropriate user trials. In this paper, we discuss this surprising link in more detail. After a brief survey on the background of the WFL, we review its basic implications with respect to related work on QoE assessment for VoIP, most notably the recently published IQX hypothesis, before we present results of our own trials on QoE assessment for mobile broadband scenarios which confirm this dependency also for data services. Finally, we point out some conclusions and directions for further research.
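A minimal sketch of the kind of logarithmic QoE model discussed above, assuming the form QoE = a*ln(QoS) + b and fitting it by least squares to a handful of invented (bandwidth, MOS) points; the paper's own trial data are not reproduced here.

```python
# Fitting a Weber-Fechner-style logarithmic curve, QoE = a*ln(QoS) + b,
# to illustrative (bandwidth, mean opinion score) points.
import numpy as np

bandwidth_mbps = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
mos = np.array([2.1, 2.7, 3.2, 3.8, 4.2, 4.5])   # made-up MOS values

# Linear least squares on the log-transformed predictor.
a, b = np.polyfit(np.log(bandwidth_mbps), mos, deg=1)
print(f"QoE ~ {a:.2f} * ln(bandwidth) + {b:.2f}")
print("predicted MOS at 3 Mbit/s:", round(a * np.log(3.0) + b, 2))
```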
Simple and practical algorithm for sparse Fourier transform We consider the sparse Fourier transform problem: given a complex vector x of length n, and a parameter k, estimate the k largest (in magnitude) coefficients of the Fourier transform of x. The problem is of key interest in several areas, including signal processing, audio/image/video compression, and learning theory. We propose a new algorithm for this problem. The algorithm leverages techniques from digital signal processing, notably Gaussian and Dolph-Chebyshev filters. Unlike the typical approach to this problem, our algorithm is not iterative. That is, instead of estimating "large" coefficients, subtracting them and recursing on the remainder, it identifies and estimates the k largest coefficients in "one shot", in a manner akin to sketching/streaming algorithms. The resulting algorithm is structurally simpler than its predecessors. As a consequence, we are able to extend considerably the range of sparsity, k, for which the algorithm is faster than FFT, both in theory and practice.
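For orientation, the sketch below solves the same problem statement in the naive way: compute the full FFT and keep the k largest-magnitude coefficients. It only illustrates the target output; the paper's sublinear algorithm (filtering and frequency hashing) is not reproduced here, and the signal construction is an illustrative assumption.

```python
# Naive baseline for the sparse Fourier transform problem: full FFT + top-k.
import numpy as np

rng = np.random.default_rng(0)
n, k = 1024, 5

# Build a signal that is exactly k-sparse in the frequency domain, plus noise.
true_freqs = rng.choice(n, size=k, replace=False)
spectrum = np.zeros(n, dtype=complex)
spectrum[true_freqs] = rng.normal(size=k) * 10
x = np.fft.ifft(spectrum) + 0.01 * rng.normal(size=n)

xhat = np.fft.fft(x)
top_k = np.argsort(np.abs(xhat))[-k:]
print("true frequencies:     ", sorted(true_freqs.tolist()))
print("recovered frequencies:", sorted(top_k.tolist()))
```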
Neuroinformatics I: Fuzzy Neural Networks of More-Equal-Less Logic (Static) This article analyzes the possibilities of neural nets composed of neurons - summators of continuously varied impulse frequencies characterized by a non-linearity N - when informational operations of fuzzy logic are performed. Following the findings of neurobiological research, the neurons are divided into stellate and pyramidal ones, and their functional-static characteristics are presented. The operations performed by stellate neurons are characterized as qualitative (not quantitative) informational estimations "more", "less", "equal", i.e., they function according to "more-equal-less" (M-E-L) logic. Pyramidal neurons with suppressing inputs perform algebraic signal operations, and as a result their output signals are controlled by means of the universal logical function "NOR" (negated disjunction; Pierce arrow or dagger function). It is demonstrated how stellate and pyramidal neurons can be used to synthesize neural nets that function in parallel and realize all logical and elementary algebraic functions, as well as to perform conditional, controlled operations of information processing. Such neural nets, functioning by the principles of M-E-L and suppression logic, can perform signal classification, filtering and other informational procedures by non-quantitative assessment, and their informational capacity (the number of qualitative states), depending on the number n of analyzing elements (neurons), is proportional to n! or even to (2n) * n!, i.e., much larger than that of traditional informational automata functioning on the binary principle. In summary, it is stated that such neural nets are parallel-functioning informational subsystems and analog neurocomputers of hybrid action.
1.020052
0.016
0.016
0.014494
0.011614
0.002506
0.000847
0.000304
0.000048
0.000009
0
0
0
0
The Impact of Network and Protocol Heterogeneity on Real-Time Application QoS We evaluate the impact of network and protocol heterogeneity on real-time application performance. We focus on the supportive role of TCP and UDP, also in the context of network stability and fairness. We reach several conclusions on the specific impact of wireless links, MPEG traffic friendliness, and TCP version efficiency. Beyond that, we also reach an unexpected result: UDP traffic is occasionally worse than TCP traffic when the right performance metric is used.
Future Multimedia Networking, Second International Workshop, FMN 2009, Coimbra, Portugal, June 22-23, 2009. Proceedings
QoS modeling for performance evaluation over evolved 3G networks The end-to-end Quality of Service (QoS) must be ensured along the whole network in order to achieve the desired service quality for the end user. In hybrid wired-wireless networks, the wireless subsystem is usually the bottleneck of the whole network. The aim of our work is to obtain a QoS model to evaluate the performance of data services over evolved 3G radio links. This paper focuses on the protocols and mechanisms at the radio interface, which is a variable-rate multiuser and multichannel subsystem. Proposed QoS models for such a scenario include selective retransmissions, adaptive modulation and coding, as well as a cross-layer mechanism that allows the link layer to adapt itself to a dynamically changing channel state. The proposed model is based on a bottom-up approach, which considers the cumulative performance degradation along protocol layers and predicts the performance of different services in specific environments. Numerical parameters at the physical layer resemble those proposed for 3GPP Long Term Evolution (LTE). By means of both analytical (wherever possible) and semi-analytical methods, streaming service quality indicators have been evaluated at different radio layers.
End-to-End QoS for Video Delivery Over Wireless Internet Providing end-to-end quality of service (QoS) support is essential for video delivery over the next-generation wireless Internet. We address several key elements in the end-to-end QoS support, including scalable video representation, network-aware end system, and network QoS provisioning. There are generally two approaches in QoS support: the network-centric and the end-system centric solutions. T...
Traffic data repository at the WIDE project It becomes increasingly important for both network researchers and operators to know the trend of network traffic and to find anomalies in their network traffic. This paper describes an on-going effort within the WIDE project to collect a set of free tools to build a traffic data repository containing detailed information on our backbone traffic. Traffic traces are collected by tcpdump and, after removing privacy information, the traces are made open to the public. We review the issues on user privacy and then the tools used to build the WIDE traffic repository. We will report the current status and findings in the early stage of our IPv6 deployment.
Video quality estimator for wireless mesh networks As Wireless Mesh Networks (WMNs) have been increasingly deployed, where users can share, create and access videos with different characteristics, the need for new quality estimator mechanisms has become important because operators want to control the quality of video delivery and optimize their network resources, while increasing the user satisfaction. However, the development of in-service Quality of Experience (QoE) estimation schemes for Internet videos (e.g., real-time streaming and gaming) with different complexities, motions, Group of Picture (GoP) sizes and contents remains a significant challenge and is crucial for the success of wireless multimedia systems. To address this challenge, we propose a real-time quality estimator approach, HyQoE, for real-time multimedia applications. The performance evaluation in a WMN scenario demonstrates the high accuracy of HyQoE in estimating the Mean Opinion Score (MOS). Moreover, the results highlight the lack of performance of the well-known objective methods and the Pseudo-Subjective Quality Assessment (PSQA) approach.
An effective de-interlacing technique using two types of motion information In this paper, we propose a new de-interlacing algorithm using two types of motion information, i.e., the block-based and the pixel-based motion information. In the proposed scheme, block-wise motion is first calculated using the frame differences. Then, it is refined by the pixel-based motion information. The results of hardware implementation show that the proposed scheme using block-wise motion is more robust to noise than the conventional schemes using pixel-wise motion. Also, the proposed spatial interpolation provides a good visual performance in the case of moving diagonal edges.
An autonomic architecture for optimizing QoE in multimedia access networks The recent emergence of multimedia services, such as Broadcast TV and Video on Demand over traditional twisted pair access networks, has complicated the network management in order to guarantee a decent Quality of Experience (QoE) for each user. The huge amount of services and the wide variety of service specifics require a QoE management on a per-user and per-service basis. This complexity can be tackled through the design of an autonomic QoE management architecture. In this article, the Knowledge Plane is presented as an autonomic layer that optimizes the QoE in multimedia access networks from the service originator to the user. It autonomously detects network problems, e.g. a congested link, bit errors on a link, etc. and determines an appropriate corrective action, e.g. switching to a lower bit rate video, adding an appropriate number of FEC packets, etc. The generic Knowledge Plane architecture is discussed, incorporating the triple design goal of an autonomic, generic and scalable architecture. The viability of an implementation using neural networks is investigated, by comparing it with a reasoner based on analytical equations. Performance results are presented of both reasoners in terms of both QoS and QoE metrics.
On QoE monitoring and E2E service assurance in 4G wireless networks. From the users' and service providers' point of view, the upcoming 4G wireless (WiMAX and LTE) networks are expected to deliver performance-sensitive applications like live mobile TV, video calling, mobile video services, etc. The 4G networks are intended to provide an accurate service view of customer-perceived service quality - their "Quality of Experience" or QoE. Delivering high QoE depends...
Parameterized block-based statistical timing analysis with non-Gaussian parameters, nonlinear delay functions Variability of process parameters makes prediction of digital circuit timing characteristics an important and challenging problem in modern chip design. Recently, statistical static timing analysis (statistical STA) has been proposed as a solution. Unfortunately, the existing approaches either do not consider explicit gate delay dependence on process parameters (Liou, et al., 2001), (Orshansky, et al., 2002), (Devgan, et al., 2003), (Agarwal, et al., 2003) or restrict analysis to linear Gaussian parameters only (Visweswariah, et al., 2004), (Chang, et al., 2003). Here the authors extended the capabilities of parameterized block-based statistical STA (Visweswariah, et al., 2004) to handle nonlinear function of delays and non-Gaussian parameters, while retaining maximum efficiency of processing linear Gaussian parameters. The novel technique improves accuracy in predicting circuit timing characteristics and retains such benefits of parameterized block-based statistical STA as an incremental mode of operation, computation of criticality probabilities and sensitivities to process parameter variations. The authors' technique was implemented in an industrial statistical timing analysis tool. The experiments with large digital blocks showed both efficiency and accuracy of the proposed technique.
Compressed sensing of analog signals in shift-invariant spaces A traditional assumption underlying most data converters is that the signal should be sampled at a rate exceeding twice the highest frequency. This statement is based on a worst-case scenario in which the signal occupies the entire available bandwidth. In practice, many signals are sparse so that only part of the bandwidth is used. In this paper, we develop methods for low-rate sampling of continuous-time sparse signals in shift-invariant (SI) spaces, generated by m kernels with period T. We model sparsity by treating the case in which only k out of the m generators are active, however, we do not know which k are chosen. We show how to sample such signals at a rate much lower than m/T, which is the minimal sampling rate without exploiting sparsity. Our approach combines ideas from analog sampling in a subspace with a recently developed block diagram that converts an infinite set of sparse equations to a finite counterpart. Using these two components we formulate our problem within the framework of finite compressed sensing (CS) and then rely on algorithms developed in that context. The distinguishing feature of our results is that in contrast to standard CS, which treats finite-length vectors, we consider sampling of analog signals for which no underlying finite-dimensional model exists. The proposed framework allows much of the recent literature on CS to be extended to the analog domain.
Stable and efficient reduction of large, multiport RC networks by pole analysis via congruence transformations A novel technique is presented which employs Pole Analysis via Congruence Transformations (PACT) to reduce RC networks in a well-conditioned manner. Pole analysis is shown to be more efficient than Padé approximations when the number of network ports is large, and congruence transformations preserve the passivity (and thus absolute stability) of the networks. Networks are represented by admittance matrices throughout the analysis, and this representation simplifies interfacing the reduced networks with circuit simulators as well as facilitates realization of the reduced networks using RC elements. A prototype SPICE-in, SPICE-out, network reduction CAD tool called RCFIT is detailed, and examples are presented which demonstrate the accuracy and efficiency of the PACT algorithm. 1. INTRODUCTION The trends in industry are to design CMOS VLSI circuits with smaller devices, higher clock speeds, lower power consumption, and more integration of analog and digital circuits; and these increase the importance of modeling layout-dependent parasitics. Resistance and capacitance of interconnect lines can delay transmitted signals. Supply line resistance and capacitance, in combination with package inductance, can lead to large variations of the supply voltage during digital switching and degrade circuit performance. In mixed-signal designs, the current injected into the substrate beneath digital devices may create significant noise in analog components through fluctuations of the local substrate voltage. In order for designers to accurately assess on-chip layout-dependent parasitics before fabrication, macromodels are extracted from a layout and included in the netlist used for circuit simulation. Very often, these effects are modeled solely with
Fuzzy independence and extended conditional probability In many applications, the use of Bayesian probability theory is problematic. The information needed to make the calculations feasible is unavailable. There are different methodologies for dealing with this problem, e.g., maximal entropy and Dempster-Shafer Theory. If one can make independence assumptions, many of the problems disappear, and in fact, this is often the method of choice even when it is obviously incorrect. The notion of independence is a 0-1 concept, which implies that human guesses about its validity will not lead to robust systems. In this paper, we propose a fuzzy formulation of this concept. It should lend itself to probabilistic updating formulas by allowing heuristic estimation of the "degree of independence." We show how this can be applied to compute a new notion of conditional probability (we call this "extended conditional probability"). Given information, one typically has the choice of full conditioning (standard dependence) or ignoring the information (standard independence). We list some desiderata for the extension of this to allowing degree of conditioning. We then show how our formulation of degree of independence leads to a formula fulfilling these desiderata. After describing this formula, we show how this compares with other possible formulations of parameterized independence. In particular, we compare it to a linear interpolant, a higher power of a linear interpolant, and to a notion originally presented by Hummel and Manevitz [Tenth Int. Joint Conf. on Artificial Intelligence, 1987]. Interestingly, it turns out that a transformation of the Hummel-Manevitz method and our "fuzzy" method are close approximations of each other. Two examples illustrate how fuzzy independence and extended conditional probability might be applied. The first shows how linguistic probabilities result from treating fuzzy independence as a linguistic variable. The second is an industrial example of troubleshooting on the shop floor.
Stochastic approximation learning for mixtures of multivariate elliptical distributions Most of the current approaches to mixture modeling consider mixture components from a few families of probability distributions, in particular from the Gaussian family. The reasons of these preferences can be traced to their training algorithms, typically versions of the Expectation-Maximization (EM) method. The re-estimation equations needed by this method become very complex as the mixture components depart from the simplest cases. Here we propose to use a stochastic approximation method for probabilistic mixture learning. Under this method it is straightforward to train mixtures composed by a wide range of mixture components from different families. Hence, it is a flexible alternative for mixture learning. Experimental results are presented to show the probability density and missing value estimation capabilities of our proposal.
1.221815
0.221815
0.221815
0.088727
0.004267
0.002667
0.000821
0.000301
0.000085
0
0
0
0
0
Real-time constrained TCP-compatible rate control for video over the Internet This paper describes a rate control algorithm that captures not only the behavior of TCP's congestion avoidance mechanism but also the delay constraints of real-time streams. Building upon the TFRC protocol, a new protocol has been designed for estimating the bandwidth prediction model parameters. Making use of RTP and RTCP, this protocol allows the characteristics of multimedia flows (variable packet size, delay, ...) to be better taken into account. Given the current channel state estimated by the above protocol, encoder and decoder buffer states as well as delay constraints of the real-time video source are translated into encoder rate constraints. This global rate control model, coupled with an H.263+ loss resilient video compression algorithm, has been extensively experimented with on various Internet links. The experiments clearly demonstrate the benefits of (1) the new protocol used for estimating the bandwidth prediction model parameters, adapted to the characteristics of multimedia flows, and of (2) the global rate control model encompassing source buffers and end-to-end delay characteristics. The overall system significantly reduces source timeouts, and hence minimizes the expected distortion, for a comparable usage of the TCP-compatible predicted bandwidth.
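TFRC-style rate control of the kind built upon here derives its allowed sending rate from the TCP throughput equation. The sketch below evaluates the commonly used form of that equation (with b = 1 and t_RTO approximated as 4*RTT); the packet size, RTT and loss event rate are illustrative values, and this is not the paper's full rate-control model.

```python
# Sketch of the TCP throughput equation used by TFRC-style rate control to
# compute a TCP-compatible sending rate from the loss event rate and RTT.
from math import sqrt

def tfrc_rate(packet_size_bytes, rtt_s, loss_event_rate, b=1.0):
    """Allowed sending rate in bytes per second."""
    p, r = loss_event_rate, rtt_s
    t_rto = 4.0 * r                                  # common approximation
    denom = r * sqrt(2.0 * b * p / 3.0) \
        + t_rto * (3.0 * sqrt(3.0 * b * p / 8.0)) * p * (1.0 + 32.0 * p ** 2)
    return packet_size_bytes / denom

rate = tfrc_rate(packet_size_bytes=1460, rtt_s=0.08, loss_event_rate=0.01)
print(f"TCP-compatible rate ~ {8 * rate / 1e6:.2f} Mbit/s")
```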
A reliable decentralized Peer-to-Peer Video-on-Demand system using helpers. We propose a decentralized Peer-to-Peer (P2P) Video-on-Demand (VoD) system. The traditional data center architecture is eliminated and is replaced by a large set of distributed, dynamic and individually unreliable helpers. The system leverages the strength of numbers to effect reliable cooperative content distribution, removing the drawbacks of conventional data center architectures including complexity of maintenance, high power consumption and lack of scalability. In the proposed VoD system, users and helper "servelets" cooperate in a P2P manner to deliver the video stream. Helpers are preloaded with only a small fraction of parity coded video data packets, and form into swarms each serving partial video content. The total number of helpers is optimized to guarantee high quality of service. In cases of helper churn, the helper network is also able to regenerate itself by users and helpers working cooperatively to repair the lost data, which yields a highly reliable system. Analysis and simulation results corroborate the feasibility and effectiveness of the proposed architecture.
MulTFRC: providing weighted fairness for multimedia applications (and others too!) When data transfers to or from a host happen in parallel, users do not always consider them to have the same importance. Ideally, a transport protocol should therefore allow its users to manipulate the fairness among flows in an almost arbitrary fashion. Since data transfers can also include real-time media streams which need to keep delay - and hence buffers - small, the protocol should also have a smooth sending rate. In an effort to satisfy the above requirements, we present MulTFRC, a congestion control mechanism which is based on the TCP-friendly Rate Control (TFRC) protocol. It emulates the behavior of a number of TFRC flows while maintaining a smooth sending rate. Our simulations and a real-life test demonstrate that MulTFRC performs significantly better than its competitors, potentially making it applicable in a broader range of settings than what TFRC is normally associated with.
Effects Of MGS Fragmentation, Slice Mode And Extraction Strategies On The Performance Of SVC With Medium-Grained Scalability This paper presents a comparison of a wide set of MGS fragmentation configurations of SVC in terms of their PSNR performance, with the slice mode on or off, using multiple extraction methods. We also propose a priority-based hierarchical extraction method which outperforms other extraction schemes for most MGS configurations. Experimental results show that splitting the MGS layer into more than five fragments, when the slice mode is on, may result in noticeable decrease in the average PSNR. It is also observed that for videos with large key frame enhancement NAL units, MGS fragmentation and/or slice mode have positive impact on the PSNR of the extracted video at low bitrates. While using slice mode without MGS fragmentation may improve the PSNR performance at low rates, it may result in uneven video quality within frames due to varying quality of slices. Therefore, we recommend combined use of up to five MGS fragments and slice mode, especially for low bitrate video applications.
Joint Texture And Depth Map Video Coding Based On The Scalable Extension Of H.264/AVC Depth-Image-Based Rendering (DIBR) is widely used for view synthesis in 3D video applications. Compared with traditional 2D video applications, both the texture video and its associated depth map are required for transmission in a communication system that supports DIBR. To efficiently utilize limited bandwidth, coding algorithms, e.g. the Advanced Video Coding (H.264/AVC) standard, can be adopted to compress the depth map using the 4:0:0 chroma sampling format. However, when the correlation between texture video and depth map is exploited, the compression efficiency may be improved compared with encoding them independently using H.264/AVC. A new encoder algorithm which employs Scalable Video Coding (SVC), the scalable extension of H.264/AVC, to compress the texture video and its associated depth map is proposed in this paper. Experimental results show that the proposed algorithm can provide up to 0.97 dB gain for the coded depth maps, compared with the simulcast scheme, wherein texture video and depth map are coded independently by H.264/AVC.
Tribler: A Social-Based Peer-To-Peer System Most current peer-to-peer (P2P) file-sharing systems treat their users as anonymous, unrelated entities, and completely disregard any social relationships between them. However, social phenomena such as friendship and the existence of communities of users with similar tastes or interests may well be exploited in such systems in order to increase their usability and performance. In this paper we present a novel social-based P2P file-sharing paradigm that exploits social phenomena by maintaining social networks and using these in content discovery, content recommendation, and downloading. Based on this paradigm's main concepts such as taste buddies and friends, we have designed and implemented the TRIBLER P2P file-sharing system as a set of extensions to BitTorrent. We present and discuss the design of TRIBLER, and we show evidence that TRIBLER enables fast content discovery and recommendation at a low additional overhead, and a significant improvement in download performance. Copyright (c) 2007 John Wiley & Sons, Ltd.
Multimedia streaming via TCP: An analytic performance study TCP is widely used in commercial multimedia streaming systems, with recent measurement studies indicating that a significant fraction of Internet streaming media is currently delivered over HTTP/TCP. These observations motivate us to develop analytic performance models to systematically investigate the performance of TCP for both live and stored-media streaming. We validate our models via ns simulations and experiments conducted over the Internet. Our models provide guidelines indicating the circumstances under which TCP streaming leads to satisfactory performance, showing, for example, that TCP generally provides good streaming performance when the achievable TCP throughput is roughly twice the media bitrate, with only a few seconds of startup delay.
QoE of YouTube Video Streaming for Current Internet Transport Protocols Video streaming currently dominates global Internet traffic and will be of even increasing importance in the future. In this paper we assess the impact of the underlying transport protocol on the user perceived quality for video streaming using YouTube as example. In particular, we investigate whether UDP or TCP fits better for Video-on-Demand delivery from the end user's perspective, when the video is transmitted over a bottleneck link. For UDP based streaming, the bottleneck link results in spatial and temporal video artifacts, decreasing the video quality. In contrast, in the case of TCP based streaming, the displayed content itself is not disturbed but playback suffers from stalling due to rebuffering. The results of subjective user studies for both scenarios are analyzed in order to assess the transport protocol influences on Quality of Experience of YouTube. To this end, application-level measurements are conducted for YouTube streaming over a network bottleneck in order to develop models for realistic stalling patterns. Furthermore, mapping functions are derived that accurately describe the relationship between network-level impairments and QoE for both protocols.
ENDE: An End-to-end Network Delay Emulator Tool for Multimedia Protocol Development Multimedia applications and protocols are constantly being developed to run over the Internet. A new protocol or application after being developed has to be tested on the real Internet or simulated on a testbed for debugging and performance evaluation. In this paper, we present a novel tool, ENDE, that can emulate end-to-end delays between two hosts without requiring access to the second host. The tool enables the user to test new multimedia protocols realistically on a single machine. In a delay-observing mode, ENDE can generate accurate traces of one-way delays between two hosts on the network. In a delay-impacting mode, ENDE can be used to simulate the functioning of a protocol or an application as if it were running on the network. We will show that ENDE allows accurate estimation of one-way transit times and hence can be used even when the forward and reverse paths are asymmetric between the two hosts. Experimental results are also presented to show that ENDE is fairly accurate in the delay-impacting mode.
Economics of logarithmic Quality-of-Experience in communication networks Utility functions, describing the value of a good or a resource from an end user's point of view, are widely used as an important ingredient for all sorts of microeconomic models. In the context of resource allocation in communication networks, a logarithmic form of utility usually serves as the standard example due to its simplicity and mathematical tractability, with the additional nice property that the corresponding social welfare maximization enforces proportional fairness of allocated bandwidths. In this paper we argue that recent results from Quality of Experience (QoE) research indeed provide additional justification for such a choice, and discuss several examples. Especially for Voice-over-IP and mobile broadband scenarios, there is increasing evidence that user experience follows logarithmic laws similar to the Weber-Fechner Law which is well-known from the area of psychophysics. Eventually, this logarithmic behavior will allow inferring a rather close linkage between subjective user experience, overall social welfare and general fairness issues.
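The link between logarithmic utility and proportional fairness mentioned above can be checked numerically: maximizing a sum of weighted logarithmic utilities over a single shared link of capacity C yields the allocation x_i = w_i / sum(w) * C. The capacity and weights below are illustrative.

```python
# Proportional fairness from log utilities on one bottleneck link: compare the
# closed-form allocation with a coarse brute-force maximization.
import numpy as np

C = 10.0                       # link capacity (e.g. Mbit/s)
w = np.array([1.0, 2.0, 1.0])  # per-flow weights

closed_form = w / w.sum() * C
print("closed-form proportionally fair allocation:", closed_form)

# Brute-force check on a coarse grid over the capacity simplex.
best, best_val = None, -np.inf
for x1 in np.linspace(0.1, C - 0.2, 99):
    for x2 in np.linspace(0.1, C - x1 - 0.1, 99):
        x = np.array([x1, x2, C - x1 - x2])
        val = np.sum(w * np.log(x))
        if val > best_val:
            best, best_val = x, val
print("grid-search maximizer (approximate):      ", np.round(best, 2))
```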
Proactive recovery in a Byzantine-fault-tolerant system This paper describes an asynchronous state-machine replication system that tolerates Byzantine faults, which can be caused by malicious attacks or software errors. Our system is the first to recover Byzantine-faulty replicas proactively and it performs well because it uses symmetric rather than public-key cryptography for authentication. The recovery mechanism allows us to tolerate any number of faults over the lifetime of the system provided fewer than 1/3 of the replicas become faulty within a window of vulnerability that is small under normal conditions. The window may increase under a denial-of-service attack but we can detect and respond to such attacks. The paper presents results of experiments showing that overall performance is good and that even a small window of vulnerability has little impact on service latency.
High-Order Collocation Methods for Differential Equations with Random Inputs Recently there has been a growing interest in designing efficient methods for the solution of ordinary/partial differential equations with random inputs. To this end, stochastic Galerkin methods appear to be superior to other nonsampling methods and, in many cases, to several sampling methods. However, when the governing equations take complicated forms, numerical implementations of stochastic Galerkin methods can become nontrivial and care is needed to design robust and efficient solvers for the resulting equations. On the other hand, the traditional sampling methods, e.g., Monte Carlo methods, are straightforward to implement, but they do not offer convergence as fast as stochastic Galerkin methods. In this paper, a high-order stochastic collocation approach is proposed. Similar to stochastic Galerkin methods, the collocation methods take advantage of an assumption of smoothness of the solution in random space to achieve fast convergence. However, the numerical implementation of stochastic collocation is trivial, as it requires only repetitive runs of an existing deterministic solver, similar to Monte Carlo methods. The computational cost of the collocation methods depends on the choice of the collocation points, and we present several feasible constructions. One particular choice, based on sparse grids, depends weakly on the dimensionality of the random space and is more suitable for highly accurate computations of practical applications with large dimensional random inputs. Numerical examples are presented to demonstrate the accuracy and efficiency of the stochastic collocation methods.
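A one-dimensional sketch of the collocation idea, under simplifying assumptions: the "deterministic solver" is a forward-Euler integration of du/dt = -k*u with u(0) = 1, the single random input is k = 1 + 0.2*Z with Z standard normal, and the solver is run once per Gauss-Hermite collocation point. The exact mean is known in closed form here, which makes the quality of the estimate easy to check.

```python
# Stochastic collocation in one random dimension: repeated runs of an existing
# deterministic solver at Gauss-Hermite nodes, combined with quadrature weights.
import numpy as np

def deterministic_solver(k, T=1.0, steps=2000):
    u, dt = 1.0, T / steps
    for _ in range(steps):
        u += dt * (-k * u)          # forward Euler for du/dt = -k*u
    return u

T = 1.0
nodes, weights = np.polynomial.hermite.hermgauss(5)   # weight exp(-x^2)
# Change of variables so the nodes sample a standard normal Z.
z = np.sqrt(2.0) * nodes
w = weights / np.sqrt(np.pi)

estimate = sum(wi * deterministic_solver(1.0 + 0.2 * zi, T) for wi, zi in zip(w, z))
exact = np.exp(-T + 0.5 * (0.2 * T) ** 2)             # E[exp(-(1 + 0.2 Z) T)]
print(f"collocation estimate: {estimate:.6f}   exact mean: {exact:.6f}")
```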
Performance analysis of partial segmented compressed sampling Recently, a segmented AIC (S-AIC) structure that measures the analog signal by K parallel branches of mixers and integrators (BMIs) was proposed by Taheri and Vorobyov (2011). Each branch is characterized by a random sampling waveform and implements integration in several continuous and non-overlapping time segments. By permuting the subsamples collected by each segment at different BMIs, more than K samples can be generated. To reduce the complexity of the S-AIC, in this paper we propose a partial segmented AIC (PS-AIC) structure, where K branches are divided into J groups and each group, acting as an independent S-AIC, only works within a partial period that is non-overlapping in time. Our structure is inspired by the recent validation that block diagonal matrices satisfy the restricted isometry property (RIP). Using this fact, we prove that the equivalent measurement matrix of the PS-AIC satisfies the RIP when the number of samples exceeds a certain threshold. Furthermore, the recovery performance of the proposed scheme is developed, where the analytical results show its performance gain when compared with the conventional AIC. Simulations verify the effectiveness of the PS-AIC and the validity of our theoretical results.
A compressive sensing-based reconstruction approach to network traffic The traffic matrix of a network describes the end-to-end traffic, which embodies the network-level status of communication networks from origin to destination nodes. It is an important input parameter of network traffic engineering and is very crucial for network operators. However, obtaining accurate end-to-end network traffic is significantly difficult, and thus measuring the traffic matrix precisely is a challenge for operators and researchers. This paper studies the reconstruction of end-to-end network traffic based on compressive sensing. First, a detailed method is proposed to select a set of origin-destination flows to measure. Then a reconstruction model is built from these measured origin-destination flows, and a purely data-driven reconstruction algorithm is presented. Finally, we use traffic data from a real backbone network to verify the approach proposed in this paper.
1.102967
0.100781
0.100781
0.0505
0.050406
0.026547
0.009627
0.000311
0.00007
0
0
0
0
0
Random Projections of Smooth Manifolds We propose a new approach for nonadaptive dimensionality reduction of manifold-modeled data, demonstrating that a small number of random linear projections can preserve key information about a manifold-modeled signal. We center our analysis on the effect of a random linear projection operator Φ: ℝ^N → ℝ^M, M < N, on a smooth well-conditioned K-dimensional submanifold ℳ ⊂ ℝ^N. As our main theoretical contribution, we establish a sufficient number M of random projections to guarantee that, with high probability, all pairwise Euclidean and geodesic distances between points on ℳ are well preserved under the mapping Φ. Our results bear strong resemblance to the emerging theory of Compressed Sensing (CS), in which sparse signals can be recovered from small numbers of random linear measurements. As in CS, the random measurements we propose can be used to recover the original data in ℝ^N. Moreover, like the fundamental bound in CS, our requisite M is linear in the “information level” K and logarithmic in the ambient dimension N; we also identify a logarithmic dependence on the volume and conditioning of the manifold. In addition to recovering faithful approximations to manifold-modeled signals, however, the random projections we propose can also be used to discern key properties about the manifold. We discuss connections and contrasts with existing techniques in manifold learning, a setting where dimensionality reducing mappings are typically nonlinear and constructed adaptively from a set of sampled training data.
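An empirical companion to the statement above, under illustrative dimensions: points on a one-dimensional manifold (a circle isometrically embedded in R^N) are pushed through a random Gaussian projection into R^M, and the worst relative change in pairwise Euclidean distance is reported. This only probes the distance-preservation claim numerically; it does not reproduce the paper's bound on M.

```python
# Random projection of points sampled from a circle embedded in a
# high-dimensional space; report the worst pairwise distance distortion.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
N, M, num_points = 1000, 30, 40

# Sample points on a circle and embed them in R^N via a random isometry.
theta = np.linspace(0.0, 2.0 * np.pi, num_points, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)          # (P, 2)
basis, _ = np.linalg.qr(rng.normal(size=(N, 2)))                   # orthonormal (N, 2)
points = circle @ basis.T                                          # (P, N)

phi = rng.normal(size=(M, N)) / np.sqrt(M)    # scaled random projection
projected = points @ phi.T

worst = 0.0
for i, j in combinations(range(num_points), 2):
    d_orig = np.linalg.norm(points[i] - points[j])
    d_proj = np.linalg.norm(projected[i] - projected[j])
    worst = max(worst, abs(d_proj - d_orig) / d_orig)
print(f"worst relative distance distortion over all pairs: {worst:.3f}")
```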
A Theoretical Analysis of Joint Manifolds The emergence of low-cost sensor architectures for diverse modalities has made it possible to deploy sensor arrays that capture a single event from a large number of vantage points and using multiple modalities. In many scenarios, these sensors acquire very high-dimensional data such as audio signals, images, and video. To cope with such high-dimensional data, we typically rely on low-dimensional models. Manifold models provide a particularly powerful model that captures the structure of high-dimensional data when it is governed by a low-dimensional set of parameters. However, these models do not typically take into account dependencies among multiple sensors. We thus propose a new joint manifold framework for data ensembles that exploits such dependencies. We show that simple algorithms can exploit the joint manifold structure to improve their performance on standard signal processing applications. Additionally, recent results concerning dimensionality reduction for manifolds enable us to formulate a network-scalable data compression scheme that uses random projections of the sensed data. This scheme efficiently fuses the data from all sensors through the addition of such projections, regardless of the data modalities and dimensions.
A simple proof that random matrices are democratic The recently introduced theory of compressive sensing (CS) enables the reconstruction of sparse or compressible signals from a small set of nonadaptive, linear measurements. If properly chosen, the number of measurements can be significantly smaller than the ambient dimension of the signal and yet preserve the significant signal information. Interestingly, it can be shown that random measurement schemes provide a near-optimal encoding in terms of the required number of measurements. In this report, we explore another relatively unexplored, though often alluded to, advantage of using random matrices to acquire CS measurements. Specifically, we show that random matrices are democratic, meaning that each measurement carries roughly the same amount of signal information. We demonstrate that by slightly increasing the number of measurements, the system is robust to the loss of a small number of arbitrary measurements. In addition, we draw connections to oversampling and demonstrate stability from the loss of significantly more measurements.
Random Projections Of Signal Manifolds Random projections have recently found a surprising niche in signal processing. The key revelation is that the relevant structure in a signal can be preserved when that signal is projected onto a small number of random basis functions. Recent work has exploited this fact under the rubric of Compressed Sensing (CS): signals that are sparse in some basis can be recovered from small numbers of random linear projections. In many cases, however, we may have a more specific low-dimensional model for signals in which the signal class forms a nonlinear manifold in R^N. This paper provides preliminary theoretical and experimental evidence that manifold-based signal structure can be preserved using small numbers of random projections. The key theoretical motivation comes from Whitney's Embedding Theorem, which states that a K-dimensional manifold can be embedded in R^(2K+1). We examine the potential applications of this fact. In particular, we consider the task of recovering a manifold-modeled signal from a small number of random projections. Thanks to our more specific model, we can recover certain signals using far fewer measurements than would be required using sparsity-driven CS techniques.
A multiscale framework for Compressive Sensing of video Compressive Sensing (CS) allows the highly efficient acquisition of many signals that could be difficult to capture or encode using conventional methods. From a relatively small number of random measurements, a high-dimensional signal can be recovered if it has a sparse or near-sparse representation in a basis known to the decoder. In this paper, we consider the application of CS to video signals in order to lessen the sensing and compression burdens in single- and multi-camera imaging systems. In standard video compression, motion compensation and estimation techniques have led to improved sparse representations that are more easily compressible; we adapt these techniques for the problem of CS recovery. Using a coarse-to-fine reconstruction algorithm, we alternate between the tasks of motion estimation and motion-compensated wavelet-domain signal recovery. We demonstrate that our algorithm allows the recovery of video sequences from fewer measurements than either frame-by-frame or inter-frame difference recovery methods.
Dequantizing Compressed Sensing: When Oversampling and Non-Gaussian Constraints Combine In this paper, we study the problem of recovering sparse or compressible signals from uniformly quantized measurements. We present a new class of convex optimization programs, or decoders, coined Basis Pursuit DeQuantizer of moment p (BPDQp), that model the quantization distortion more faithfully than the commonly used Basis Pursuit DeNoise (BPDN) program. Our decoders proceed by minimizing the sparsity of the signal to be reconstructed subject to a data-fidelity constraint expressed in the ℓp-norm of the residual error for 2 ≤ p ≤ ∞. We show theoretically that, (i) the reconstruction error of these new decoders is bounded if the sensing matrix satisfies an extended Restricted Isometry Property involving the ℓp-norm, and (ii), for Gaussian random matrices and uniformly quantized measurements, BPDQp performance exceeds that of BPDN by dividing the reconstruction error due to quantization by √(p + 1). This last effect happens with high probability when the number of measurements exceeds a value growing with p, i.e., in an oversampled situation compared to what is commonly required by BPDN (= BPDQ2). To demonstrate the theoretical power of BPDQp, we report numerical simulations on signal and image reconstruction problems.
Compressive wireless sensing Compressive sampling is an emerging theory that is based on the fact that a relatively small number of random projections of a signal can contain most of its salient information. In this paper, we introduce the concept of compressive wireless sensing for sensor networks in which a fusion center retrieves signal field information from an ensemble of spatially distributed sensor nodes. Energy and bandwidth are scarce resources in sensor networks and the relevant metrics of interest in our context are (1) the latency involved in information retrieval; and (2) the associated power-distortion trade-off. It is generally recognized that given sufficient prior knowledge about the sensed data (e.g., statistical characterization, homogeneity, etc.), there exist schemes that have very favorable power-distortion-latency trade-offs. We propose a distributed matched source-channel communication scheme, based in part on recent results in compressive sampling theory, for estimation of sensed data at the fusion center and analyze, as a function of the number of sensor nodes, the trade-offs between power, distortion and latency. Compressive wireless sensing is a universal scheme in the sense that it requires no prior knowledge about the sensed data. This universality, however, comes at the cost of optimality (in terms of a less favorable power-distortion-latency trade-off) and we quantify this cost relative to the case when sufficient prior information about the sensed data is assumed.
Subspace Pursuit for Compressive Sensing: Closing the Gap Between Performance and Complexity We propose a new method for reconstruction of sparse signals with and without noisy perturbations, termed the subspace pursuit algorithm. The algorithm has two important characteristics: low computational complexity, comparable to that of orthogonal matching pursuit techniques, and reconstruction accuracy of the same order as that of LP optimization methods. The presented analysis shows that in the noiseless setting, the proposed algorithm can exactly reconstruct arbitrary sparse signals provided that the sensing matrix satisfies the restricted isometry property with a constant parameter. In the noisy setting and in the case that the signal is not exactly sparse, it can be shown that the mean squared error of the reconstruction is upper bounded by constant multiples of the measurement and signal perturbation energies.
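A minimal sketch of the subspace pursuit iteration described above (expand the support with the K best correlations of the residual, solve least squares on the merged support, prune back to K), run on a small synthetic Gaussian problem. The dimensions are illustrative and the stopping rule is a fixed iteration count rather than the paper's residual-based criterion.

```python
# Subspace pursuit sketch for recovering a K-sparse x from y = A @ x.
import numpy as np

def subspace_pursuit(A, y, K, iters=10):
    n = A.shape[1]
    # Initial support: indices of the K largest correlations with y.
    support = np.argsort(np.abs(A.T @ y))[-K:]
    for _ in range(iters):
        coef_s = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        residual = y - A[:, support] @ coef_s
        # Expand with the K columns most correlated with the residual.
        candidates = np.union1d(support, np.argsort(np.abs(A.T @ residual))[-K:])
        coef_c = np.linalg.lstsq(A[:, candidates], y, rcond=None)[0]
        # Prune back to the K entries of largest magnitude.
        support = candidates[np.argsort(np.abs(coef_c))[-K:]]
    x = np.zeros(n)
    x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x

rng = np.random.default_rng(2)
n, m, K = 256, 80, 8
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, K, replace=False)] = rng.normal(size=K)
x_hat = subspace_pursuit(A, A @ x_true, K)
print("max reconstruction error:", float(np.max(np.abs(x_hat - x_true))))
```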
Block-Sparse Signals: Uncertainty Relations And Efficient Recovery We consider efficient methods for the recovery of block-sparse signals, i.e., sparse signals that have nonzero entries occurring in clusters, from an underdetermined system of linear equations. An uncertainty relation for block-sparse signals is derived, based on a block-coherence measure, which we introduce. We then show that a block-version of the orthogonal matching pursuit algorithm recovers block k-sparse signals in no more than k steps if the block-coherence is sufficiently small. The same condition on block-coherence is shown to guarantee successful recovery through a mixed ℓ2/ℓ1-optimization approach. This complements previous recovery results for the block-sparse case which relied on small block-restricted isometry constants. The significance of the results presented in this paper lies in the fact that making explicit use of block-sparsity can provably yield better reconstruction properties than treating the signal as being sparse in the conventional sense, thereby ignoring the additional structure in the problem.
Bayesian compressive sensing and projection optimization This paper introduces a new problem for which machine-learning tools may make an impact. The problem considered is termed "compressive sensing", in which a real signal of dimension N is measured accurately based on K real measurements. This is achieved under the assumption that the underlying signal has a sparse representation in some basis (e.g., wavelets). In this paper we demonstrate how techniques developed in machine learning, specifically sparse Bayesian regression and active learning, may be leveraged to this new problem. We also point out future research directions in compressive sensing of interest to the machine-learning community.
Approximation and estimation bounds for artificial neural networks For a common class of artificial neural networks, the mean integrated squared error between the estimated network and a target function f is shown to be bounded by O(C_f^2 / n) + O((nd/N) log N), where n is the number of nodes, d is the input dimension of the function, N is the number of training observations, and C_f is the first absolute moment of the Fourier magnitude distribution of f. The two contributions to this total risk are the approximation error and the estimation error. Approximation error refers to the distance between the target function and the closest neural network function of a given architecture, and estimation error refers to the distance between this ideal network function and an estimated network function. With n ~ C_f (N/(d log N))^(1/2) nodes, the order of the bound on the mean integrated squared error is optimized to be O(C_f ((d/N) log N)^(1/2)). The bound demonstrates surprisingly favorable properties of network estimation compared to traditional series and nonparametric curve estimation techniques in the case that d is moderately large. Similar bounds are obtained when the number of nodes n is not preselected as a function of C_f (which is generally not known a priori), but rather the number of nodes is optimized from the observed data by the use of a complexity regularization or minimum description length criterion. The analysis involves Fourier techniques for the approximation error, metric entropy considerations for the estimation error, and a calculation of the index of resolvability of minimum complexity estimation of the family of networks.
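A purely numerical reading of the bound above, ignoring the constants hidden in the O(.) notation: the sketch evaluates the two risk terms C_f^2/n and (n d / N) log N, the node count n ~ C_f (N/(d log N))^(1/2) that balances them, and the optimized total risk C_f ((d/N) log N)^(1/2). The values of C_f, d and N are illustrative.

```python
# Plug illustrative values into the two terms of the risk bound and into the
# optimized node count; constants hidden by the O(.) notation are ignored.
from math import log, sqrt

C_f, d, N = 10.0, 20, 100_000

n_opt = C_f * sqrt(N / (d * log(N)))
approx_err = C_f ** 2 / n_opt              # approximation-error term
estim_err = (n_opt * d / N) * log(N)       # estimation-error term
total = C_f * sqrt((d / N) * log(N))       # optimized total risk (same order)

print(f"optimized number of nodes n ~ {n_opt:.1f}")
print(f"approximation term ~ {approx_err:.4f}, estimation term ~ {estim_err:.4f}")
print(f"optimized total risk ~ {total:.4f}")
```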
Resource Estimation Algorithm Under Impreciseness Using Inclusion Scheduling
Considering the decision maker's attitudinal character to solve multi-criteria decision-making problems in an intuitionistic fuzzy environment Decision makers (DMs) are usually faced with selecting the most suitable alternative from a group of candidates based on a set of criteria. A number of approaches have been proposed to solve such multi-criteria decision-making (MCDM) problems. Intuitionistic fuzzy sets (IFSs) are useful for dealing with the vagueness and uncertainty in a decision-making process because the DM's indeterminacy in the evaluations can be expressed in the decision model. This paper proposes two score functions for evaluating the suitability of an alternative across all criteria in an intuitionistic fuzzy environment, in which the DM's attitudinal character is considered to determine the portion of indeterminacy that will be included in the assessments of alternatives. The DM's attitudinal character is also applied to determine each criterion's weight for the aggregation using the ordered weighted averaging operator. By considering the DM's attitudinal character, the proposed approach is flexible in the decision-making process and applicable to real cases. In addition, the proposed approach can be easily extended to deal with problems in an interval-valued intuitionistic fuzzy environment. Numerical examples are used to illustrate applicability, and comparisons with existing approaches are conducted to demonstrate the feasibility of the proposed approach.
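A generic sketch of how a decision maker's attitudinal character can be folded into the score of an intuitionistic fuzzy value (mu, nu): a fraction lambda of the hesitancy pi = 1 - mu - nu is credited to membership. This illustrates the idea only; it is not necessarily the exact score functions proposed in the paper, and the alternative ratings below are made up.

```python
# Attitude-dependent score for intuitionistic fuzzy values: pessimists
# (attitude = 0) ignore the hesitancy, optimists (attitude = 1) credit all of
# it to membership.  Rankings can change with the attitude.

def attitudinal_score(mu, nu, attitude):
    assert 0.0 <= mu and 0.0 <= nu and mu + nu <= 1.0
    hesitancy = 1.0 - mu - nu
    return mu + attitude * hesitancy

alternatives = {"A1": (0.6, 0.3), "A2": (0.5, 0.1), "A3": (0.7, 0.2)}
for attitude in (0.0, 0.5, 1.0):
    ranking = sorted(alternatives,
                     key=lambda a: attitudinal_score(*alternatives[a], attitude),
                     reverse=True)
    print(f"attitude {attitude:.1f}: ranking {ranking}")
```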
Schopenhauer's Prolegomenon to Fuzziness “Prolegomenon” means something said in advance of something else. In this study, we posit that part of the work by Arthur Schopenhauer (1788–1860) can be thought of as a prolegomenon to the existing concept of “fuzziness.” His epistemic framework offers a comprehensive and surprisingly modern framework to study individual decision making and suggests a bridgeway from the Kantian program into the concept of fuzziness, which may have had its second prolegomenon in the work by Frege, Russell, Wittgenstein, Peirce and Black. In this context, Zadeh's seminal contribution can be regarded as the logical consequence of the Kant-Schopenhauer representation framework.
1.014866
0.010526
0.010526
0.005394
0.004098
0.002111
0.000866
0.000318
0.000051
0.000004
0
0
0
0
An emergency decision making method based on the multiplicative consistency of probabilistic linguistic preference relations As the evolution of emergencies is often uncertain, it may lead to multiple emergency scenarios. According to the characteristics of emergency management, this paper proposes an emergency decision support method by using the probabilistic linguistic preference relations (PLPRs) whose elements are the pairwise comparisons of alternatives given by the decision-makers (DMs) in the form of probabilistic linguistic term sets (PLTSs). As the decision data are limited, it is difficult for the DMs to provide exact occurrence probabilities of all possible emergency scenarios. Thus, we propose a probability correction method by using the computer-aided tool named the case-based reasoning (CBR) to obtain more accurate and reasonable occurrence probabilities of the probabilistic linguistic elements (PLEs). Then, we introduce a multiplicative consistency index to judge whether a PLPR is consistent or not. Afterwards, an acceptable multiplicative consistency-based emergency decision support method is proposed to get more reliable results. Furthermore, a case study about the emergency decision making in a petrochemical plant fire accident is conducted to illustrate the proposed method. Finally, some comparative analyses are performed to demonstrate the feasibility and effectiveness of the proposed method.
A proportional linguistic distribution based model for multiple attribute decision making under linguistic uncertainty. This paper aims at developing a proportional fuzzy linguistic distribution model for multiple attribute decision making problems, which is based on the nature of symbolic linguistic model combined with distributed assessments. Particularly, in this model the evaluation on attributes of alternatives is represented by distributions on the linguistic term set used as an instrument for assessment. In addition, this new model is also able to deal with incomplete linguistic assessments so that it allows evaluators to avoid the dilemma of having to supply complete assessments when not available. As for aggregation and ranking problems of proportional fuzzy linguistic distributions, the extension of conventional aggregation operators as well as the expected utility in this proportional fuzzy linguistic distribution model are also examined. Finally, the proposed model will be illustrated with an application in product evaluation.
Customizing Semantics for Individuals With Attitudinal HFLTS Possibility Distributions. Linguistic computational techniques based on hesitant fuzzy linguistic term set (HFLTS) have been swiftly advanced on various fronts over the past five years. However, one critical issue in the existing theoretical development is that modeling possibility distribution based semantics involves a relatively strict constraint that linguistic terms are uniformly distributed across an HFLTS. Releasing ...
A Fuzzy Linguistic Methodology to Deal With Unbalanced Linguistic Term Sets Many real problems dealing with qualitative aspects use linguistic approaches to assess such aspects. In most of these problems, a uniform and symmetrical distribution of the linguistic term sets for linguistic modeling is assumed. However, there exist problems whose assessments need to be represented by means of unbalanced linguistic term sets, i.e., using term sets that are not uniformly and symmetrically distributed. The use of linguistic variables implies processes of computing with words (CW). Different computational approaches can be found in the literature to accomplish those processes. The 2-tuple fuzzy linguistic representation introduces a computational model that allows the possibility of dealing with linguistic terms in a precise way whenever the linguistic term set is uniformly and symmetrically distributed. In this paper, we present a fuzzy linguistic methodology in order to deal with unbalanced linguistic term sets. To do so, we first develop a representation model for unbalanced linguistic information that uses the concept of linguistic hierarchy as representation basis and afterwards an unbalanced linguistic computational model that uses the 2-tuple fuzzy linguistic computational model to accomplish processes of CW with unbalanced term sets in a precise way and without loss of information.
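A minimal sketch of the 2-tuple translation functions Delta and Delta^{-1} that this computational model relies on (a uniformly distributed term set is assumed here; the unbalanced case in the paper additionally routes values through linguistic hierarchies):

```python
def delta(beta: float, terms: list[str]) -> tuple[str, float]:
    """Delta: map a value beta in [0, g] to a 2-tuple (s_i, alpha)."""
    g = len(terms) - 1
    assert 0.0 <= beta <= g, "beta must lie in the term-index range [0, g]"
    i = int(round(beta))
    alpha = round(beta - i, 6)          # symbolic translation, roughly in [-0.5, 0.5]
    return terms[i], alpha

def delta_inv(label: str, alpha: float, terms: list[str]) -> float:
    """Delta^{-1}: map a 2-tuple back to its numeric equivalent."""
    return terms.index(label) + alpha

# Example with a uniformly distributed term set of granularity 5.
S = ["none", "low", "medium", "high", "perfect"]
print(delta(2.8, S))                    # ('high', -0.2)
print(delta_inv("high", -0.2, S))       # 2.8
```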
Hesitant Fuzzy Linguistic Term Sets for Decision Making Dealing with uncertainty is always a challenging problem, and different tools have been proposed to deal with it. Recently, a new model that is based on hesitant fuzzy sets has been presented to manage situations in which experts hesitate between several values to assess an indicator, alternative, variable, etc. Hesitant fuzzy sets suit the modeling of quantitative settings; however, similar situations may occur in qualitative settings so that experts think of several possible linguistic values or richer expressions than a single term for an indicator, alternative, variable, etc. In this paper, the concept of a hesitant fuzzy linguistic term set is introduced to provide a linguistic and computational basis to increase the richness of linguistic elicitation based on the fuzzy linguistic approach and the use of context-free grammars by using comparative terms. Then, a multicriteria linguistic decision-making model is presented in which experts provide their assessments by eliciting linguistic expressions. This decision model manages such linguistic expressions by means of its representation using hesitant fuzzy linguistic term sets.
Dual Hesitant Fuzzy Sets. In recent decades, several types of sets, such as fuzzy sets, interval-valued fuzzy sets, intuitionistic fuzzy sets, interval-valued intuitionistic fuzzy sets, type 2 fuzzy sets, type n fuzzy sets, and hesitant fuzzy sets, have been introduced and investigated widely. In this paper, we propose dual hesitant fuzzy sets (DHFSs), which encompass fuzzy sets, intuitionistic fuzzy sets, hesitant fuzzy sets, and fuzzy multisets as special cases. Then we investigate the basic operations and properties of DHFSs. We also discuss the relationships among the sets mentioned above, use a notion of nested interval to reflect their common ground, then propose an extension principle of DHFSs. Additionally, we give an example to illustrate the application of DHFSs in group forecasting.
The concept of a linguistic variable and its application to approximate reasoning-III By a linguistic variable we mean a variable whose values are words or sentences in a natural or artificial language. For example, Age is a linguistic variable if its values are linguistic rather than numerical, i.e., young, not young, very young, quite young, old, not very old and not very young, etc., rather than 20, 21, 22, 23. In more specific terms, a linguistic variable is characterized by a quintuple (L, T(L), U, G, M) in which L is the name of the variable; T(L) is the term-set of L, that is, the collection of its linguistic values; U is a universe of discourse; G is a syntactic rule which generates the terms in T(L); and M is a semantic rule which associates with each linguistic value X its meaning, M(X), where M(X) denotes a fuzzy subset of U. The meaning of a linguistic value X is characterized by a compatibility function, c : U -> [0, 1], which associates with each u in U its compatibility with X. Thus, the compatibility of age 27 with young might be 0.7, while that of 35 might be 0.2. The function of the semantic rule is to relate the compatibilities of the so-called primary terms in a composite linguistic value (e.g., young and old in not very young and not very old) to the compatibility of the composite value. To this end, the hedges such as very, quite, extremely, etc., as well as the connectives and and or are treated as nonlinear operators which modify the meaning of their operands in a specified fashion.
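A small sketch of these ideas in code: a primary term young as a compatibility function on the universe U of ages, with the hedge very modeled as concentration (squaring) and not as complementation, following Zadeh's standard treatment; the particular membership curve chosen for young is an illustrative assumption.

```python
def young(u: float) -> float:
    """Illustrative compatibility of age u with the primary term 'young'."""
    if u <= 25:
        return 1.0
    # S-shaped falloff beyond 25; the exact curve is an assumption.
    return max(0.0, min(1.0, 1.0 / (1.0 + ((u - 25) / 5.0) ** 2)))

def very(mu):            # hedge 'very' as concentration: mu(u)**2
    return lambda u: mu(u) ** 2

def not_(mu):            # connective 'not' as complementation: 1 - mu(u)
    return lambda u: 1.0 - mu(u)

# Compatibilities of a few ages with composite linguistic values.
for age in (27, 35):
    print(age, round(young(age), 2), round(very(young)(age), 2),
          round(not_(very(young))(age), 2))
```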
From Computing with Numbers to Computing with Words - From Manipulation of Measurements to Manipulation of Perceptions Interest in issues relating to consciousness has grown markedly during the last several years. And yet, nobody can claim that consciousness is a well-understood concept that lends itself to precise analysis. It may be argued that, as a concept, consciousness is much too complex to fit into the conceptual structure of existing theories based on Aristotelian logic and probability theory. An approach suggested in this paper links consciousness to perceptions and perceptions to their descriptors in a natural language. In this way, those aspects of consciousness which relate to reasoning and concept formation are linked to what is referred to as the methodology of computing with words (CW). Computing, in its usual sense, is centered on manipulation of numbers and symbols. In contrast, computing with words, or CW for short, is a methodology in which the objects of computation are words and propositions drawn from a natural language (e.g., small, large, far, heavy, not very likely, the price of gas is low and declining, Berkeley is near San Francisco, it is very unlikely that there will be a significant increase in the price of oil in the near future, etc.). Computing with words is inspired by the remarkable human capability to perform a wide variety of physical and mental tasks without any measurements and any computations. Familiar examples of such tasks are parking a car, driving in heavy traffic, playing golf, riding a bicycle, understanding speech, and summarizing a story. Underlying this remarkable capability is the brain's crucial ability to manipulate perceptions--perceptions of distance, size, weight, color, speed, time, direction, force, number, truth, likelihood, and other characteristics of physical and mental objects. Manipulation of perceptions plays a key role in human recognition, decision and execution processes. As a methodology, computing with words provides a foundation for a computational theory of perceptions: a theory which may have an important bearing on how humans make--and machines might make--perception-based rational decisions in an environment of imprecision, uncertainty, and partial truth. A basic difference between perceptions and measurements is that, in general, measurements are crisp, whereas perceptions are fuzzy. One of the fundamental aims of science has been and continues to be that of progressing from perceptions to measurements. Pursuit of this aim has led to brilliant successes. We have sent men to the moon; we can build computers that are capable of performing billions of computations per second; we have constructed telescopes that can explore the far reaches of the universe; and we can date the age of rocks that are millions of years old. But alongside the brilliant successes stand conspicuous underachievements and outright failures. We cannot build robots that can move with the agility of animals or humans; we cannot automate driving in heavy traffic; we cannot translate from one language to another at the level of a human interpreter; we cannot create programs that can summarize non-trivial stories; our ability to model the behavior of economic systems leaves much to be desired; and we cannot build machines that can compete with children in the performance of a wide variety of physical and cognitive tasks. 
It may be argued that underlying the underachievements and failures is the unavailability of a methodology for reasoning and computing with perceptions rather than measurements. An outline of such a methodology--referred to as a computational theory of perceptions--is presented in this paper. The computational theory of perceptions (CTP) is based on the methodology of CW. In CTP, words play the role of labels of perceptions, and, more generally, perceptions are expressed as propositions in a natural language. CW-based techniques are employed to translate propositions expressed in a natural language into what is called the Generalized Constraint Language (GCL). In this language, the meaning of a proposition is expressed as a generalized constraint, X isr R, where X is the constrained variable, R is the constraining relation, and isr is a variable copula in which r is an indexing variable whose value defines the way in which R constrains X. Among the basic types of constraints are possibilistic, veristic, probabilistic, random set, Pawlak set, fuzzy graph, and usuality. The wide variety of constraints in GCL makes GCL a much more expressive language than the language of predicate logic. In CW, the initial and terminal data sets, IDS and TDS, are assumed to consist of propositions expressed in a natural language. These propositions are translated, respectively, into antecedent and consequent constraints. Consequent constraints are derived from antecedent constraints through the use of rules of constraint propagation. The principal constraint propagation rule is the generalized extension principle. (ABSTRACT TRUNCATED)
Cubature Kalman Filters In this paper, we present a new nonlinear filter for high-dimensional state estimation, which we have named the cubature Kalman filter (CKF). The heart of the CKF is a spherical-radial cubature rule, which makes it possible to numerically compute multivariate moment integrals encountered in the nonlinear Bayesian filter. Specifically, we derive a third-degree spherical-radial cubature rule that provides a set of cubature points scaling linearly with the state-vector dimension. The CKF may therefore provide a systematic solution for high-dimensional nonlinear filtering problems. The paper also includes the derivation of a square-root version of the CKF for improved numerical stability. The CKF is tested experimentally in two nonlinear state estimation problems. In the first problem, the proposed cubature rule is used to compute the second-order statistics of a nonlinearly transformed Gaussian random variable. The second problem addresses the use of the CKF for tracking a maneuvering aircraft. The results of both experiments demonstrate the improved performance of the CKF over conventional nonlinear filters.
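A minimal sketch of the third-degree spherical-radial cubature rule at the heart of the CKF: 2n points located at plus/minus sqrt(n) along the unit axes, each with weight 1/(2n), used here to approximate the mean and covariance of a nonlinearly transformed Gaussian (NumPy only; the test nonlinearity is an arbitrary example, not one from the paper).

```python
import numpy as np

def cubature_points(mean: np.ndarray, cov: np.ndarray):
    """Third-degree spherical-radial cubature points for N(mean, cov)."""
    n = mean.size
    S = np.linalg.cholesky(cov)                            # cov = S S^T
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # 2n unit directions
    pts = mean[:, None] + S @ xi                           # equal weights 1/(2n)
    return pts, np.full(2 * n, 1.0 / (2 * n))

def propagate(f, mean, cov):
    """Approximate mean/covariance of y = f(x) for x ~ N(mean, cov)."""
    pts, w = cubature_points(mean, cov)
    Y = np.column_stack([f(pts[:, i]) for i in range(pts.shape[1])])
    m_y = Y @ w
    C_y = (Y - m_y[:, None]) @ np.diag(w) @ (Y - m_y[:, None]).T
    return m_y, C_y

# Example: a polar-to-Cartesian style nonlinearity.
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
m, C = propagate(f, np.array([10.0, 0.5]), np.diag([0.5, 0.01]))
print(m, C, sep="\n")
```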
Informative Sensing Compressed sensing is a recent set of mathematical results showing that sparse signals can be exactly reconstructed from a small number of linear measurements. Interestingly, for ideal sparse signals with no measurement noise, random measurements allow perfect reconstruction while measurements based on principal component analysis (PCA) or independent component analysis (ICA) do not. At the same time, for other signal and noise distributions, PCA and ICA can significantly outperform random projections in terms of enabling reconstruction from a small number of measurements. In this paper we ask: given the distribution of signals we wish to measure, what is the optimal set of linear projections for compressed sensing? We consider the problem of finding a small number of linear projections that are maximally informative about the signal. Formally, we use the InfoMax criterion and seek to maximize the mutual information between the signal, x, and the (possibly noisy) projection y = Wx. We show that in general the optimal projections are not the principal components of the data nor random projections, but rather a seemingly novel set of projections that capture what is still uncertain about the signal, given the knowledge of distribution. We present analytic solutions for certain special cases including natural images. In particular, for natural images, the near-optimal projections are bandwise random, i.e., incoherent to the sparse bases at a particular frequency band but with more weights on the low frequencies, which has a physical relation to the multi-resolution representation of images.
Reduce and Boost: Recovering Arbitrary Sets of Jointly Sparse Vectors The rapid developing area of compressed sensing suggests that a sparse vector lying in a high dimensional space can be accurately and efficiently recovered from only a small set of nonadaptive linear measurements, under appropriate conditions on the measurement matrix. The vector model has been extended both theoretically and practically to a finite set of sparse vectors sharing a common sparsity pattern. In this paper, we treat a broader framework in which the goal is to recover a possibly infinite set of jointly sparse vectors. Extending existing algorithms to this model is difficult due to the infinite structure of the sparse vector set. Instead, we prove that the entire infinite set of sparse vectors can be recovered by solving a single, reduced-size finite-dimensional problem, corresponding to recovery of a finite set of sparse vectors. We then show that the problem can be further reduced to the basic model of a single sparse vector by randomly combining the measurements. Our approach is exact for both countable and uncountable sets, as it does not rely on discretization or heuristic techniques. To efficiently find the single sparse vector produced by the last reduction step, we suggest an empirical boosting strategy that improves the recovery ability of any given suboptimal method for recovering a sparse vector. Numerical experiments on random data demonstrate that, when applied to infinite sets, our strategy outperforms discretization techniques in terms of both run time and empirical recovery rate. In the finite model, our boosting algorithm has fast run time and much higher recovery rate than known popular methods.
R-POPTVR: a novel reinforcement-based POPTVR fuzzy neural network for pattern classification. In general, a fuzzy neural network (FNN) is characterized by its learning algorithm and its linguistic knowledge representation. However, it does not necessarily interact with its environment when the training data is assumed to be an accurate description of the environment under consideration. In interactive problems, it would be more appropriate for an agent to learn from its own experience through interactions with the environment, i.e., reinforcement learning. In this paper, three clustering algorithms are developed based on the reinforcement learning paradigm. This allows a more accurate description of the clusters as the clustering process is influenced by the reinforcement signal. They are the REINFORCE clustering technique I (RCT-I), the REINFORCE clustering technique II (RCT-II), and the episodic REINFORCE clustering technique (ERCT). The integrations of the RCT-I, the RCT-II, and the ERCT within the pseudo-outer product truth value restriction (POPTVR), which is a fuzzy neural network integrated with the truth restriction value (TVR) inference scheme in its five layered feedforward neural network, form the RPOPTVR-I, the RPOPTVR-II, and the ERPOPTVR, respectively. The Iris, Phoneme, and Spiral data sets are used for benchmarking. For both Iris and Phoneme data, the RPOPTVR is able to yield better classification results which are higher than the original POPTVR and the modified POPTVR over the three test trials. For the Spiral data set, the RPOPTVR-II is able to outperform the others by at least a margin of 5.8% over multiple test trials. The three reinforcement-based clustering techniques applied to the POPTVR network are able to exhibit the trial-and-error search characteristic that yields higher qualitative performance.
Fuzzy Power Command Enhancement in Mobile Communications Systems
Performance and Quality Evaluation of a Personalized Route Planning System Advanced personalization of database applications is a big challenge, in particular for distributed mobile environments. We present several new results from a prototype of a route planning system. We demonstrate how to combine qualitative and quantitative preferences gained from situational aspects and from personal user preferences. For performance studies we analyze the runtime efficiency of the SR-Combine algorithm used to evaluate top-k queries. By determining the cost-ratio of random to sorted accesses, SR-Combine can automatically tune its performance within the given system architecture. Top-k queries are generated by mapping linguistic variables to numerical weightings. Moreover, we analyze the quality of the query results by several test series, systematically varying the mappings of the linguistic variables. We report interesting insights into this rather under-researched but important topic. More investigations, also incorporating cognitive issues, need to be conducted in the future.
1.24
0.24
0.12
0.004385
0.002853
0.000533
0.000098
0.000008
0
0
0
0
0
0
Data-driven uncertainty quantification using the arbitrary polynomial chaos expansion. We discuss the arbitrary polynomial chaos (aPC), which has been subject of research in a few recent theoretical papers. Like all polynomial chaos expansion techniques, aPC approximates the dependence of simulation model output on model parameters by expansion in an orthogonal polynomial basis. The aPC generalizes chaos expansion techniques towards arbitrary distributions with arbitrary probability measures, which can be either discrete, continuous, or discretized continuous and can be specified either analytically (as probability density/cumulative distribution functions), numerically as histogram or as raw data sets. We show that the aPC at finite expansion order only demands the existence of a finite number of moments and does not require the complete knowledge or even existence of a probability density function. This avoids the necessity to assign parametric probability distributions that are not sufficiently supported by limited available data. Alternatively, it allows modellers to choose freely of technical constraints the shapes of their statistical assumptions. Our key idea is to align the complexity level and order of analysis with the reliability and detail level of statistical information on the input parameters. We provide conditions for existence and clarify the relation of the aPC to statistical moments of model parameters. We test the performance of the aPC with diverse statistical distributions and with raw data. In these exemplary test cases, we illustrate the convergence with increasing expansion order and, for the first time, with increasing reliability level of statistical input information. Our results indicate that the aPC shows an exponential convergence rate and converges faster than classical polynomial chaos expansion techniques.
A Kinship Function Approach to Robust and Probabilistic Optimization Under Polynomial Uncertainty In this paper, we study a class of robust design problems with polynomial dependence on the uncertainty. One of the main motivations for considering these problems comes from robust controller design, where one often encounters systems that depend polynomially on the uncertain parameters. This paper can be seen as integrated in the emerging area of probabilistic robustness, where a probabilistic relaxation of the original robust problem is adopted, thus requiring the satisfaction of the constraints not for all possible values of the uncertainty, but for most of them. Different from the randomized approach for tackling probabilistic relaxations, which is only guaranteed to provide soft bounds on the probability of satisfaction, we present a deterministic approach based on the novel concept of kinship function introduced in this paper. This allows the development of an original framework, which leads to easily computable deterministic convex relaxations of the probabilistic problem. In particular, optimal polynomial kinship functions are introduced, which can be computed a priori and once for all and provide the “best convex bound” on the probability of constraint violation. More importantly, it is proven that the solution of the relaxed problem converges to that of the original robust optimization problem as the degree of the optimal polynomial kinship function increases. Furthermore, by relying on quadrature formulas for computation of integrals of polynomials, it is shown that the computational complexity of the proposed approach is polynomial in the number of uncertain parameters. Finally, unlike other deterministic approaches to robust polynomial optimization, the number of variables in the ensuing optimization problem is not increased by the proposed approximation. An important feature of this approach is that a significant amount of the computational burden is shifted to a one-time offline computation whose results can be stored and provided to the end-user.
Stochastic Discrete Equation Method (sDEM) for two-phase flows A new scheme for the numerical approximation of a five-equation model taking into account Uncertainty Quantification (UQ) is presented. In particular, the Discrete Equation Method (DEM) for the discretization of the five-equation model is modified for including a formulation based on the adaptive Semi-Intrusive (aSI) scheme, thus yielding a new intrusive scheme (sDEM) for simulating stochastic two-phase flows. Some reference test-cases are performed in order to demonstrate the convergence properties and the efficiency of the overall scheme. The propagation of initial conditions uncertainties is evaluated in terms of mean and variance of several thermodynamic properties of the two phases.
SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.
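A compact sketch of the moment-based construction referenced here: given raw moments m_0..m_{2k} of the input data, the Hankel matrix of moments is factorized and its inverse Cholesky factor yields the coefficients of an orthonormal aPC basis. This is illustrative NumPy code under that assumption, not the authors' implementation; the extraction of Gaussian quadrature points and the sparse multivariate extension are omitted.

```python
import numpy as np

def apc_basis_from_moments(moments: np.ndarray, order: int) -> np.ndarray:
    """Coefficient matrix P of orthonormal polynomials p_i(x) = sum_j P[i, j] * x**j.

    moments must contain m_0 .. m_{2*order}; the (order+1) x (order+1) Hankel
    matrix H[i, j] = m_{i+j} is the Gram matrix of the monomials 1, x, ..., x^order.
    """
    k = order + 1
    H = np.array([[moments[i + j] for j in range(k)] for i in range(k)])
    L = np.linalg.cholesky(H)     # requires the moment determinant to be positive
    return np.linalg.inv(L)       # rows are the orthonormal-polynomial coefficients

# Example: moments estimated directly from a raw data set (no assumed PDF).
data = np.random.default_rng(0).exponential(size=10_000)
m = np.array([np.mean(data ** j) for j in range(2 * 3 + 1)])
P = apc_basis_from_moments(m, order=3)

# Check orthonormality with respect to the empirical measure.
V = np.vander(data, 4, increasing=True)       # columns 1, x, x^2, x^3
Phi = V @ P.T                                 # p_0..p_3 evaluated at the data
print(np.round(Phi.T @ Phi / data.size, 2))   # approximately the identity
```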
A one-time truncate and encode multiresolution stochastic framework In this work a novel adaptive strategy for stochastic problems, inspired by the classical Harten's framework, is presented. The proposed algorithm allows building, in a very general manner, stochastic numerical schemes starting from any type of deterministic scheme and handling a large class of problems, from unsteady to discontinuous solutions. Its formulation permits recovering the same results concerning the interpolation theory of the classical multiresolution approach, but with an extension to uncertainty quantification problems. The present strategy permits building numerical schemes with a higher accuracy with respect to other classical uncertainty quantification techniques, but with a strong reduction of the numerical cost and memory requirements. Moreover, the flexibility of the proposed approach allows employing any kind of probability density function, even discontinuous and time varying, without introducing further complications in the algorithm. The advantages of the present strategy are demonstrated on several numerical problems where different forms of uncertainty distributions are taken into account, such as discontinuous and unsteady custom-defined probability density functions. In addition to algebraic and ordinary differential equations, numerical results for the challenging 1D Kraichnan-Orszag problem are reported in terms of accuracy and convergence. Finally, a two degree-of-freedom aeroelastic model for a subsonic case is presented. Though quite simple, the model allows recovering some key physical aspects of the fluid/structure interaction, thanks to the quasi-steady aerodynamic approximation employed. The injection of an uncertainty is chosen in order to obtain a complete parameterization of the mass matrix. All the numerical results are compared with the classical Monte Carlo solution and with a non-intrusive Polynomial Chaos method.
Galerkin Methods for Stochastic Hyperbolic Problems Using Bi-Orthogonal Polynomials This work is concerned with scalar transport equations with random transport velocity. We first give some sufficient conditions that can guarantee the solution to be in appropriate random spaces. Then a Galerkin method using bi-orthogonal polynomials is proposed, which decouples the equation in the random spaces, yielding a sequence of uncoupled equations. Under the assumption that the random wave field has a structure of the truncated KL expansion, a principle on how to choose the orders of the approximated polynomial spaces is given based on the sensitivity analysis in the random spaces. By doing this, the total degree of freedom can be reduced significantly. Numerical experiments are carried out to illustrate the efficiency of the proposed method.
A non-adapted sparse approximation of PDEs with stochastic inputs We propose a method for the approximation of solutions of PDEs with stochastic coefficients based on the direct, i.e., non-adapted, sampling of solutions. This sampling can be done by using any legacy code for the deterministic problem as a black box. The method converges in probability (with probabilistic error bounds) as a consequence of sparsity and a concentration of measure phenomenon on the empirical correlation between samples. We show that the method is well suited for truly high-dimensional problems.
A comparison of three methods for selecting values of input variables in the analysis of output from a computer code Two types of sampling plans are examined as alternatives to simple random sampling in Monte Carlo studies. These plans are shown to be improvements over simple random sampling with respect to variance for a class of estimators which includes the sample mean and the empirical distribution function.
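A minimal sketch of Latin hypercube sampling, the stratified alternative to simple random sampling examined in this comparison (NumPy only; one stratum per sample in each input dimension, with independent random permutations across dimensions).

```python
import numpy as np

def latin_hypercube(n_samples: int, n_dims: int, rng=None) -> np.ndarray:
    """Return an n_samples x n_dims LHS design on the unit hypercube."""
    rng = np.random.default_rng(rng)
    u = rng.random((n_samples, n_dims))            # jitter within each stratum
    samples = np.empty_like(u)
    for d in range(n_dims):
        perm = rng.permutation(n_samples)          # shuffle strata per dimension
        samples[:, d] = (perm + u[:, d]) / n_samples
    return samples

# Compare the variance of the sample mean of a smooth function of the inputs
# under simple random sampling and under LHS (LHS is typically much smaller).
rng = np.random.default_rng(1)
f = lambda x: np.sum(x, axis=1) ** 2
srs = [f(rng.random((50, 3))).mean() for _ in range(200)]
lhs = [f(latin_hypercube(50, 3, rng)).mean() for _ in range(200)]
print(np.var(srs), np.var(lhs))
```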
Tensor rank is NP-complete We prove that computing the rank of a three-dimensional tensor over any finite field is NP-complete. Over the rational numbers the problem is NP-hard.
Preference Modelling This paper provides the reader with a presentation of preference modelling fundamental notions as well as some recent results in this field. Preference modelling is an inevitable step in a variety of fields: economy, sociology, psychology, mathematical programming, even medicine, archaeology, and obviously decision analysis. Our notation and some basic definitions, such as those of binary relation, properties and ordered sets, are presented at the beginning of the paper. We start by discussing different reasons for constructing a model of preference. We then go through a number of issues that influence the construction of preference models. Different formalisations besides classical logic, such as fuzzy sets and non-classical logics, become necessary. We then present different types of preference structures reflecting the behavior of a decision-maker: classical, extended and valued ones. It is relevant to have a numerical representation of preferences: functional representations, value functions. The concepts of thresholds and minimal representation are also introduced in this section. In section 7, we briefly explore the concept of deontic logic (logic of preference) and other formalisms associated with "compact representation of preferences" introduced for spe-
Fuzzy set theoretical approach to document retrieval The aim of a document retrieval system is to issue documents which contain the information needed by a given user of an information system. The process of retrieving documents in response to a given query is carried out by means of the search patterns of these documents and the query. It is thus clear that the quality of this process, i.e. the pertinence of the information system response to the information need of a given user depends on the degree of accuracy in which document and query contents are represented by their search patterns. It seems obvious that the weighting of descriptors entering document search patterns improves the quality of the document retrieval process.
VariaSim: simulating circuits and systems in the presence of process variability In this paper, we present VariaSim, the publicly available Static Statistical Timing Analysis (SSTA) Tool from Duke University. VariaSim enables researchers to analyze the impact of CMOS process variability on the behavior of circuits and systems.
The Inherent Indistinguishability in Fuzzy Systems This paper provides an overview of fuzzy systems from the viewpoint of similarity relations. Similarity relations turn out to be an appealing framework in which typical concepts and techniques applied in fuzzy systems and fuzzy control can be better understood and interpreted. They can also be used to describe the indistinguishability inherent in any fuzzy system that cannot be avoided.
Stochastic Behavioral Modeling and Analysis for Analog/Mixed-Signal Circuits It has become increasingly challenging to model the stochastic behavior of analog/mixed-signal (AMS) circuits under large-scale process variations. In this paper, a novel moment-matching-based method has been proposed to accurately extract the probabilistic behavioral distributions of AMS circuits. This method first utilizes Latin hypercube sampling coupling with a correlation control technique to generate a few samples (e.g., sample size is linear with number of variable parameters) and further analytically evaluate the high-order moments of the circuit behavior with high accuracy. In this way, the arbitrary probabilistic distributions of the circuit behavior can be extracted using moment-matching method. More importantly, the proposed method has been successfully applied to high-dimensional problems with linear complexity. The experiments demonstrate that the proposed method can provide up to 1666X speedup over crude Monte Carlo method for the same accuracy.
1.020533
0.02
0.02
0.013333
0.006667
0.002857
0.000324
0.000003
0
0
0
0
0
0
A multi-stream adaptation framework for bandwidth management in 3D tele-immersion Tele-immersive environments will improve the state of collaboration among distributed participants. However, along with the promise a new set of challenges have emerged including the real-time acquisition, streaming and rendering of 3D scenes to convey a realistic sense of immersive spaces. Unlike 2D video conferencing, a 3D tele-immersive environment employs multiple 3D cameras to cover a much wider field of view, thus generating a very large volume of data that need to be carefully coordinated, organized, and synchronized for Internet transmission, rendering and display. This is a challenging task and a dynamic bandwidth management must be in place. To achieve this goal, we propose a multi-stream adaptation framework for bandwidth management in 3D tele-immersion. The adaptation framework relies on the hierarchy of mechanisms and services that exploits the semantic link of multiple 3D video streams in the tele-immersive environment. We implement a prototype of the framework that integrates semantic stream selection, content adaptation, and 3D data compression services with user preference. The experimental results have demonstrated that the framework shows a good quality of the resulting composite 3D rendered video in case of sufficient bandwidth, while it adapts individual 3D video streams in a coordinated and user-friendly fashion, and yields graceful quality degradation in case of low bandwidth availability.
Adaptive streaming of multi-view video over P2P networks In this paper, we propose a novel solution for the adaptive streaming of 3D representations in the form of multi-view video by utilizing P2P overlay networks to assist the media delivery and minimize the bandwidth requirement at the server side. Adaptation to diverse network conditions is performed with regard to the features of human perception to maximize the perceived 3D quality. We have performed subjective tests to characterize these features and determined the best adaptation method to achieve the highest possible perceived quality. Moreover, we provide a novel method for mapping from a scalable video elementary stream to torrent-like data chunks for adaptive video streaming and provide an optimized windowing mechanism that ensures timely delivery of the content over the P2P overlay. The paper also describes techniques for generating scalable video chunks and methods for determining system parameters such as chunk size and window length.
A reliable decentralized Peer-to-Peer Video-on-Demand system using helpers. We propose a decentralized Peer-to-Peer (P2P) Video-on-Demand (VoD) system. The traditional data center architecture is eliminated and is replaced by a large set of distributed, dynamic and individually unreliable helpers. The system leverages the strength of numbers to effect reliable cooperative content distribution, removing the drawbacks of conventional data center architectures including complexity of maintenance, high power consumption and lack of scalability. In the proposed VoD system, users and helper "servelets" cooperate in a P2P manner to deliver the video stream. Helpers are preloaded with only a small fraction of parity coded video data packets, and form into swarms each serving partial video content. The total number of helpers is optimized to guarantee high quality of service. In cases of helper churn, the helper network is also able to regenerate itself by users and helpers working cooperatively to repair the lost data, which yields a highly reliable system. Analysis and simulation results corroborate the feasibility and effectiveness of the proposed architecture.
Joint video/depth/FEC rate allocation with considering 3D visual saliency for scalable 3D video streaming For robust video plus depth based 3D video streaming, video, depth and packet-level forward error correction (FEC) can provide many rate combinations with various 3D visual qualities to adapt to the dynamic channel conditions. Video/depth/FEC rate allocation under the channel bandwidth constraint is an important optimization problem for robust 3D video streaming. This paper proposes a joint video/depth/FEC rate allocation method by maximizing the receiver's 3D visual quality. Through predicting the perceptual 3D visual qualities of the different video/depth/FEC rate combinations, the optimal GOP-level video/depth/FEC rate combination can be found. Further, the selected FEC rates are unequally assigned to different levels of 3D saliency regions within each video/depth frame. The effectiveness of the proposed 3D saliency based joint video/depth/FEC rate allocation method for scalable 3D video streaming is validated by extensive experimental results.
Bandwidth-aware multiple multicast tree formation for P2P scalable video streaming using hierarchical clusters Peer-to-peer (P2P) video streaming is a promising method for multimedia distribution over the Internet, yet many problems remain to be solved such as providing the best quality of service to each peer in proportion to its available resources, low-delay, and fault tolerance. In this paper, we propose a new bandwidth-aware multiple multicast tree formation procedure built on top of a hierarchical cluster based P2P overlay architecture for scalable video (SVC) streaming. The tree formation procedure considers number of sources, SVC layer rates available at each source, as well as delay and available bandwidth over links in an attempt to maximize the quality of received video at each peer. Simulations are performed on NS2 with 500 nodes to demonstrate that the overall performance of the system in terms of average received video quality of all peers is significantly better if peers with higher available bandwidth are placed higher up in the trees and peers with lower bandwidth are near the leaves.
BASS: BitTorrent Assisted Streaming System for Video-on-Demand This paper introduces a hybrid server/P2P streaming system called BitTorrent-Assisted Streaming System (BASS) for large-scale Video-on-Demand (VoD) services. By distributing the load among P2P connections as well as maintaining active server connections, BASS can increase the system scalability while decreasing media playout wait times. To analyze the benefits of BASS, we examine torrent trace data collected in the first week of distribution for Fedora Core 3 and develop an empirical model of BitTorrent client performance. Based on this, we run trace-based simulations to evaluate BASS and show that it is more scalable than current unicast solutions and can greatly decrease the average waiting time before playback.
State of the Art in Stereoscopic and Autostereoscopic Displays Underlying principles of stereoscopic direct-view displays, binocular head-mounted displays, and autostereoscopic direct-view displays are explained and some early work as well as the state of the art in those technologies are reviewed. Stereoscopic displays require eyewear and can be categorized based on the multiplexing scheme as: 1) color multiplexed (old technology but there are some recent developments; low-quality due to color reproduction and crosstalk issues; simple and does not require additional electronics hardware); 2) polarization multiplexed (requires polarized light output and polarization-based passive eyewear; high-resolution and high-quality displays available); and 3) time multiplexed (requires faster display hardware and active glasses synchronized with the display; high-resolution commercial products available). Binocular head-mounted displays can readily provide 3-D, virtual images, immersive experience, and more possibilities for interactive displays. However, the bulk of the optics, matching of the left and right ocular images and obtaining a large field of view make the designs quite challenging. Some of the recent developments using unconventional optical relays allow for thin form factors and open up new possibilities. Autostereoscopic displays are very attractive as they do not require any eyewear. There are many possibilities in this category including: two-view (the simplest implementations are with a parallax barrier or a lenticular screen), multiview, head tracked (requires active optics to redirect the rays to a moving viewer), and super multiview (potentially can solve the accommodation-convergence mismatch problem). Earlier 3-D booms did not last long mainly due to the unavailability of enabling technologies and the content. Current developments in the hardware technologies provide a renewed interest in 3-D displays both from the consumers and the display manufacturers, which is evidenced by the recent commercial products and new research results in this field.
View synthesis prediction for multiview video coding We propose a rate-distortion-optimized framework that incorporates view synthesis for improved prediction in multiview video coding. In the proposed scheme, auxiliary information, including depth data, is encoded and used at the decoder to generate the view synthesis prediction data. The proposed method employs optimal mode decision including view synthesis prediction, and sub-pixel reference matching to improve prediction accuracy of the view synthesis prediction. Novel variants of the skip and direct modes are also presented, which infer the depth and correction vector information from neighboring blocks in a synthesized reference picture to reduce the bits needed for the view synthesis prediction mode. We demonstrate two multiview video coding scenarios in which view synthesis prediction is employed. In the first scenario, the goal is to improve the coding efficiency of multiview video where block-based depths and correction vectors are encoded by CABAC in a lossless manner on a macroblock basis. A variable block-size depth/motion search algorithm is described. Experimental results demonstrate that view synthesis prediction does provide some coding gains when combined with disparity-compensated prediction. In the second scenario, the goal is to use view synthesis prediction for reducing rate overhead incurred by transmitting depth maps for improved support of 3DTV and free-viewpoint video applications. It is assumed that the complete depth map for each view is encoded separately from the multiview video and used at the receiver to generate intermediate views. We utilize this information for view synthesis prediction to improve overall coding efficiency. Experimental results show that the rate overhead incurred by coding depth maps of varying quality could be offset by utilizing the proposed view synthesis prediction techniques to reduce the bitrate required for coding multiview video.
A_PSQA: Efficient real-time video streaming QoE tool in a future media internet context Quality of Experience (QoE) is the key criterion for evaluating media services. Unlike objective Quality of Service (QoS) metrics, QoE reflects the user experience more accurately as it considers the human visual system and its complex behavior towards distortions in the displayed video sequence. In this paper, we present a new QoE tool solution, named ALICANTE Pseudo Subjective Quality Assessment (A_PSQA). It relies on a No-Reference QoE measuring approach, a hybrid between subjective and objective methods, fully functional in a Future Media Internet context. To validate this approach, we deployed a video streaming platform and considered different video sequences having different characteristics (low/high motion, quality, etc.). We then compared the results of A_PSQA with two Full-Reference methods (SSIM and PSNR) and two No-Reference approaches. The obtained results demonstrate that A_PSQA shows a higher correlation with subjective quality ratings (MOS) than all other methods.
Data compression and harmonic analysis In this paper we review some recent interactions between harmonic analysis and data compression. The story goes back of course to Shannon's R(D) theory in the case of Gaussian stationary processes, which says that transforming into a Fourier basis followed by block coding gives an optimal lossy compression technique; practical developments like transform-based image compression have been inspired by this result. In this paper we also discuss connections perhaps less familiar to the information theory community, growing out of the field of harmonic analysis. Recent harmonic analysis constructions, such as wavelet transforms and Gabor transforms, are essentially optimal transforms for transform coding in certain settings. Some of these transforms are under consideration for future compression standards. We discuss some of the lessons of harmonic analysis in this century. Typically, the problems and achievements of this field have involved goals that were not obviously related to practical data compression, and have used a language not immediately accessible to outsiders. Nevertheless, through an extensive generalization of what Shannon called the “sampling theorem”, harmonic analysis has succeeded in developing new forms of functional representation which turn out to have significant data compression interpretations. We explain why harmonic analysis has interacted with data compression, and we describe some interesting recent ideas in the field that may affect data compression in the future
Speaker Verification Using Adapted Gaussian Mixture Models Reynolds, Douglas A., Quatieri, Thomas F., and Dunn, Robert B., Speaker Verification Using Adapted Gaussian Mixture Models, Digital Signal Processing 10 (2000), 19-41. In this paper we describe the major elements of MIT Lincoln Laboratory's Gaussian mixture model (GMM)-based speaker verification system used successfully in several NIST Speaker Recognition Evaluations (SREs). The system is built around the likelihood ratio test for verification, using simple but effective GMMs for likelihood functions, a universal background model (UBM) for alternative speaker representation, and a form of Bayesian adaptation to derive speaker models from the UBM. The development and use of a handset detector and score normalization to greatly improve verification performance is also described and discussed. Finally, representative performance benchmarks and system behavior experiments on NIST SRE corpora are presented.
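A compact sketch of the scoring step of a GMM-UBM system of this kind: fit a universal background model and a target-speaker model, then score a test utterance by the average log-likelihood ratio. scikit-learn's GaussianMixture is used here for brevity, and the paper's Bayesian (MAP) adaptation of the UBM is only roughly imitated by a short refit initialized from the UBM parameters; the synthetic "features" are placeholders for real speech features.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in "feature vectors" (e.g., MFCC frames); real systems use speech features.
background = rng.normal(size=(5000, 12))
speaker    = rng.normal(loc=0.3, size=(800, 12))
test       = rng.normal(loc=0.3, size=(300, 12))

# 1) Universal background model trained on pooled background data.
ubm = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(background)

# 2) Speaker model: initialized from the UBM and lightly refit on speaker data
#    (a rough stand-in for the MAP adaptation described in the paper).
spk = GaussianMixture(n_components=8, covariance_type="diag", max_iter=3,
                      weights_init=ubm.weights_, means_init=ubm.means_,
                      precisions_init=ubm.precisions_, random_state=0).fit(speaker)

# 3) Verification score: average per-frame log-likelihood ratio,
#    compared against a decision threshold (0.0 is an arbitrary choice here).
llr = spk.score(test) - ubm.score(test)
print("accept" if llr > 0.0 else "reject", llr)
```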
RSPOP: rough set-based pseudo outer-product fuzzy rule identification algorithm. System modeling with neuro-fuzzy systems involves two contradictory requirements: interpretability verses accuracy. The pseudo outer-product (POP) rule identification algorithm used in the family of pseudo outer-product-based fuzzy neural networks (POPFNN) suffered from an exponential increase in the number of identified fuzzy rules and computational complexity arising from high-dimensional data. This decreases the interpretability of the POPFNN in linguistic fuzzy modeling. This article proposes a novel rough set-based pseudo outer-product (RSPOP) algorithm that integrates the sound concept of knowledge reduction from rough set theory with the POP algorithm. The proposed algorithm not only performs feature selection through the reduction of attributes but also extends the reduction to rules without redundant attributes. As many possible reducts exist in a given rule set, an objective measure is developed for POPFNN to correctly identify the reducts that improve the inferred consequence. Experimental results are presented using published data sets and real-world application involving highway traffic flow prediction to evaluate the effectiveness of using the proposed algorithm to identify fuzzy rules in the POPFNN using compositional rule of inference and singleton fuzzifier (POPFNN-CRI(S)) architecture. Results showed that the proposed rough set-based pseudo outer-product algorithm reduces computational complexity, improves the interpretability of neuro-fuzzy systems by identifying significantly fewer fuzzy rules, and improves the accuracy of the POPFNN.
Reweighted minimization model for MR image reconstruction with split Bregman method. Magnetic resonance (MR) image reconstruction is to get a practicable gray-scale image from few frequency domain coefficients. In this paper, different reweighted minimization models for MR image reconstruction are studied, and a novel model named reweighted wavelet+TV minimization model is proposed. By using split Bregman method, an iteration minimization algorithm for solving this new model is obtained, and its convergence is established. Numerical simulations show that the proposed model and its algorithm are feasible and highly efficient.
A game-theoretic multipath routing for video-streaming services over Mobile Ad Hoc Networks The number of portable devices capable of maintaining wireless communications has increased considerably in the last decade. Such mobile nodes may form a spontaneous self-configured network connected by wireless links to constitute a Mobile Ad Hoc Network (MANET). As the number of mobile end users grows the demand of multimedia services, such as video-streaming, in such networks is envisioned to increase as well. One of the most appropriate video coding technique for MANETs is layered MPEG-2 VBR, which used with a proper multipath routing scheme improves the distribution of video streams. In this article we introduce a proposal called g-MMDSR (game theoretic-Multipath Multimedia Dynamic Source Routing), a cross-layer multipath routing protocol which includes a game theoretic approach to achieve a dynamic selection of the forwarding paths. The proposal seeks to improve the own benefits of the users whilst using the common scarce resources efficiently. It takes into account the importance of the video frames in the decoding process, which outperforms the quality of the received video. Our scheme has proved to enhance the performance of the framework and the experience of the end users. Simulations have been carried out to show the benefits of our proposal under different situations where high interfering traffic and mobility of the nodes are present.
1.052597
0.055
0.050388
0.05
0.027702
0.018527
0.006299
0.000129
0.000006
0
0
0
0
0
A Knowledge Based Recommender System with Multigranular Linguistic Information. Recommender systems are applications that have emerged in the e-commerce area in order to assist users in their searches in electronic shops. These shops usually offer a wide range of items to satisfy the necessities of a great variety of users. Nevertheless, searching in such a wide range of items could be a very difficult and tedious task. Recommender systems assist users to find items by means of recommendations based on information provided from different sources such as other users, experts, etc. Most recommender systems force users to provide their preferences or necessities using a unique numerical scale of information fixed in advance. Normally, this information is related to opinions, tastes and perceptions, and therefore it is usually better expressed in a qualitative way, with linguistic terms, than in a quantitative way, with precise numbers. In this contribution, we propose a Knowledge Based Recommender System that uses the fuzzy linguistic approach to define a flexible framework that captures the uncertainty of the user's preferences. Thus, this framework allows users to express their necessities on a different scale, closer to their knowledge, from the one used to describe the items.
Fuzzy Grey GM(1,1) Model Under Fuzzy System The grey GM(1,1) forecasting model is a short-term forecasting method that has been successfully applied to management and engineering problems with as few as four data points. However, when a new system is constructed, the system is uncertain and variable, so the collected data are usually of fuzzy type and cannot be used directly in a grey GM(1,1) forecast. In order to cope with this problem, the fuzzy system derived from the collected data is incorporated through the fuzzy grey controlled variable to derive a fuzzy grey GM(1,1) model that forecasts the extrapolated values under the fuzzy system. Finally, an example is described for illustration.
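For context, a minimal sketch of the crisp GM(1,1) model that the fuzzy variant builds on: accumulate the series (1-AGO), estimate the development coefficient a and grey input b by least squares, and extrapolate. The fuzzification of the grey controlled variable proposed in the paper is not reproduced here, and the sample series is arbitrary.

```python
import numpy as np

def gm11_forecast(x0: np.ndarray, steps: int = 1) -> np.ndarray:
    """Classic GM(1,1) forecast from a short positive series x0 (>= 4 values)."""
    n = len(x0)
    x1 = np.cumsum(x0)                                # 1-AGO accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                     # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # grey parameters a, b
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a # response of the whitened ODE
    return np.diff(x1_hat, prepend=0.0)[n:]           # inverse AGO; keep the forecasts

print(gm11_forecast(np.array([2.87, 3.28, 3.34, 3.77]), steps=2))
```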
Modeling vagueness in information retrieval This paper reviews some applications of fuzzy set theory to model flexible information retrieval systems, i.e., systems that can represent and interpret the vagueness typical of human communication and reasoning. The paper focuses on the following topics: a description of fuzzy indexing procedures defined to represent structured documents, the definition of flexible query languages which allow the expression of vague selection conditions, and some fuzzy associative retrieval mechanisms based on fuzzy pseudo-thesauri of terms and fuzzy clustering techniques.
A linguistic decision support model for QoS priorities in networking Networking resources and technologies are mission-critical in organizations, companies, universities, etc. Their relevance implies the necessity of including tools for Quality of Service (QoS) that assure the performance of such critical services. To address this problem and guarantee a sufficient bandwidth transmission for critical applications/services, different strategies and QoS tools based on the administrator's knowledge may be used. However it is common that network administrators might have a nonrealistic view about the needs of users and organizations. Consequently it seems convenient to take into account such users' necessities for traffic prioritization even though they could involve uncertainty and subjectivity. This paper proposes a linguistic decision support model for traffic prioritization in networking, which uses a group decision making process that gathers user's needs in order to improve organizational productivity. This model manages the inherent uncertainty, imprecision and vagueness of users' necessities, modeling the information by means of linguistic information and offering a flexible framework that provides multiple linguistic scales to the experts, according to their degree of knowledge. Thereby, this decision support model will consist of two processes: (i) A linguistic decision analysis process that evaluates and assesses priorities for QoS of the network services according to users and organizations' necessities. (ii) A priority assignment process that sets up the network traffic in agreement with the previous values.
Applying multi-objective evolutionary algorithms to the automatic learning of extended Boolean queries in fuzzy ordinal linguistic information retrieval systems The performance of information retrieval systems (IRSs) is usually measured using two different criteria, precision and recall. Precision is the ratio of the relevant documents retrieved by the IRS in response to a user's query to the total number of documents retrieved, whilst recall is the ratio of the number of relevant documents retrieved to the total number of relevant documents for the user's query that exist in the documentary database. In fuzzy ordinal linguistic IRSs (FOLIRSs), where extended Boolean queries are used, defining the user's queries in a manual way is usually a complex task. In this contribution, our interest is focused on the automatic learning of extended Boolean queries in FOLIRSs by means of multi-objective evolutionary algorithms considering both mentioned performance criteria. We present an analysis of two well-known general-purpose multi-objective evolutionary algorithms to learn extended Boolean queries in FOLIRSs. These evolutionary algorithms are the non-dominated sorting genetic algorithm (NSGA-II) and the strength Pareto evolutionary algorithm (SPEA2).
Toward developing agility evaluation of mass customization systems using 2-tuple linguistic computing Mass customization (MC) relates to the ability to provide individually designed products and services to every customer through high process flexibility and integration. To respond to the mass customization trend it is necessary to develop an agility-based manufacturing system that captures the traits involved in MC. An MC manufacturing agility evaluation approach based on the concepts of TOPSIS is proposed, analyzing the agility of organization management, product design, processing manufacture, partnership formation capability and integration of the information system. The proposed method uses 2-tuple fuzzy linguistic computing to transform the heterogeneous information assessed by multiple experts into an identical decision domain. It is expected to aggregate the experts' heterogeneous information and offer sufficient and conclusive information for evaluating the agile manufacturing alternatives, so that a suitable agile system for implementing MC can be established.
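For orientation, a minimal sketch of the 2-tuple linguistic representation used in approaches of this kind (Herrera and Martínez style): a numeric value beta in [0, g] is encoded as a pair (s_i, alpha) of a term index and a symbolic translation. The term set size, function names and example ratings below are illustrative assumptions, not taken from the paper.

```python
# Minimal 2-tuple linguistic computing sketch; names and data are illustrative.
def to_2tuple(beta, g):
    """Convert a value beta in [0, g] into a 2-tuple (term index i, translation alpha)."""
    beta = min(max(beta, 0.0), g)       # clamp to the term-set range
    i = int(round(beta))
    alpha = round(beta - i, 4)          # symbolic translation in [-0.5, 0.5)
    return i, alpha

def from_2tuple(i, alpha):
    """Inverse transformation: recover the numeric value beta = i + alpha."""
    return i + alpha

def aggregate(tuples, g):
    """Arithmetic-mean aggregation of 2-tuples, returning a 2-tuple again."""
    beta = sum(from_2tuple(i, a) for i, a in tuples) / len(tuples)
    return to_2tuple(beta, g)

# Example: three experts rate an alternative on S = {s_0, ..., s_6}
ratings = [(4, 0.0), (5, -0.2), (3, 0.3)]
print(aggregate(ratings, g=6))          # -> (4, 0.0333)
```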
Dealing with heterogeneous information in engineering evaluation processes Before selecting a design for a large engineering system, several design proposals are evaluated by studying different key aspects. In such a design assessment process, different criteria need to be evaluated, which can be of both a quantitative and a qualitative nature, and the knowledge provided by experts may be vague and/or incomplete. Consequently, the assessment problems may include different types of information (numerical, linguistic, interval-valued). Experts are usually forced to provide knowledge in the same domain and scale, resulting in higher levels of uncertainty. In this paper, we propose a flexible framework that can be used to model the assessment problems in different domains and scales. A fuzzy evaluation process in the proposed framework is investigated to deal with uncertainty and manage heterogeneous information in engineering evaluation processes.
Computing with words in decision making: foundations, trends and prospects Computing with Words (CW) methodology has been used in several different environments to narrow the differences between human reasoning and computing. As Decision Making is a typical human mental process, it seems natural to apply the CW methodology in order to create and enrich decision models in which the information that is provided and manipulated has a qualitative nature. In this paper we make a review of the developments of CW in decision making. We begin with an overview of the CW methodology and we explore different linguistic computational models that have been applied to the decision making field. Then we present an historical perspective of CW in decision making by examining the pioneer papers in the field along with its most recent applications. Finally, some current trends, open questions and prospects in the topic are pointed out.
A comparative analysis of score functions for multiple criteria decision making in intuitionistic fuzzy settings The purpose of this paper was to conduct a comparative study of score functions in multiple criteria decision analysis based on intuitionistic fuzzy sets. The concept of score functions has been conceptualized and widely applied to multi-criteria decision-making problems. There are several types of score functions that can identify the mixed results of positive and negative parts in a bi-dimensional framework of intuitionistic fuzzy sets. Considering various perspectives on score functions, the present study adopts an order of preference based on similarity to the ideal solution as the main structure to estimate the importance of different criteria and compute optimal multi-criteria decisions in intuitionistic fuzzy evaluation settings. An experimental analysis is conducted to examine the relationship between the results yielded by different score functions, considering the average Spearman correlation coefficients and contradiction rates. Furthermore, additional discussions clarify the relative differences in the ranking orders obtained from different combinations of numbers of alternatives and criteria as well as different importance conditions.
Uncertainty measures for interval type-2 fuzzy sets Fuzziness (entropy) is a commonly used measure of uncertainty for type-1 fuzzy sets. For interval type-2 fuzzy sets (IT2 FSs), centroid, cardinality, fuzziness, variance and skewness are all measures of uncertainties. The centroid of an IT2 FS has been defined by Karnik and Mendel. In this paper, the other four concepts are defined. All definitions use a Representation Theorem for IT2 FSs. Formulas for computing the cardinality, fuzziness, variance and skewness of an IT2 FS are derived. These definitions should be useful in IT2 fuzzy logic systems design using the principles of uncertainty, and in measuring the similarity between two IT2 FSs.
A probabilistic definition of a nonconvex fuzzy cardinality The existing methods to assess the cardinality of a fuzzy set with finite support are intended to preserve the properties of classical cardinality. In particular, the main objective of researchers in this area has been to ensure the convexity of fuzzy cardinalities, in order to preserve some properties based on the addition of cardinalities, such as the additivity property. We have found that in order to solve many real-world problems, such as the induction of fuzzy rules in Data Mining, convex cardinalities are not always appropriate. In this paper, we propose a possibilistic and a probabilistic cardinality of a fuzzy set with finite support. These cardinalities are not convex in general, but they are well suited to solving such problems and, contrary to the prevailing opinion, they turn out to be more intuitive for humans. Their suitability relies mainly on the fact that they assume dependency among objects with respect to the property "to be in a fuzzy set". The cardinality measures are generalized to relative ones among pairs of fuzzy sets. We also introduce a definition of the entropy of a fuzzy set by using one of our probabilistic measures. Finally, a fuzzy ranking of the cardinality of fuzzy sets is proposed, and a definition of graded equipotency is introduced.
A large-scale study of failures in high-performance computing systems Designing highly dependable systems requires a good understanding of failure characteristics. Unfortunately, little raw data on failures in large IT installations is publicly available. This paper analyzes failure data recently made publicly available by one of the largest high-performance computing sites. The data has been collected over the past 9 years at Los Alamos National Laboratory and includes 23,000 failures recorded on more than 20 different systems, mostly large clusters of SMP and NUMA nodes. We study the statistics of the data, including the root cause of failures, the mean time between failures, and the mean time to repair. We find, for example, that average failure rates differ wildly across systems, ranging from 20 to 1000 failures per year, and that time between failures is modeled well by a Weibull distribution with decreasing hazard rate. From one system to another, mean repair time varies from less than an hour to more than a day, and repair times are well modeled by a lognormal distribution.
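The distributional findings (Weibull time-between-failures with decreasing hazard, lognormal repair times) can be reproduced on one's own failure logs with standard fitting routines. A minimal sketch with scipy; the synthetic arrays below merely stand in for real failure records and their parameters are arbitrary assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-ins for time-between-failures (hours) and repair times (hours);
# in practice these would be loaded from the published failure records.
tbf = stats.weibull_min.rvs(c=0.7, scale=800, size=2000, random_state=rng)
repair = stats.lognorm.rvs(s=1.2, scale=4.0, size=2000, random_state=rng)

# Fit a Weibull to time-between-failures; a shape parameter c < 1 means a
# decreasing hazard rate, as reported in the study.
c, loc, scale = stats.weibull_min.fit(tbf, floc=0)
print(f"Weibull shape = {c:.2f} (decreasing hazard if < 1), scale = {scale:.1f} h")

# Fit a lognormal to repair times.
s, loc, scale = stats.lognorm.fit(repair, floc=0)
print(f"lognormal sigma = {s:.2f}, median repair time = {scale:.1f} h")
```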
A Target-Based Decision-Making Approach to Consumer-Oriented Evaluation Model for Japanese Traditional Crafts This paper deals with the evaluation of Japanese traditional crafts, in which product items are assessed according to the so-called “Kansei” features by means of the semantic differential method. For traditional crafts, decisions on which items to buy or use are usually influenced by personal feelings/characteristics; therefore, we shall propose a consumer-oriented evaluation model targeting these specific requests by consumers. Particularly, given a consumer's request, the proposed model aims to define an evaluation function that quantifies how well a product item meets the consumer's feeling preferences. An application to evaluating patterns of Kutani porcelain is conducted to illustrate how the proposed evaluation model works, in practice.
Thermal switching error versus delay tradeoffs in clocked QCA circuits The quantum-dot cellular automata (QCA) model offers a novel nano-domain computing architecture by mapping the intended logic onto the lowest energy configuration of a collection of QCA cells, each with two possible ground states. A four-phased clocking scheme has been suggested to keep the computations at the ground state throughout the circuit. This clocking scheme, however, induces latency or delay in the transmission of information from input to output. In this paper, we study the interplay of computing error behavior with delay or latency of computation induced by the clocking scheme. Computing errors in QCA circuits can arise due to the failure of the clocking scheme to switch portions of the circuit to the ground state with change in input. Some of these non-ground states will result in output errors and some will not. The larger the size of each clocking zone, i.e., the greater the number of cells in each zone, the higher the probability of computing errors. However, larger clocking zones imply faster propagation of information from input to output, i.e., reduced delay. Current QCA simulators compute just the ground state configuration of a QCA arrangement. In this paper, we offer an efficient method to compute the N-lowest energy modes of a clocked QCA circuit. We model the QCA cell arrangement in each zone using a graph-based probabilistic model, which is then transformed into a Markov tree structure defined over subsets of QCA cells. This tree structure allows us to compute the N-lowest energy configurations in an efficient manner by local message passing. We analyze the complexity of the model and show it to be polynomial in terms of the number of cells, assuming a finite neighborhood of influence for each QCA cell, which is usually the case. The overall low-energy spectrum of multiple clocking zones is constructed by concatenating the low-energy spectra of the individual clocking zones. We demonstrate how the model can be used to study the tradeoff between switching errors and clocking zones.
1.026533
0.026721
0.02658
0.025768
0.01347
0.010381
0.00496
0.001496
0.000172
0.000043
0.000001
0
0
0
Adaptive stochastic Galerkin FEM with hierarchical tensor representations. The solution of PDE with stochastic data commonly leads to very high-dimensional algebraic problems, e.g. when multiplicative noise is present. The Stochastic Galerkin FEM considered in this paper then suffers from the curse of dimensionality. This is directly related to the number of random variables required for an adequate representation of the random fields included in the PDE. With the presented new approach, we circumvent this major complexity obstacle by combining two highly efficient model reduction strategies, namely a modern low-rank tensor representation in the tensor train format of the problem and a refinement algorithm on the basis of a posteriori error estimates to adaptively adjust the different employed discretizations. The adaptive adjustment includes the refinement of the FE mesh based on a residual estimator, the problem-adapted stochastic discretization in anisotropic Legendre Wiener chaos and the successive increase of the tensor rank. Computable a posteriori error estimators are derived for all error terms emanating from the discretizations and the iterative solution with a preconditioned ALS scheme of the problem. Strikingly, it is possible to exploit the tensor structure of the problem to evaluate all error terms very efficiently. A set of benchmark problems illustrates the performance of the adaptive algorithm with higher-order FE. Moreover, the influence of the tensor rank on the approximation quality is investigated.
Transport Map Accelerated Markov Chain Monte Carlo We introduce a new framework for efficient sampling from complex probability distributions, using a combination of transport maps and the Metropolis-Hastings rule. The core idea is to use deterministic couplings to transform typical Metropolis proposal mechanisms (e.g., random walks, Langevin methods) into non-Gaussian proposal distributions that can more effectively explore the target density. Our approach adaptively constructs a lower triangular transport map, an approximation of the Knothe-Rosenblatt rearrangement, using information from previous Markov chain Monte Carlo (MCMC) states, via the solution of an optimization problem. This optimization problem is convex regardless of the form of the target distribution and can be solved efficiently without gradient information from the target probability distribution; the target distribution is instead represented via samples. Sequential updates enable efficient and parallelizable adaptation of the map even for large numbers of samples. We show that this approach uses inexact or truncated maps to produce an adaptive MCMC algorithm that is ergodic for the exact target distribution. Numerical demonstrations on a range of parameter inference problems show order-of-magnitude speedups over standard MCMC techniques, measured by the number of effectively independent samples produced per target density evaluation and per unit of wallclock time.
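For reference, the lower-triangular (Knothe-Rosenblatt) structure that such an adaptive map approximates can be written as below; the notation is ours, not the paper's, and whether the map or its inverse is applied inside the proposal is a detail of the particular scheme.

```latex
% Lower-triangular (Knothe--Rosenblatt) structure of a transport map T that
% couples a reference density \rho with the target \pi (notation is ours).
\[
  T(x) \;=\;
  \begin{pmatrix}
    T_1(x_1) \\
    T_2(x_1, x_2) \\
    \vdots \\
    T_d(x_1, \dots, x_d)
  \end{pmatrix},
  \qquad
  T_{\sharp}\,\rho \;\approx\; \pi .
\]
```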
Non-intrusive Tensor Reconstruction for High-Dimensional Random PDEs. This paper examines a completely non-intrusive, sample-based method for the computation of functional low-rank solutions of high-dimensional parametric random PDEs, which have become an area of intensive research in Uncertainty Quantification (UQ). In order to obtain a generalized polynomial chaos representation of the approximate stochastic solution, a novel black-box rank-adapted tensor reconstruction procedure is proposed. The performance of the described approach is illustrated with several numerical examples and compared to (Quasi-)Monte Carlo sampling.
Bayesian inference with optimal maps We present a new approach to Bayesian inference that entirely avoids Markov chain simulation, by constructing a map that pushes forward the prior measure to the posterior measure. Existence and uniqueness of a suitable measure-preserving map is established by formulating the problem in the context of optimal transport theory. We discuss various means of explicitly parameterizing the map and computing it efficiently through solution of an optimization problem, exploiting gradient information from the forward model when possible. The resulting algorithm overcomes many of the computational bottlenecks associated with Markov chain Monte Carlo. Advantages of a map-based representation of the posterior include analytical expressions for posterior moments and the ability to generate arbitrary numbers of independent posterior samples without additional likelihood evaluations or forward solves. The optimization approach also provides clear convergence criteria for posterior approximation and facilitates model selection through automatic evaluation of the marginal likelihood. We demonstrate the accuracy and efficiency of the approach on nonlinear inverse problems of varying dimension, involving the inference of parameters appearing in ordinary and partial differential equations.
A Sparse Grid Stochastic Collocation Method for Partial Differential Equations with Random Input Data This work proposes and analyzes a Smolyak-type sparse grid stochastic collocation method for the approximation of statistical quantities related to the solution of partial differential equations with random coefficients and forcing terms (input data of the model). To compute solution statistics, the sparse grid stochastic collocation method uses approximate solutions, produced here by finite elements, corresponding to a deterministic set of points in the random input space. This naturally requires solving uncoupled deterministic problems as in the Monte Carlo method. If the number of random variables needed to describe the input data is moderately large, full tensor product spaces are computationally expensive to use due to the curse of dimensionality. In this case the sparse grid approach is still expected to be competitive with the classical Monte Carlo method. Therefore, it is of major practical relevance to understand in which situations the sparse grid stochastic collocation method is more efficient than Monte Carlo. This work provides error estimates for the fully discrete solution using $L^q$ norms and analyzes the computational efficiency of the proposed method. In particular, it demonstrates algebraic convergence with respect to the total number of collocation points and quantifies the effect of the dimension of the problem (number of input random variables) in the final estimates. The derived estimates are then used to compare the method with Monte Carlo, indicating for which problems the former is more efficient than the latter. Computational evidence complements the present theory and shows the effectiveness of the sparse grid stochastic collocation method compared to full tensor and Monte Carlo approaches.
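The Smolyak construction underlying such sparse grids combines tensor products of one-dimensional interpolation or quadrature operators $U^{i}$. In a standard form (notation ours, not taken from the paper):

```latex
% Smolyak sparse-grid operator built from one-dimensional operators U^{i}
% (standard combination-technique form; notation is ours).
\[
  \mathcal{A}(q, d) \;=\;
  \sum_{\,q-d+1 \,\le\, |\mathbf{i}| \,\le\, q\,}
  (-1)^{\,q - |\mathbf{i}|}\,
  \binom{d-1}{\,q - |\mathbf{i}|\,}
  \bigl( U^{i_1} \otimes \cdots \otimes U^{i_d} \bigr),
  \qquad |\mathbf{i}| = i_1 + \cdots + i_d .
\]
```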
Statistical timing based on incomplete probabilistic descriptions of parameter uncertainty Existing approaches to timing analysis under uncertainty are based on restrictive assumptions. Statistical STA techniques assume that the full probabilistic distribution of parameter uncertainty is available; in reality, the complete probabilistic description often cannot be obtained. In this paper, a new paradigm for parameter uncertainty description is proposed as a way to consistently and rigorously handle partially available descriptions of parameter uncertainty. The paradigm is based on a theory of interval probabilistic models that permit handling uncertainty that is described in a distribution-free mode - just via the range, the mean, and the variance. This permits effectively handling multiple real-life challenges, including imprecise and limited information about the distributions of process parameters, parameters coming from different populations, and the sources of uncertainty that are too difficult to handle via full probabilistic measures (e.g. on-chip supply voltage variation). Specifically, analytical techniques for bounding the distributions of probabilistic interval variables are proposed. In addition, a provably correct strategy for fast Monte Carlo simulation based on probabilistic interval variables is introduced. A path-based timing algorithm implementing the novel modeling paradigm, as well as handling the traditional variability descriptions, has been developed. The results indicate the proposed algorithm can improve the upper bound of the 90th-percentile circuit delay, on average, by 5.3% across the ISCAS'85 benchmark circuits, compared to the worst-case timing estimates that use only the interval information of the partially specified parameters.
Some Defects in Finite-Difference Edge Finders This work illustrates and explains various artifacts in the output of five finite difference edge finders, those of J.F. Canny (1983, 1986), R.A. Boie et al. (1986) and R.A. Boie and I.J. Cox (1987), and three variations on that of D. Marr and E.C. Hildreth (1980), reimplemented with a common output format and method of noise suppression. These artifacts include gaps in boundaries, spurious boundaries, and deformation of region shape.
A Tutorial on Support Vector Machines for Pattern Recognition The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
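As a concrete companion to such a tutorial, here is a minimal nonlinear SVM with an RBF kernel using scikit-learn; the toy dataset and hyperparameters are arbitrary choices for illustration, not anything prescribed by the tutorial itself.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy nonseparable 2-D data; the RBF kernel implicitly maps it to a space
# where a maximum-margin separating hyperplane exists.
X, y = make_moons(n_samples=400, noise=0.25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # soft-margin SVM
clf.fit(X_tr, y_tr)
print("support vectors per class:", clf.n_support_)
print("test accuracy:", clf.score(X_te, y_te))
```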
Reconstruction of a low-rank matrix in the presence of Gaussian noise. This paper addresses the problem of reconstructing a low-rank signal matrix observed with additive Gaussian noise. We first establish that, under mild assumptions, one can restrict attention to orthogonally equivariant reconstruction methods, which act only on the singular values of the observed matrix and do not affect its singular vectors. Using recent results in random matrix theory, we then propose a new reconstruction method that aims to reverse the effect of the noise on the singular value decomposition of the signal matrix. In conjunction with the proposed reconstruction method we also introduce a Kolmogorov–Smirnov based estimator of the noise variance.
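An orthogonally equivariant reconstruction of the kind discussed here acts only on the singular values of the observed matrix. The sketch below uses plain soft-thresholding of singular values with a generic universal-threshold heuristic; this is a stand-in for the paper's estimator, not a reimplementation of it.

```python
import numpy as np

def shrink_singular_values(Y, tau):
    """Orthogonally equivariant denoising: soft-threshold the singular values
    of the observed matrix Y while leaving its singular vectors untouched."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
signal = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))  # rank-3 signal
Y = signal + 0.3 * rng.standard_normal((50, 40))                      # additive noise

# Generic threshold sigma * (sqrt(m) + sqrt(n)); an assumption, not the paper's rule.
X_hat = shrink_singular_values(Y, tau=0.3 * (np.sqrt(50) + np.sqrt(40)))
print("relative error:", np.linalg.norm(X_hat - signal) / np.linalg.norm(signal))
```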
Proceedings of the 41st Design Automation Conference, DAC 2004, San Diego, CA, USA, June 7-11, 2004
Compressive sampling for streaming signals with sparse frequency content Compressive sampling (CS) has emerged as a significant signal processing framework to acquire and reconstruct sparse signals at rates significantly below the Nyquist rate. However, most of the CS development to date has focused on finite-length signals and representations. In this paper we discuss a streaming CS framework and greedy reconstruction algorithm, the Streaming Greedy Pursuit (SGP), to reconstruct signals with sparse frequency content. Our proposed sampling framework and the SGP are explicitly intended for streaming applications and signals of unknown length. The measurement framework we propose is designed to be causal and implementable using existing hardware architectures. Furthermore, our reconstruction algorithm provides specific computational guarantees, which makes it appropriate for real-time system implementations. Our experiment results on very long signals demonstrate the good performance of the SGP and validate our approach.
Directional relative position between objects in image processing: a comparison between fuzzy approaches The importance of describing relationships between objects has been highlighted in works in very different areas, including image understanding. Among these relationships, directional relative position relations are important since they provide an important information about the spatial arrangement of objects in the scene. Such concepts are rather ambiguous, they defy precise definitions, but human beings have a rather intuitive and common way of understanding and interpreting them. Therefore in this context, fuzzy methods are appropriate to provide consistent definitions that integrate both quantitative and qualitative knowledge, thus providing a computational representation and interpretation of imprecise spatial relations, expressed in a linguistic way, and including quantitative knowledge. Several fuzzy approaches have been developed in the literature, and the aim of this paper is to review and compare them according to their properties and according to the types of questions they seek to answer.
Fuzzy modeling of system behavior for risk and reliability analysis The main objective of the article is to permit reliability analysts/engineers/managers/practitioners to analyze the failure behavior of a system in a more consistent and logical manner. To this effect, the authors propose a methodological and structured framework, which makes use of both qualitative and quantitative techniques for risk and reliability analysis of the system. The framework has been applied to model and analyze a complex industrial system from a paper mill. In the quantitative framework, after developing the Petri net model of the system, the fuzzy synthesis of failure and repair data (using fuzzy arithmetic operations) has been done. Various system parameters of managerial importance such as repair time, failure rate, mean time between failures, availability, and expected number of failures are computed to quantify the behavior in terms of fuzzy, crisp and defuzzified values. Further, to improve upon the reliability and maintainability characteristics of the system, an in-depth qualitative analysis of the system is carried out using failure mode and effect analysis (FMEA) by listing all possible failure modes, their causes and their effect on system performance. To address the limitations of the traditional FMEA method based on the risk priority number score, a risk ranking approach based on fuzzy and Grey relational analysis is proposed to prioritize failure causes.
Split Bregman iterative algorithm for sparse reconstruction of electrical impedance tomography In this paper, we present an evaluation of the use of the split Bregman iterative algorithm for the L1-norm regularized inverse problem of electrical impedance tomography. Simulations are performed to validate that our algorithm is competitive in terms of imaging quality and computational speed in comparison with several state-of-the-art algorithms. Results also indicate that, in contrast to the conventional L2-norm regularization method and the total variation (TV) regularization method, the L1-norm regularization method can sharpen the edges and is more robust against data noise.
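For orientation, the generic split Bregman iteration for an L1-regularized problem of the form min_u H(u) + ||Phi u||_1 alternates the steps below (the general Goldstein-Osher form; the EIT-specific operators and the paper's exact formulation are not reproduced here).

```latex
% Generic split Bregman iteration for  min_u  H(u) + \|\Phi u\|_1
% (after Goldstein--Osher; EIT-specific details omitted).
\[
\begin{aligned}
  u^{k+1} &= \arg\min_u \; H(u) + \tfrac{\lambda}{2}\,\bigl\| d^{k} - \Phi u - b^{k} \bigr\|_2^2, \\
  d^{k+1} &= \operatorname{shrink}\!\bigl(\Phi u^{k+1} + b^{k},\, 1/\lambda\bigr),
  \qquad \operatorname{shrink}(x,\gamma) = \operatorname{sign}(x)\,\max(|x|-\gamma,\, 0), \\
  b^{k+1} &= b^{k} + \Phi u^{k+1} - d^{k+1}.
\end{aligned}
\]
```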
1.2
0.2
0.2
0.02
0.001087
0
0
0
0
0
0
0
0
0
POPFNN-AAR(S): a pseudo outer-product based fuzzy neural network A novel fuzzy neural network, the pseudo outer-product-based fuzzy neural network using the singleton fuzzifier together with the approximate analogical reasoning schema, is proposed in this paper. The network is referred to as the singleton fuzzifier POPFNN-AARS. The singleton fuzzifier POPFNN-AARS employs the approximate analogical reasoning schema (AARS) instead of the commonly used truth value restriction (TVR) method. This makes the structure and learning algorithms of the singleton fuzzifier POPFNN-AARS simpler and conceptually clearer than those of the POPFNN-TVR model. Different similarity measures (SM) and modification functions (FM) for AARS are investigated. The structures and learning algorithms of the proposed singleton fuzzifier POPFNN-AARS are presented. Several sets of real-life data are used to test the performance of the singleton fuzzifier POPFNN-AARS, and the experimental results are presented for detailed discussion.
Sugeno controllers with a bounded number of rules are nowhere dense In the literature various results can be found claiming that fuzzy controllers are universal approximators. In terms of topology this means that fuzzy controllers, as subsets of adequate function spaces, are dense. In this paper the topological structure of fuzzy controllers composed of a bounded number of rules is investigated. It turns out that these sets are nowhere dense (a topological notion indicating that the sets are "almost discrete"). This means that it is just the number of rules, and not, e.g., the great variety of parameters of fuzzy controllers, that makes fuzzy controllers universal approximators.
Interpolation of fuzzy if-then rules by neural networks A number of approaches have been proposed for implementing fuzzy if-then rules with trainable multilayer feedforward neural networks. In these approaches, learning of neural networks is performed for fuzzy inputs and fuzzy targets. Because the standard back-propagation (BP) algorithm cannot be directly applied to fuzzy data, transformation of fuzzy data into non-fuzzy data or modification of the learning algorithm is required. Therefore the approaches for implementing fuzzy if-then rules can be classified into two main categories: introduction of preprocessors of fuzzy data and modification of the learning algorithm. In the first category, the standard BP algorithm can be employed after generating non-fuzzy data from fuzzy data by preprocessors. Two kinds of preprocessors based on membership values and level sets are examined in this paper. In the second category, the standard BP algorithm is modified to directly handle the level sets (i.e., intervals) of fuzzy data. This paper examines the ability of each approach to interpolate sparse fuzzy if-then rules. By computer simulations, high fitting ability of approaches in the first category and high interpolating ability of those in the second category are demonstrated.
Modeling and formulating fuzzy knowledge bases using neural networks We show how the determination of the firing level of a neuron can be viewed as a measure of possibility between two fuzzy sets, the weights of connection and the input. We then suggest a way to represent fuzzy production rules in a neural framework. Central to this representation is the notion that the linguistic variables associated with the rule, the antecedent and consequent values, are represented as weights in the resulting neural structure. The structure used to represent these fuzzy rules allows learning of the membership grades of the associated linguistic variables. A self-organization procedure for obtaining the nucleus of rules for a fuzzy knowledge base is presented.
Measures of similarity among fuzzy concepts: A comparative analysis Many measures of similarity among fuzzy sets have been proposed in the literature, and some have been incorporated into linguistic approximation procedures. The motivations behind these measures are both geometric and set-theoretic. We briefly review 19 such measures and compare their performance in a behavioral experiment. For crudely categorizing pairs of fuzzy concepts as either “similar” or “dissimilar,” all measures performed well. For distinguishing between degrees of similarity or dissimilarity, certain measures were clearly superior and others were clearly inferior; for a few subjects, however, none of the distance measures adequately modeled their similarity judgments. Measures that account for ordering on the base variable proved to be more highly correlated with subjects' actual similarity judgments. And, surprisingly, the best measures were ones that focus on only one “slice” of the membership function. Such measures are easiest to compute and may provide insight into the way humans judge similarity among fuzzy concepts.
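Two of the set-theoretic and geometric families compared in such studies are easy to state for finite membership vectors. A small numpy sketch; the measure names are informal labels and the membership grades are made up for illustration.

```python
import numpy as np

A = np.array([0.1, 0.4, 0.8, 1.0, 0.6, 0.2])   # membership grades of concept A
B = np.array([0.0, 0.3, 0.7, 0.9, 0.7, 0.3])   # membership grades of concept B

# Set-theoretic similarity: ratio of intersection cardinality to union cardinality.
s_setlike = np.minimum(A, B).sum() / np.maximum(A, B).sum()

# Geometric (distance-based) similarity: one minus the normalized L1 distance.
s_distance = 1.0 - np.abs(A - B).mean()

print(f"set-theoretic: {s_setlike:.3f}, distance-based: {s_distance:.3f}")
```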
Matrix Equations and Normal Forms for Context-Free Grammars The relationship between the set of productions of a context-free grammar and the corresponding set of defining equations is first pointed out. The closure operation on a matrix of strings is defined and this concept is used to formalize the solution to a set of linear equations. A procedure is then given for rewriting a context-free grammar in Greibach normal form, where the replacement string of each production begins with a terminal symbol. An additional procedure is given for rewriting the grammar so that each replacement string both begins and ends with a terminal symbol. Neither procedure requires the evaluation of regular expressions over the total vocabulary of the grammar, as is required by Greibach's procedure.
Applications of type-2 fuzzy logic systems to forecasting of time-series In this paper, we begin with a type-1 fuzzy logic system (FLS) trained with noisy data. We then demonstrate how information about the noise in the training data can be incorporated into a type-2 FLS, which can be used to obtain bounds within which the true (noise-free) output is likely to lie. We do this with the example of a one-step predictor for the Mackey-Glass chaotic time-series [M.C. Mackey, L. Glass, Oscillation and chaos in physiological control systems, Science 197 (1977) 287-289]. We also demonstrate how a type-2 FLS can be used to obtain better predictions than those obtained with a type-1 FLS.
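The Mackey-Glass benchmark series can be generated directly from the delay differential equation dx/dt = beta*x(t-tau)/(1 + x(t-tau)^n) - gamma*x(t). A crude forward-Euler sketch follows; the parameter values (beta=0.2, gamma=0.1, n=10, tau=17) are the commonly used defaults and are assumptions here, as is the coarse step size.

```python
import numpy as np

def mackey_glass(n_steps=3000, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0, x0=1.2):
    """Crude forward-Euler integration of the Mackey-Glass delay equation
    dx/dt = beta * x(t - tau) / (1 + x(t - tau)**n) - gamma * x(t)."""
    history = int(tau / dt)
    x = np.full(n_steps + history, x0)          # constant initial history
    for t in range(history, n_steps + history - 1):
        x_tau = x[t - history]
        x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau**n) - gamma * x[t])
    return x[history:]

series = mackey_glass()
print(series[:5])   # chaotic behavior for tau >= 17 with these parameters
```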
Inpainting and Zooming Using Sparse Representations Representing the image to be inpainted in an appropriate sparse representation dictionary, and combining elements from Bayesian statistics and modern harmonic analysis, we introduce an expectation maximization (EM) algorithm for image inpainting and interpolation. From a statistical point of view, the inpainting/interpolation can be viewed as an estimation problem with missing data. Toward this goal, we propose the idea of using the EM mechanism in a Bayesian framework, where a sparsity promoting prior penalty is imposed on the reconstructed coefficients. The EM framework gives a principled way to establish formally the idea that missing samples can be recovered/interpolated based on sparse representations. We first introduce an easy and efficient sparse-representation-based iterative algorithm for image inpainting. Additionally, we derive its theoretical convergence properties. Compared to its competitors, this algorithm allows a high degree of flexibility to recover different structural components in the image (piecewise smooth, curvilinear, texture, etc.). We also suggest some guidelines to automatically tune the regularization parameter.
A method based on PSO and granular computing of linguistic information to solve group decision making problems defined in heterogeneous contexts. • Information granulation of linguistic information used in group decision making. • Granular Computing is used to make the linguistic information operational. • Linguistic information is expressed in terms of information granules defined as sets. • The granulation of the linguistic terms is formulated as an optimization problem. • The distribution and semantics of the linguistic terms are not assumed a priori.
Genetic tuning of fuzzy rule deep structures preserving interpretability and its interaction with fuzzy rule set reduction Tuning fuzzy rule-based systems for linguistic fuzzy modeling is an interesting and widely developed task. It involves adjusting some of the components of the knowledge base without completely redefining it. This contribution introduces a genetic tuning process for jointly fitting the fuzzy rule symbolic representations and the meaning of the involved membership functions. To adjust the former component, we propose the use of linguistic hedges to perform slight modifications keeping a good interpretability. To alter the latter component, two different approaches changing their basic parameters and using nonlinear scaling factors are proposed. As the accomplished experimental study shows, the good performance of our proposal mainly lies in the consideration of this tuning approach performed at two different levels of significance. The paper also analyzes the interaction of the proposed tuning method with a fuzzy rule set reduction process. A good interpretability-accuracy tradeoff is obtained combining both processes with a sequential scheme: first reducing the rule set and subsequently tuning the model.
Estimation of (near) low-rank matrices with noise and high-dimensional scaling We study an instance of high-dimensional inference in which the goal is to estimate a matrix $\Theta^* \in \mathbb{R}^{m_1 \times m_2}$ on the basis of N noisy observations. The unknown matrix $\Theta^*$ is assumed to be either exactly low rank, or "near" low-rank, meaning that it can be well-approximated by a matrix with low rank. We consider a standard M-estimator based on regularization by the nuclear or trace norm over matrices, and analyze its performance under high-dimensional scaling. We define the notion of restricted strong convexity (RSC) for the loss function, and use it to derive nonasymptotic bounds on the Frobenius norm error that hold for a general class of noisy observation models, and apply to both exactly low-rank and approximately low-rank matrices. We then illustrate consequences of this general theory for a number of specific matrix models, including low-rank multivariate or multi-task regression, system identification in vector autoregressive processes and recovery of low-rank matrices from random projections. These results involve nonasymptotic random matrix theory to establish that the RSC condition holds, and to determine an appropriate choice of regularization parameter. Simulation results show excellent agreement with the high-dimensional scaling of the error predicted by our theory.
An overview of quality of experience measurement challenges for video applications in IP networks The increase in multimedia content on the Internet has created a renewed interest in quality assessment. There is, however, a main difference from the traditional quality assessment approaches, as the focus now lies on the quality perceived by the user, as opposed to the network-centered approach classically proposed. In this paper we overview the most relevant challenges in performing Quality of Experience (QoE) assessment in IP networks and highlight the particular considerations necessary when compared to alternative mechanisms already deployed, such as Quality of Service (QoS). To assist in handling such challenges we first discuss the different approaches to Quality of Experience assessment along with the most relevant QoE metrics, and then we discuss how they are used to provide objective results about user satisfaction.
Fuzzy concepts and formal methods: some illustrative examples It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However, some system problems, particularly those drawn from the information systems (IS) problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper identifies the problem types of interest and argues that they are characterised by uncertainty and imprecision. It suggests fuzzy set theory as a useful formalism for modelling aspects of this imprecision. The paper illustrates how a fuzzy logic toolkit for Z can be applied to such problem domains. Several examples are presented illustrating the representation of imprecise concepts as fuzzy sets and relations, and soft pre-conditions and system requirements as a series of linguistically quantified propositions.
Fuzzy optimization of units products in mix-product selection problem using fuzzy linear programming approach In this paper, the modified S-curve membership function methodology is used in a real life industrial problem of mix product selection. This problem occurs in production planning management, where a decision maker plays an important role in making decisions in an uncertain environment. As analysts, we try to find a good enough solution for the decision maker to make a final decision. An industrial application of fuzzy linear programming (FLP) through the S-curve membership function has been investigated using a set of real life data collected from a Chocolate Manufacturing Company. The problem of fuzzy product mix selection has been defined. The objective of this paper is to find the optimal units of products with a higher level of satisfaction, with vagueness as a key factor. Since several decisions were to be taken, a table of optimal units of products with respect to vagueness and degree of satisfaction has been defined to identify the solution with a higher level of units of products and a higher degree of satisfaction. The fuzzy outcome shows that higher units of products need not lead to a higher degree of satisfaction. The findings of this work indicate that the optimal decision depends on the vagueness factor in the fuzzy system of the mix product selection problem. Furthermore, the highest level of units of products is obtained when the vagueness is low.
1.112562
0.110125
0.055062
0.019312
0.000202
0.000015
0
0
0
0
0
0
0
0
Implementing Clenshaw-Curtis quadrature, II computing the cosine transformation In a companion paper to this, “I Methodology and Experiences,” the automatic Clenshaw-Curtis quadrature scheme was described and how each quadrature formula used in the scheme requires a cosine transformation of the integrand values was shown. The high cost of these cosine transformations has been a serious drawback in using Clenshaw-Curtis quadrature. Two other problems related to the cosine transformation have also been troublesome. First, the conventional computation of the cosine transformation by recurrence relation is numerically unstable, particularly at the low frequencies which have the largest effect upon the integral. Second, in case the automatic scheme should require refinement of the sampling, storage is required to save the integrand values after the cosine transformation is computed.This second part of the paper shows how the cosine transformation can be computed by a modification of the fast Fourier transform and all three problems overcome. The modification is also applicable in other circumstances requiring cosine or sine transformations, such as polynomial interpolation through the Chebyshev points.
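For reference, Clenshaw-Curtis nodes and weights on [-1, 1] can be formed with a short cosine-sum computation. The sketch below is a direct transcription of the well-known clencurt recipe; an FFT/DCT-based cosine transform, as discussed in the paper, would replace the explicit loop for large n.

```python
import numpy as np

def clencurt(n):
    """Clenshaw-Curtis nodes x and weights w on [-1, 1] with n+1 points
    (direct cosine-sum form; an FFT-based cosine transform would replace
    the explicit loop for large n). Requires n >= 2."""
    theta = np.pi * np.arange(n + 1) / n
    x = np.cos(theta)
    w = np.zeros(n + 1)
    ii = np.arange(1, n)
    v = np.ones(n - 1)
    if n % 2 == 0:
        w[0] = w[n] = 1.0 / (n**2 - 1)
        for k in range(1, n // 2):
            v -= 2.0 * np.cos(2 * k * theta[ii]) / (4 * k**2 - 1)
        v -= np.cos(n * theta[ii]) / (n**2 - 1)
    else:
        w[0] = w[n] = 1.0 / n**2
        for k in range(1, (n - 1) // 2 + 1):
            v -= 2.0 * np.cos(2 * k * theta[ii]) / (4 * k**2 - 1)
    w[ii] = 2.0 * v / n
    return x, w

x, w = clencurt(16)
print(w @ np.exp(x))   # integral of exp on [-1, 1]: e - 1/e ≈ 2.3504
```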
Series Methods For Integration
A user-friendly method for computing indefinite integrals of oscillatory functions. For indefinite integrals $Q(f; x, \omega) = \int_{-1}^{x} f(t)\, e^{i\omega t}\, dt$ ($x \in [-1, 1]$), Torii and the first author (Hasegawa and Torii, 1987) developed a quadrature method of Clenshaw-Curtis (C-C) type. An improvement was made and combined with Sidi's mW-transformation by Sidi and the first author (Hasegawa and Sidi, 1996) to compute infinite oscillatory integrals. The improved method per se, however, has not been elucidated in its attractive features, which here we reveal with new results and a detailed algorithm. A comparison with a method of C-C type for definite integrals $Q(f; 1, \omega)$ due to Domínguez et al. (2011) suggests that a smaller number of computations is required in our method. This is achieved by exploiting recurrence and normalization relations and their associated linear system. We show their convergence and stability properties and give a verified truncation error bound for a result computed from the linear system with finite dimension. For $f(z)$ analytic on and inside an ellipse in the complex plane the error of the approximation to $Q(f; x, \omega)$ of the improved method is shown to be bounded uniformly. Numerical examples illustrate the stability and performance of the method.
An improved algorithm for the evaluation of Cauchy principal value integrals of oscillatory functions and its application. A new interpolatory-type quadrature rule is proposed for the numerical evaluation of Cauchy principal value integrals of oscillatory kind $\int_{-1}^{1} \frac{f(x)}{x-\tau}\, e^{i\omega x}\, dx$, where $\tau \in (-1, 1)$. The method is based on an interpolatory procedure at Clenshaw-Curtis points and the singular point, and the fast computation of the modified moments with Cauchy type singularity. Based on this result, a new method is presented for the computation of the oscillatory integrals with logarithmic singularities too. These methods enjoy fast implementation and high accuracy. Convergence rates on $\omega$ are also provided. Numerical examples support the theoretical analyses.
Convergence Properties Of Gaussian Quadrature-Formulas
Rough and ready error estimates in Gaussian integration of analytic functions Two expressions are derived for use in estimating the error in the numerical integration of analytic functions in terms of the maximum absolute value of the function in an appropriate region of regularity. These expressions are then specialized to the case of Gaussian integration rules, and the resulting error estimates are compared with those obtained by the use of tables of error coefficients.
Efficient integration for a class of highly oscillatory integrals. This paper presents some quadrature methods for a class of highly oscillatory integrals whose integrands may have singularities at the two endpoints of the interval. One is a Filon-type method based on the asymptotic expansion. The other is a Clenshaw–Curtis–Filon-type method which is based on a special Hermite interpolation polynomial and can be evaluated efficiently in O(NlogN) operations, where N+1 is the number of Clenshaw–Curtis points in the interval of integration. In addition, we derive the corresponding error bound in inverse powers of the frequency ω for the Clenshaw–Curtis–Filon-type method for the class of highly oscillatory integrals. The efficiency and the validity of these methods are testified by both the numerical experiments and the theoretical results.
Neural networks and approximation theory
Recycling Krylov Subspaces for Sequences of Linear Systems Many problems in science and engineering require the solution of a long sequence of slowly changing linear systems. We propose and analyze two methods that significantly reduce the total number of matrix-vector products required to solve all systems. We consider the general case where both the matrix and right-hand side change, and we make no assumptions regarding the change in the right-hand sides. Furthermore, we consider general nonsingular matrices, and we do not assume that all matrices are pairwise close or that the sequence of matrices converges to a particular matrix. Our methods work well under these general assumptions, and hence form a significant advancement with respect to related work in this area. We can reduce the cost of solving subsequent systems in the sequence by recycling selected subspaces generated for previous systems. We consider two approaches that allow for the continuous improvement of the recycled subspace at low cost. We consider both Hermitian and non-Hermitian problems, and we analyze our algorithms both theoretically and numerically to illustrate the effects of subspace recycling. We also demonstrate the effectiveness of our algorithms for a range of applications from computational mechanics, materials science, and computational physics.
Extensible Lattice Sequences for Quasi-Monte Carlo Quadrature Integration lattices are one of the main types of low discrepancy sets used in quasi-Monte Carlo methods. However, they have the disadvantage of being of fixed size. This article describes the construction of an infinite sequence of points, the first bm of which forms a lattice for any nonnegative integer m. Thus, if the quadrature error using an initial lattice is too large, the lattice can be extended without discarding the original points. Generating vectors for extensible lattices are found by minimizing a loss function based on some measure of discrepancy or nonuniformity of the lattice. The spectral test used for finding pseudorandom number generators is one important example of such a discrepancy. The performance of the extensible lattices proposed here is compared to that of other methods for some practical quadrature problems.
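The basic rank-1 lattice point set behind such constructions is one line of arithmetic: x_i = frac(i z / n) for a generating vector z. A small sketch; the generating vector below is an arbitrary illustrative choice, not an optimized one, and no random shift is applied.

```python
import numpy as np

def rank1_lattice(n, z):
    """Rank-1 lattice points x_i = frac(i * z / n), i = 0..n-1, in [0, 1)^d."""
    i = np.arange(n).reshape(-1, 1)
    return (i * np.asarray(z) / n) % 1.0

# Example in d = 3 with an arbitrary (non-optimized) generating vector.
pts = rank1_lattice(n=64, z=[1, 27, 35])
f = lambda x: np.exp(np.sum(x, axis=1))      # test integrand on [0, 1)^3
print("QMC estimate:", f(pts).mean(), "exact:", (np.e - 1.0) ** 3)
```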
Numerical schemes for dynamically orthogonal equations of stochastic fluid and ocean flows The quantification of uncertainties is critical when systems are nonlinear and have uncertain terms in their governing equations or are constrained by limited knowledge of initial and boundary conditions. Such situations are common in multiscale, intermittent and non-homogeneous fluid and ocean flows. The dynamically orthogonal (DO) field equations provide an adaptive methodology to predict the probability density functions of such flows. The present work derives efficient computational schemes for the DO methodology applied to unsteady stochastic Navier-Stokes and Boussinesq equations, and illustrates and studies the numerical aspects of these schemes. Semi-implicit projection methods are developed for the mean and for the DO modes, and time-marching schemes of first to fourth order are used for the stochastic coefficients. Conservative second-order finite-volumes are employed in physical space with new advection schemes based on total variation diminishing methods. Other results include: (i) the definition of pseudo-stochastic pressures to obtain a number of pressure equations that is linear in the subspace size instead of quadratic; (ii) symmetric advection schemes for the stochastic velocities; (iii) the use of generalized inversion to deal with singular subspace covariances or deterministic modes; and (iv) schemes to maintain orthonormal modes at the numerical level. To verify our implementation and study the properties of our schemes and their variations, a set of stochastic flow benchmarks are defined including asymmetric Dirac and symmetric lock-exchange flows, lid-driven cavity flows, and flows past objects in a confined channel. Different Reynolds number and Grashof number regimes are employed to illustrate robustness. Optimal convergence under both time and space refinements is shown as well as the convergence of the probability density functions with the number of stochastic realizations.
Residual implications on the set of discrete fuzzy numbers In this paper residual implications defined on the set of discrete fuzzy numbers whose support is a set of consecutive natural numbers are studied. A specific construction of these implications is given and some examples are presented showing in particular that such a construction generalizes the case of interval-valued residual implications. The most usual properties for these operations are investigated leading to a residuated lattice structure on the set of discrete fuzzy numbers, that in general is not an MTL-algebra.
Fuzzy homomorphisms of algebras In this paper we consider fuzzy relations compatible with algebraic operations, which are called fuzzy relational morphisms. In particular, we aim our attention to those fuzzy relational morphisms which are uniform fuzzy relations, called uniform fuzzy relational morphisms, and those which are partially uniform F-functions, called fuzzy homomorphisms. Both uniform fuzzy relations and partially uniform F-functions were introduced in a recent paper by us. Uniform fuzzy relational morphisms are especially interesting because they can be conceived as fuzzy congruences which relate elements of two possibly different algebras. We give various characterizations and constructions of uniform fuzzy relational morphisms and fuzzy homomorphisms, we establish certain relationships between them and fuzzy congruences, and we prove homomorphism and isomorphism theorems concerning them. We also point to some applications of uniform fuzzy relational morphisms.
Analysis of Power Supply Noise in the Presence of Process Variations This article presents a comprehensive methodology for analyzing the impact of device and metal process variations on the power supply noise and hence the signal integrity of on-chip power grids. This approach models the power grid using modified nodal-analysis equations, and is based on representing the voltage response as an orthogonal polynomial series in the process variables. The series is truncated, and coefficients of the series are optimally obtained by using the Galerkin method. The authors thus obtain an analytical representation of the voltage response in the process variables that can be directly sampled to obtain the voltage response at different process corners. The authors have verified their analysis exhaustively on several industrial power grids as large as 1.3 million nodes, and considering up to 20 process variables. Results from their method demonstrate a very good match with those from Monte Carlo simulations, while providing significant speedups of the order of 100 to 1,000 times for comparable accuracy.
1.019473
0.012323
0.011829
0.009091
0.006127
0.002664
0.000499
0.00001
0.000002
0.000001
0
0
0
0
Efficient Localization of Discontinuities in Complex Computational Simulations. Surrogate models for computational simulations are input-output approximations that allow computationally intensive analyses, such as uncertainty propagation and inference, to be performed efficiently. When a simulation output does not depend smoothly on its inputs, the error and convergence rate of many approximation methods deteriorate substantially. This paper details a method for efficiently localizing discontinuities in the input parameter domain, so that the model output can be approximated as a piecewise smooth function. The approach comprises an initialization phase, which uses polynomial annihilation to assign function values to different regions and thus seed an automated labeling procedure, followed by a refinement phase that adaptively updates a kernel support vector machine representation of the separating surface via active learning. The overall approach avoids structured grids and exploits any available simplicity in the geometry of the separating surface, thus reducing the number of model evaluations required to localize the discontinuity. The method is illustrated on examples of up to eleven dimensions, including algebraic models and ODE/PDE systems, and demonstrates improved scaling and efficiency over other discontinuity localization approaches.
Segmentation of Stochastic Images using Level Set Propagation with Uncertain Speed We present an approach for the evolution of level sets under an uncertain velocity leading to stochastic level sets. The uncertain velocity can either be a random variable or a random field, i.e. a spatially varying random quantity, and it may result from measurement errors, noise, unknown material parameters or other sources of uncertainty. The use of stochastic level sets for the segmentation of images with uncertain gray values leads to stochastic domains, because the zero level set is not a single closed curve anymore. Instead, we have a band of possibly infinite thickness which contains all possible locations of the zero level set under the uncertainty. Thus, the approach allows for a probabilistic description of the segmented volume and the shape of the object. Due to numerical reasons, we use a parabolic approximation of the stochastic level set equation, which is a stochastic partial differential equation, and discretize the equation using the polynomial chaos and a stochastic finite difference scheme. For the verification of the intrusive discretization in the polynomial chaos we performed Monte Carlo and stochastic collocation simulations. We demonstrate the power of the stochastic level set approach by showing examples ranging from artificial tests that demonstrate individual aspects to a segmentation of objects in medical images.
Discontinuity detection in multivariate space for stochastic simulations Edge detection has traditionally been associated with detecting physical space jump discontinuities in one dimension, e.g. seismic signals, and two dimensions, e.g. digital images. Hence most of the research on edge detection algorithms is restricted to these contexts. High dimension edge detection can be of significant importance, however. For instance, stochastic variants of classical differential equations not only have variables in space/time dimensions, but additional dimensions are often introduced to the problem by the nature of the random inputs. The stochastic solutions to such problems sometimes contain discontinuities in the corresponding random space and a prior knowledge of jump locations can be very helpful in increasing the accuracy of the final solution. Traditional edge detection methods typically require uniform grid point distribution. They also often involve the computation of gradients and/or Laplacians, which can become very complicated to compute as the number of dimensions increases. The polynomial annihilation edge detection method, on the other hand, is more flexible in terms of its geometric specifications and is furthermore relatively easy to apply. This paper discusses the numerical implementation of the polynomial annihilation edge detection method to high dimensional functions that arise when solving stochastic partial differential equations.
Minimal multi-element stochastic collocation for uncertainty quantification of discontinuous functions We propose a multi-element stochastic collocation method that can be applied in high-dimensional parameter space for functions with discontinuities lying along manifolds of general geometries. The key feature of the method is that the parameter space is decomposed into multiple elements defined by the discontinuities, so only the minimal number of elements is used. On each of the resulting elements the function is smooth and can be approximated using high-order methods with fast convergence properties. The decomposition strategy is in direct contrast to traditional multi-element approaches, which define the sub-domains by repeated splitting of the axes in the parameter space. Such methods are more prone to the curse of dimensionality because of the fast growth of the number of elements caused by the axis-based splitting. The present method is a two-step approach. First, a discontinuity detector is used to partition the parameter space into disjoint elements in each of which the function is smooth. The detector uses an efficient combination of the high-order polynomial annihilation technique with adaptive sparse grids, which allows general discontinuities to be resolved with a smaller number of points when the discontinuity manifold is low-dimensional. After partitioning, an adaptive technique based on the least orthogonal interpolant is used to construct a generalized Polynomial Chaos surrogate on each element. The adaptive technique reuses all information from the partitioning and is variance-suppressing. We present numerous numerical examples that illustrate the accuracy, efficiency, and generality of the method. When compared against standard locally-adaptive sparse grid methods, the present method uses far fewer collocation samples and is more accurate.
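A one-dimensional illustration of the multi-element payoff follows. The split location is assumed known here (rather than produced by the detector described above), and ordinary least-squares polynomials stand in for the least orthogonal interpolant; everything else is synthetic.

```python
# Sketch: one smooth surrogate per element beats a single global fit when the
# quantity of interest is discontinuous (split location assumed, not detected).
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 1.0, 80))
f = np.where(x < 0.4, np.cos(3 * x), np.cos(3 * x) + 1.5)   # discontinuous QoI

split = 0.4                                    # assumed detector output
left = x < split
fit_global = Polynomial.fit(x, f, deg=8)
fit_left = Polynomial.fit(x[left], f[left], deg=8)
fit_right = Polynomial.fit(x[~left], f[~left], deg=8)

xt = np.linspace(0.0, 1.0, 1001)
ft = np.where(xt < split, np.cos(3 * xt), np.cos(3 * xt) + 1.5)
piecewise = np.where(xt < split, fit_left(xt), fit_right(xt))
print("global surrogate max error     :", np.max(np.abs(fit_global(xt) - ft)))
print("per-element surrogate max error:", np.max(np.abs(piecewise - ft)))
```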
Beyond Wiener-Askey Expansions: Handling Arbitrary PDFs In this paper we present a Multi-Element generalized Polynomial Chaos (ME-gPC) method to deal with stochastic inputs with arbitrary probability measures. Based on the decomposition of the random space of the stochastic inputs, we construct numerically a set of orthogonal polynomials with respect to a conditional probability density function (PDF) in each element and subsequently implement generalized Polynomial Chaos (gPC) locally. Numerical examples show that ME-gPC exhibits both p- and h-convergence for arbitrary probability measures.
Simplex Stochastic Collocation with Random Sampling and Extrapolation for Nonhypercube Probability Spaces Stochastic collocation (SC) methods for uncertainty quantification (UQ) in computational problems are usually limited to hypercube probability spaces due to the structured grid of their quadrature rules. Nonhypercube probability spaces with an irregular shape of the parameter domain do, however, occur in practical engineering problems. For example, production tolerances and other geometrical uncertainties can lead to correlated random inputs on nonhypercube domains. In this paper, a simplex stochastic collocation (SSC) method is introduced, as a multielement UQ method based on simplex elements, that can efficiently discretize nonhypercube probability spaces. It combines the Delaunay triangulation of randomized sampling at adaptive element refinements with polynomial extrapolation to the boundaries of the probability domain. The robustness of the extrapolation is quantified by the definition of the essentially extremum diminishing (EED) robustness principle. Numerical examples show that the resulting SSC-EED method achieves superlinear convergence and a linear increase of the initial number of samples with increasing dimensionality. These properties are demonstrated for uniform and nonuniform distributions, and correlated and uncorrelated parameters in problems with 15 dimensions and discontinuous responses.
Multivariate quadrature on adaptive sparse grids In this paper, we study the potential of adaptive sparse grids for multivariate numerical quadrature in the moderate or high dimensional case, i.e. for a number of dimensions beyond three and up to several hundreds. There, conventional methods typically suffer from the curse of dimension or are unsatisfactory with respect to accuracy. Our sparse grid approach, based upon a direct higher order discretization on the sparse grid, overcomes this dilemma to some extent, and introduces additional flexibility with respect to both the order of the 1D quadrature rule applied (in the sense of Smolyak's tensor product decomposition) and the placement of grid points. The presented algorithm is applied to some test problems and compared with other existing methods.
A proposal for improving the accuracy of linguistic modeling We propose accurate linguistic modeling, a methodology to design linguistic models that are accurate to a high degree and may be suitably interpreted. This approach is based on two main assumptions related to the interpolative reasoning developed by fuzzy rule-based systems: a small change in the structure of the linguistic model that allows each linguistic rule to have two associated consequents, and a different way to obtain the knowledge base, based on generating a preliminary fuzzy rule set composed of a large number of rules and then selecting the subset of them that cooperates best. Moreover, we introduce two variants of an automatic design method for these kinds of linguistic models based on two well-known inductive fuzzy rule generation processes and a genetic process for selecting rules. The accuracy of the proposed methods is compared with other linguistic modeling techniques with different characteristics when solving three different applications.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
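As a concrete instance of the splitting reviewed above, the following is a minimal ADMM iteration for the lasso on synthetic data; the problem sizes, penalty, and step parameter are arbitrary, and this is a NumPy sketch rather than a distributed implementation.

```python
# ADMM for the lasso: minimize (1/2)||Ax - b||^2 + lam*||z||_1 subject to x = z,
# using the standard x-, z-, and scaled dual updates.
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    n = A.shape[1]
    z, u = np.zeros(n), np.zeros(n)
    # Cache the Cholesky factor used by every x-update.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))   # x-update
        z = soft_threshold(x + u, lam / rho)                # z-update (prox of L1)
        u = u + x - z                                       # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = admm_lasso(A, b, lam=0.1)
print("nonzeros recovered:", np.count_nonzero(np.abs(x_hat) > 1e-3))
```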
Statistical ordering of correlated timing quantities and its application for path ranking Correct ordering of timing quantities is essential for both timing analysis and design optimization in the presence of process variation, because timing quantities are no longer a deterministic value, but a distribution. This paper proposes a novel metric, called tiered criticalities, which guarantees to provide a unique order for a set of correlated timing quantities while properly taking into account full process space coverage. Efficient algorithms are developed to compute this metric, and its effectiveness on path ranking for at-speed testing is also demonstrated.
Bayesian compressive sensing for cluster structured sparse signals In the traditional framework of compressive sensing (CS), only a sparse prior on the property of signals in the time or frequency domain is adopted to guarantee exact inverse recovery. Beyond the sparse prior, structures on the sparse pattern of the signal have also been used as an additional prior, called model-based compressive sensing, such as clustered structure and tree structure on wavelet coefficients. In this paper, cluster structured sparse signals are investigated. Under the framework of Bayesian compressive sensing, a hierarchical Bayesian model is employed to model both the sparse prior and the cluster prior, and Markov Chain Monte Carlo (MCMC) sampling is implemented for the inference. Unlike state-of-the-art algorithms that also take the cluster prior into account, the proposed algorithm solves the inverse problem automatically, without requiring prior information on the number of clusters or the size of each cluster. The experimental results show that the proposed algorithm outperforms many state-of-the-art algorithms.
COSIMAB2B: Sales Automation for E-Procurement We present a fully automated electronic sales agent for e-procurement portals. The key technologies for this breakthrough are based on preferences modeled as strict partial orders, enabling a deep personalization of the B2B sales process. The interplay of several novel middleware components makes it possible to automate skills that so far could be executed only by a human vendor. As a personalized search engine for XML-based e-catalogs we use Preference XPath; the Preference Presenter implements a sales-psychology-based presentation of search results, supporting various human sales strategies; the Preference Repository provides the management of situated long-term preferences; the flexible Personalized Price Offer and the multi-objective Preference Bargainer provide personalized price fixing and the opportunity to bargain about the price of an entire product bundle, applying up-, cross-, and down-selling techniques. Our prototype COSIMA^B2B, supported by industrial partners, has been successfully demonstrated at a large computer fair.
A fuzzy multi-criteria group decision making framework for evaluating health-care waste disposal alternatives Nowadays, as in all other organizations, the amount of waste generated in the health-care institutions is rising due to their extent of service. Medical waste management is a common problem of developing countries including Turkey, which are becoming increasingly conscious that health-care wastes require special treatment. Accordingly, one of the most important problems encountered in Istanbul, the most crowded metropolis of Turkey, is the disposal of health-care waste (HCW) from health-care institutions. Evaluating HCW disposal alternatives, which considers the need to trade-off multiple conflicting criteria with the involvement of a group of experts, is a highly important multi-criteria group decision making problem. The inherent imprecision and vagueness in criteria values concerning HCW disposal alternatives justify the use of fuzzy set theory. This paper presents a fuzzy multi-criteria group decision making framework based on the principles of fuzzy measure and fuzzy integral for evaluating HCW treatment alternatives for Istanbul. In group decision making problems, aggregation of expert opinions is essential for properly conducting the evaluation process. In this study, the ordered weighted averaging (OWA) operator is used to aggregate decision makers' opinions. Economic, technical, environmental and social criteria and their related sub-criteria are employed to assess HCW treatment alternatives, namely ''incineration'', ''steam sterilization'', ''microwave'', and ''landfill''. A comparative analysis is presented using another classical operator to aggregate decision makers' preferences.
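The OWA aggregation step mentioned above is simple to state in code. The sketch below is generic and uses made-up expert scores and rank weights, not values from the study.

```python
# Ordered weighted averaging (OWA): scores are sorted in descending order
# before being weighted, so the weights attach to ranks, not to experts.
import numpy as np

def owa(scores, weights):
    scores = np.sort(np.asarray(scores, float))[::-1]   # descending order
    weights = np.asarray(weights, float)
    assert np.isclose(weights.sum(), 1.0)
    return float(scores @ weights)

# Four experts rate one alternative on one criterion (illustrative values);
# the weight vector mildly emphasizes the middle ranks.
expert_scores = [0.7, 0.9, 0.6, 0.8]
rank_weights = [0.2, 0.3, 0.3, 0.2]
print("aggregated score:", owa(expert_scores, rank_weights))
```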
The laws of large numbers for fuzzy random variables This paper develops weak and strong laws of large numbers for fuzzy random variables by introducing convergence in probability and convergence with probability one for fuzzy random variables. We first consider the limit properties of fuzzy numbers by invoking the Hausdorff metric, and then extend them to convergence in probability and convergence with probability one for fuzzy random variables. We provide the notions of weak and strong convergence in probability and weak and strong convergence with probability one for fuzzy random variables. Finally, we establish the weak and strong laws of large numbers for fuzzy random variables in both the weak and the strong sense. (C) 2000 Elsevier Science B.V. All rights reserved.
1.11
0.1
0.024444
0.02
0.006
0.002
0.000026
0
0
0
0
0
0
0
Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this preexisting concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to minimizing the nuclear norm and illustrate our results with numerical examples.
Guaranteed clustering and biclustering via semidefinite programming Identifying clusters of similar objects in data plays a significant role in a wide range of applications. As a model problem for clustering, we consider the densest k-disjoint-clique problem, whose goal is to identify the collection of k disjoint cliques of a given weighted complete graph maximizing the sum of the densities of the complete subgraphs induced by these cliques. In this paper, we establish conditions ensuring exact recovery of the densest k cliques of a given graph from the optimal solution of a particular semidefinite program. In particular, the semidefinite relaxation is exact for input graphs corresponding to data consisting of k large, distinct clusters and a smaller number of outliers. This approach also yields a semidefinite relaxation with similar recovery guarantees for the biclustering problem. Given a set of objects and a set of features exhibited by these objects, biclustering seeks to simultaneously group the objects and features according to their expression levels. This problem may be posed as that of partitioning the nodes of a weighted bipartite complete graph such that the sum of the densities of the resulting bipartite complete subgraphs is maximized. As in our analysis of the densest k-disjoint-clique problem, we show that the correct partition of the objects and features can be recovered from the optimal solution of a semidefinite program in the case that the given data consists of several disjoint sets of objects exhibiting similar features. Empirical evidence from numerical experiments supporting these theoretical guarantees is also provided.
Multireference alignment using semidefinite programming The multireference alignment problem consists of estimating a signal from multiple noisy shifted observations. Inspired by existing Unique-Games approximation algorithms, we provide a semidefinite program (SDP) based relaxation which approximates the maximum likelihood estimator (MLE) for the multireference alignment problem. Although we show this MLE problem is Unique-Games hard to approximate within any constant, we observe that our poly-time approximation algorithm for this problem appears to perform quite well in typical instances, outperforming existing methods. In an attempt to explain this behavior we provide stability guarantees for our SDP under a random noise model on the observations. This case is more challenging to analyze than traditional semi-random instances of Unique-Games: the noise model is on vertices of a graph and translates into dependent noise on the edges. Interestingly, we show that if certain positivity constraints in the relaxation are dropped, its solution becomes equivalent to performing phase correlation, a popular method used for pairwise alignment in imaging applications. Finally, we describe how symmetry reduction techniques from matrix representation theory can greatly decrease the computational cost of the SDP considered.
Nuclear norm minimization for the planted clique and biclique problems We consider the problems of finding a maximum clique in a graph and finding a maximum-edge biclique in a bipartite graph. Both problems are NP-hard. We write both problems as matrix-rank minimization and then relax them using the nuclear norm. This technique, which may be regarded as a generalization of compressive sensing, has recently been shown to be an effective way to solve rank optimization problems. In the special case that the input graph has a planted clique or biclique (i.e., a single large clique or biclique plus diversionary edges), our algorithm successfully provides an exact solution to the original instance. For each problem, we provide two analyses of when our algorithm succeeds. In the first analysis, the diversionary edges are placed by an adversary. In the second, they are placed at random. In the case of random edges for the planted clique problem, we obtain the same bound as Alon, Krivelevich and Sudakov as well as Feige and Krauthgamer, but we use different techniques.
New Null Space Results and Recovery Thresholds for Matrix Rank Minimization Nuclear norm minimization (NNM) has recently gained significant attention for its use in rank minimization problems. Similar to compressed sensing, recovery thresholds for NNM have been studied using null space characterizations in prior work by Recht, Xu, and Hassibi, among others. However, simulations show that these thresholds are far from optimal, especially in the low-rank region. In this paper we apply the recent compressed sensing analysis of Stojnic to the null space conditions of NNM. The resulting thresholds are significantly better, and in particular our weak threshold appears to match simulation results. Further, our curves suggest that for any rank growing linearly with the matrix size n, we need only three times oversampling (the model complexity) for weak recovery. As in earlier work, we analyze the conditions for weak, sectional and strong thresholds. Additionally, a separate analysis is given for the special case of positive semidefinite matrices. We conclude by discussing simulation results and future research directions.
Exact Matrix Completion via Convex Optimization We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys $$m\ge C\,n^{1.2}r\log n$$ for some positive numerical constant C, then with very high probability, most n×n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.
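For intuition about what the convex program above does in practice, the following soft-impute-style sketch completes a toy low-rank matrix from roughly 40% of its entries by iteratively filling missing entries and soft-thresholding the singular values. The threshold, sizes, and iteration count are arbitrary choices for illustration; this is a related iterative heuristic, not the paper's algorithm or analysis.

```python
# Soft-impute-style sketch: fill missing entries, then shrink singular values,
# approximating the nuclear-norm completion discussed above.
import numpy as np

rng = np.random.default_rng(3)
n, r = 60, 3
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # rank-3 target
mask = rng.random((n, n)) < 0.4                                  # observed set

X, tau = np.zeros((n, n)), 2.0
for _ in range(500):
    filled = np.where(mask, M, X)            # keep observed entries, impute rest
    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    X = (U * np.maximum(s - tau, 0.0)) @ Vt  # singular value soft-thresholding

err = np.linalg.norm((X - M)[~mask]) / np.linalg.norm(M[~mask])
print("relative error on unobserved entries:", round(err, 3))
```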
Tensor rank is NP-complete We prove that computing the rank of a three-dimensional tensor over any finite field is NP-complete. Over the rational numbers the problem is NP-hard.
A multiscale framework for Compressive Sensing of video Compressive Sensing (CS) allows the highly efficient acquisition of many signals that could be difficult to capture or encode using conventional methods. From a relatively small number of random measurements, a high-dimensional signal can be recovered if it has a sparse or near-sparse representation in a basis known to the decoder. In this paper, we consider the application of CS to video signals in order to lessen the sensing and compression burdens in single- and multi-camera imaging systems. In standard video compression, motion compensation and estimation techniques have led to improved sparse representations that are more easily compressible; we adapt these techniques for the problem of CS recovery. Using a coarse-to-fine reconstruction algorithm, we alternate between the tasks of motion estimation and motion-compensated wavelet-domain signal recovery. We demonstrate that our algorithm allows the recovery of video sequences from fewer measurements than either frame-by-frame or inter-frame difference recovery methods.
Compact model order reduction of weakly nonlinear systems by associated transform We advance a recently proposed approach, called the associated transform, for computing slim projection matrices serving high-order Volterra transfer functions in the context of weakly nonlinear model order reduction (NMOR). The innovation is to carry out an association of multivariate Laplace variables in high-order multiple-input multiple-output transfer functions to generate univariate single-s transfer functions. In contrast to conventional projection-based NMOR, which finds projection subspaces about every s_i in multivariate transfer functions, only that about a single s is required in the proposed approach. This leads to much more compact reduced-order models without compromising accuracy. Specifically, the proposed NMOR procedure first converts the original set of Volterra transfer functions into a new set of linear transfer functions, which then allows direct utilization of linear MOR techniques for modeling weakly nonlinear systems with either single-tone or multi-tone inputs. An adaptive algorithm is also given to govern the selection of appropriate basis orders in different Volterra transfer functions. Numerical examples then verify the effectiveness of the proposed scheme. Copyright © 2015 John Wiley & Sons, Ltd.
Optimal design of a CMOS op-amp via geometric programming We describe a new method for determining component values and transistor dimensions for CMOS operational amplifiers (op-amps). We observe that a wide variety of design objectives and constraints have a special form, i.e., they are posynomial functions of the design variables. As a result, the amplifier design problem can be expressed as a special form of optimization problem called geometric programming, for which very efficient global optimization methods have been developed. As a consequence we can efficiently determine globally optimal amplifier designs or globally optimal tradeoffs among competing performance measures such as power, open-loop gain, and bandwidth. Our method, therefore, yields completely automated sizing of (globally) optimal CMOS amplifiers, directly from specifications. In this paper, we apply this method to a specific widely used operational amplifier architecture, showing in detail how to formulate the design problem as a geometric program. We compute globally optimal tradeoff curves relating performance measures such as power dissipation, unity-gain bandwidth, and open-loop gain. We show how the method can be used to size robust designs, i.e., designs guaranteed to meet the specifications for a variety of process conditions and parameters.
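A minimal geometric program in CVXPY's log-log (DGP) mode illustrates the posynomial structure this method exploits. The "area" and "delay" expressions below are made-up stand-ins, not the paper's op-amp equations, and the sketch assumes the cvxpy package is installed.

```python
# Tiny geometric program: minimize a monomial objective subject to a
# posynomial constraint, solved in CVXPY's geometric-programming mode.
import cvxpy as cp

w1 = cp.Variable(pos=True, name="w1")   # illustrative "transistor width" 1
w2 = cp.Variable(pos=True, name="w2")   # illustrative "transistor width" 2

area = w1 * w2                                          # monomial objective
delay = 3.0 * w1**-1 + 2.0 * (w1 * w2)**-1 + 0.5 * w2   # posynomial constraint

prob = cp.Problem(cp.Minimize(area), [delay <= 2.0, w1 <= 20, w2 <= 20])
prob.solve(gp=True)                                     # log-log transform + convex solve
print("optimal area:", round(prob.value, 3),
      "w1 =", round(float(w1.value), 3), "w2 =", round(float(w2.value), 3))
```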
Stochastic Behavioral Modeling and Analysis for Analog/Mixed-Signal Circuits It has become increasingly challenging to model the stochastic behavior of analog/mixed-signal (AMS) circuits under large-scale process variations. In this paper, a novel moment-matching-based method has been proposed to accurately extract the probabilistic behavioral distributions of AMS circuits. This method first utilizes Latin hypercube sampling coupling with a correlation control technique to generate a few samples (e.g., sample size is linear with number of variable parameters) and further analytically evaluate the high-order moments of the circuit behavior with high accuracy. In this way, the arbitrary probabilistic distributions of the circuit behavior can be extracted using moment-matching method. More importantly, the proposed method has been successfully applied to high-dimensional problems with linear complexity. The experiments demonstrate that the proposed method can provide up to 1666X speedup over crude Monte Carlo method for the same accuracy.
Speaker Verification Using Adapted Gaussian Mixture Models Reynolds, Douglas A., Quatieri, Thomas F., and Dunn, Robert B., Speaker Verification Using Adapted Gaussian Mixture Models, Digital Signal Processing 10 (2000), 19-41. In this paper we describe the major elements of MIT Lincoln Laboratory's Gaussian mixture model (GMM)-based speaker verification system used successfully in several NIST Speaker Recognition Evaluations (SREs). The system is built around the likelihood ratio test for verification, using simple but effective GMMs for likelihood functions, a universal background model (UBM) for alternative speaker representation, and a form of Bayesian adaptation to derive speaker models from the UBM. The development and use of a handset detector and score normalization to greatly improve verification performance is also described and discussed. Finally, representative performance benchmarks and system behavior experiments on NIST SRE corpora are presented.
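A compact sketch of the GMM-UBM recipe described above (UBM training, MAP adaptation of the means with a relevance factor, and log-likelihood-ratio scoring) using scikit-learn and random stand-in features. The relevance factor and model sizes are illustrative, and setting the adapted model's parameters by hand is a shortcut for demonstration rather than production practice.

```python
# GMM-UBM sketch: train a universal background model, MAP-adapt its means to
# a target speaker, and score a test segment with the log-likelihood ratio.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
background = rng.standard_normal((2000, 12))            # pooled "features"
speaker_train = rng.standard_normal((300, 12)) + 0.5    # target speaker data
test = rng.standard_normal((200, 12)) + 0.5             # claimed-speaker test

ubm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
ubm.fit(background)

# MAP adaptation of the means only, with relevance factor r.
r = 16.0
resp = ubm.predict_proba(speaker_train)                 # posteriors, shape (N, K)
n_k = resp.sum(axis=0)                                  # soft counts per mixture
ex_k = resp.T @ speaker_train / np.maximum(n_k[:, None], 1e-10)
alpha = (n_k / (n_k + r))[:, None]

speaker = GaussianMixture(n_components=8, covariance_type="diag")
speaker.weights_ = ubm.weights_                         # reuse UBM weights
speaker.covariances_ = ubm.covariances_                 # and covariances
speaker.precisions_cholesky_ = ubm.precisions_cholesky_
speaker.means_ = alpha * ex_k + (1.0 - alpha) * ubm.means_

llr = speaker.score(test) - ubm.score(test)             # average log-LR per frame
print("log-likelihood ratio:", round(llr, 3))
```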
Explicit construction of a small epsilon-net for linear threshold functions We give explicit constructions of epsilon-nets for linear threshold functions on the binary cube and on the unit sphere. The size of the constructed nets is polynomial in the dimension n and in 1/epsilon. To the best of our knowledge no such constructions were previously known. Our results match, up to the exponent of the polynomial, the bounds that are achieved by probabilistic arguments. As a corollary we also construct subsets of the binary cube that have size polynomial in n and covering radius of n/2 - c*sqrt(n log n), for any constant c. This improves upon the well known construction of dual BCH codes that only guarantee a covering radius of n/2 - c*sqrt(n).
Interval-Based Models for Decision Problems Uncertainty in decision problems has been handled by probabilities with respect to an unknown state of nature, such as market demand with several scenarios. Standard decision theory cannot deal with non-stochastic uncertainty, indeterminacy, or ignorance of the given phenomenon. Moreover, probabilities require many data collected under the same conditions, and since economic situations now change rapidly, it is hard to collect many data under the same conditions. Therefore, instead of the conventional approaches, interval-based models for decision problems are explained as dual models in this paper. First, interval regression models are described as a kind of decision problem. Then, using interval regression analysis, interval weights in AHP (Analytic Hierarchy Process) can be obtained that reflect the intuitive judgments given by an estimator. This approach is called interval AHP, where the normality condition of interval weights is used. This normality condition can be regarded as interval probabilities. Thus, finally, some basic definitions of interval probability in decision problems are given in this paper.
1.006582
0.007675
0.007675
0.006684
0.004878
0.00337
0.001718
0.000479
0.000086
0.000018
0.000001
0
0
0
Multimedia-based interactive advising technology for online consumer decision support Multimedia technologies (such as Flash and QuickTime) have been widely used in online product presentation and promotion to portray products in a dynamic way. The continuous visual stimuli and associated sound effects provide vivid and interesting product presentations; hence, they engage online customers in examining products. Meanwhile, recent research has indicated that online shoppers want detailed and relevant product information and explanations [2]. A promising approach is to embed rich product information and explanations into multimedia-enhanced product demonstrations. This approach is called Multimedia-based Product Annotation (MPA), a product presentation in which customers can retrieve embedded product information in a multimedia context.
The Scientific Community Metaphor Scientific communities have proven to be extremely successful at solving problems. They are inherently parallel systems and their macroscopic nature makes them amenable to careful study. In this paper the character of scientific research is examined drawing on sources in the philosophy and history of science. We maintain that the success of scientific research depends critically on its concurrency and pluralism. A variant of the language Ether is developed that embodies notions of concurrency necessary to emulate some of the problem solving behavior of scientific communities. Capabilities of scientific communities are discussed in parallel with simplified models of these capabilities in this language.
On Agent-Mediated Electronic Commerce This paper surveys and analyzes the state of the art of agent-mediated electronic commerce (e-commerce), concentrating particularly on the business-to-consumer (B2C) and business-to-business (B2B) aspects. From the consumer buying behavior perspective, agents are being used in the following activities: need identification, product brokering, buyer coalition formation, merchant brokering, and negotiation. The roles of agents in B2B e-commerce are discussed through the business-to-business transaction model that identifies agents as being employed in partnership formation, brokering, and negotiation. Having identified the roles for agents in B2C and B2B e-commerce, some of the key underpinning technologies of this vision are highlighted. Finally, we conclude by discussing the future directions and potential impediments to the wide-scale adoption of agent-mediated e-commerce.
Janus - A Paradigm For Active Decision Support Active decision support is concerned with developing advanced forms of decision support where the support tools are capable of actively participating in the decision making process, and decisions are made by fruitful collaboration between the human and the machine. It is currently an active and leading area of research within the field of decision support systems. The objective of this paper is to share the details of our research in this area. We present our overall research strategy for exploring advanced forms of decision support and discuss in detail our research prototype called JANUS that implements our ideas. We establish the contributions of our work and discuss our experiences and plans for future.
Implications of buyer decision theory for design of e-commerce websites In the rush to open their website, e-commerce sites too often fail to support buyer decision making and search, resulting in a loss of sale and the customer's repeat business. This paper reviews why this occurs and the failure of many B2C and B2B website executives to understand that appropriate decision support and search technology can't be fully bought off-the-shelf. Our contention is that significant investment and effort is required at any given website in order to create the decision support and search agents needed to properly support buyer decision making. We provide a framework to guide such effort (derived from buyer behavior choice theory); review the open problems that e-catalog sites pose to the framework and to existing search engine technology; discuss underlying design principles and guidelines; validate the framework and guidelines with a case study; and discuss lessons learned and steps needed to better support buyer decision behavior in the future. Future needs are also pinpointed.
An algorithm for pronominal anaphora resolution This paper presents an algorithm for identifying the noun phrase antecedents of third person pronouns and lexical anaphors (reflexives and reciprocals). The algorithm applies to the syntactic representations generated by McCord's Slot Grammar parser and relies on salience measures derived from syntactic structure and a simple dynamic model of attentional state. Like the parser, the algorithm is implemented in Prolog. The authors have tested it extensively on computer manual texts and conducted a blind test on manual text containing 360 pronoun occurrences. The algorithm successfully identifies the antecedent of the pronoun for 86% of these pronoun occurrences. The relative contributions of the algorithm's components to its overall success rate in this blind test are examined. Experiments were conducted with an enhancement of the algorithm that contributes statistically modelled information concerning semantic and real-world relations to the algorithm's decision procedure. Interestingly, this enhancement only marginally improves the algorithm's performance (by 2%). The algorithm is compared with other approaches to anaphora resolution that have been proposed in the literature. In particular, the search procedure of Hobbs' algorithm was implemented in the Slot Grammar framework and applied to the sentences in the blind test set. The authors' algorithm achieves a higher rate of success (4%) than Hobbs' algorithm. The relation of the algorithm to the centering approach is discussed, as well as to models of anaphora resolution that invoke a variety of informational factors in ranking antecedent candidates.
The concept of a linguistic variable and its application to approximate reasoning-III By a linguistic variable we mean a variable whose values are words or sentences in a natural or artificial language. For example, Age is a linguistic variable if its values are linguistic rather than numerical, i.e., young, not young, very young, quite young, old, not very old and not very young, etc., rather than 20, 21, 22, 23. In more specific terms, a linguistic variable is characterized by a quintuple (X, T(X), U, G, M) in which X is the name of the variable; T(X) is the term-set of X, that is, the collection of its linguistic values; U is a universe of discourse; G is a syntactic rule which generates the terms in T(X); and M is a semantic rule which associates with each linguistic value X its meaning, M(X), where M(X) denotes a fuzzy subset of U. The meaning of a linguistic value X is characterized by a compatibility function, c: U -> [0, 1], which associates with each u in U its compatibility with X. Thus, the compatibility of age 27 with young might be 0.7, while that of 35 might be 0.2. The function of the semantic rule is to relate the compatibilities of the so-called primary terms in a composite linguistic value, e.g., young and old in not very young and not very old, to the compatibility of the composite value. To this end, the hedges such as very, quite, extremely, etc., as well as the connectives and and or are treated as nonlinear operators which modify the meaning of their operands in a specified fashion.
Fuzzy logic systems for engineering: a tutorial A fuzzy logic system (FLS) is unique in that it is able to simultaneously handle numerical data and linguistic knowledge. It is a nonlinear mapping of an input data (feature) vector into a scalar output, i.e., it maps numbers into numbers. Fuzzy set theory and fuzzy logic establish the specifics of the nonlinear mapping. This tutorial paper provides a guided tour through those aspects of fuzzy sets and fuzzy logic that are necessary to synthesize an FLS. It does this by starting with crisp set theory and dual logic and demonstrating how both can be extended to their fuzzy counterparts. Because engineering systems are, for the most part, causal, we impose causality as a constraint on the development of the FLS. After synthesizing a FLS, we demonstrate that it can be expressed mathematically as a linear combination of fuzzy basis functions, and is a nonlinear universal function approximator, a property that it shares with feedforward neural networks. The fuzzy basis function expansion is very powerful because its basis functions can be derived from either numerical data or linguistic knowledge, both of which can be cast into the forms of IF-THEN rules
Convergence Rates of Best N-term Galerkin Approximations for a Class of Elliptic sPDEs Deterministic Galerkin approximations of a class of second order elliptic PDEs with random coefficients on a bounded domain D ⊂ R^d are introduced and their convergence rates are estimated. The approximations are based on expansions of the random diffusion coefficients in L^2(D)-orthogonal bases, and on viewing the coefficients of these expansions as random parameters y = y(ω) = (y_i(ω)). This yields an equivalent parametric deterministic PDE whose solution u(x,y) is a function of both the space variable x ∈ D and the in general countably many parameters y. We establish new regularity theorems describing the smoothness properties of the solution u as a map from y ∈ U = (-1,1)^∞ to V = H^1_0(D). These results lead to analytic estimates on the V norms of the coefficients (which are functions of x) in a so-called "generalized polynomial chaos" (gpc) expansion of u. Convergence estimates of approximations of u by best N-term truncated V-valued polynomials in the variable y ∈ U are established. These estimates are of the form N^{-r}, where the rate of convergence r depends only on the decay of the random input expansion. It is shown that r exceeds the benchmark rate 1/2 afforded by Monte Carlo simulations with N "samples" (i.e., deterministic solves) under mild smoothness conditions on the random diffusion coefficients. A class of fully discrete approximations is obtained by Galerkin approximation from a hierarchic family {V_l}_{l=0}^∞ ⊂ V of finite element spaces in D of the coefficients in the N-term truncated gpc expansions of u(x,y). In contrast to previous works, the level l of spatial resolution is adapted to the gpc coefficient. New regularity theorems describing the smoothness properties of the solution u as a map from y ∈ U = (-1,1)^∞ to a smoothness space W ⊂ V are established, leading to analytic estimates on the W norms of the gpc coefficients and on their space discretization error. The space W coincides with H^2(D) ∩ H^1_0(D) in the case where D is a smooth or convex domain. Our analysis shows that in realistic settings a convergence rate N_dof^{-s} in terms of the total number of degrees of freedom N_dof can be obtained. Here the rate s is determined by both the best N-term approximation rate r and the approximation order of the space discretization in D.
Coding Algorithms for 3DTV—A Survey Research efforts on 3DTV technology have been strengthened worldwide recently, covering the whole media processing chain from capture to display. Different 3DTV systems rely on different 3D scene representations that integrate various types of data. Efficient coding of these data is crucial for the success of 3DTV. Compression of pixel-type data including stereo video, multiview video, and associated depth or disparity maps extends available principles of classical video coding. Powerful algorithms and open international standards for multiview video coding and coding of video plus depth data are available and under development, which will provide the basis for introduction of various 3DTV systems and services in the near future. Compression of 3D mesh models has also reached a high level of maturity. For static geometry, a variety of powerful algorithms are available to efficiently compress vertices and connectivity. Compression of dynamic 3D geometry is currently a more active field of research. Temporal prediction is an important mechanism to remove redundancy from animated 3D mesh sequences. Error resilience is important for transmission of data over error prone channels, and multiple description coding (MDC) is a suitable way to protect data. MDC of still images and 2D video has already been widely studied, whereas multiview video and 3D meshes have been addressed only recently. Intellectual property protection of 3D data by watermarking is a pioneering research area as well. The 3D watermarking methods in the literature are classified into three groups, considering the dimensions of the main components of scene representations and the resulting components after applying the algorithm. In general, 3DTV coding technology is maturing. Systems and services may enter the market in the near future. However, the research area is relatively young compared to coding of other types of media. Therefore, there is still a lot of room for improvement and new development of algorithms.
Asymptotic Analysis of MAP Estimation via the Replica Method and Applications to Compressed Sensing The replica method is a nonrigorous but well-known technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method, under the assumption of replica symmetry, to study estimators that are maximum a posteriori (MAP) under a postulated prior distribution. It is shown that with random linear measurements and Gaussian noise, the replica-symmetric prediction of the asymptotic behavior of the postulated MAP estimate of an $n$-dimensional vector “decouples” as $n$ scalar postulated MAP estimators. The result is based on applying a hardening argument to the replica analysis of postulated posterior mean estimators of Tanaka and of Guo and Verdú. The replica-symmetric postulated MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, least absolute shrinkage and selection operator (LASSO), linear estimation with thresholding, and zero norm-regularized estimation. In the case of LASSO estimation, the scalar estimator reduces to a soft-thresholding operator, and for zero norm-regularized estimation, it reduces to a hard threshold. Among other benefits, the replica method provides a computationally tractable method for precisely predicting various performance metrics including mean-squared error and sparsity pattern recovery probability.
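As a reminder of what the decoupled scalar estimator looks like in the LASSO case discussed above, the scalar MAP map is simply soft thresholding. The sketch below only illustrates that scalar map; the threshold value is arbitrary, not the replica-predicted effective noise level.

```python
# Scalar soft-thresholding map: the per-coordinate estimator to which the
# LASSO "decouples" in the large-system analysis (threshold chosen arbitrarily).
import numpy as np

def soft_threshold(r, t):
    """Scalar MAP estimate under an L1 penalty: shrink r toward zero by t."""
    return np.sign(r) * np.maximum(np.abs(r) - t, 0.0)

r = np.linspace(-3.0, 3.0, 7)
print(dict(zip(np.round(r, 1), soft_threshold(r, t=1.0))))
```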
Preferences and their application in evolutionary multiobjective optimization The paper describes a new preference method and its use in multiobjective optimization. These preferences are developed with a goal to reduce the cognitive overload associated with the relative importance of a certain criterion within a multiobjective design environment involving large numbers of objectives. Their successful integration with several genetic-algorithm-based design search and optimi...
New Type-2 Rule Ranking Indices for Designing Parsimonious Interval Type-2 Fuzzy Logic Systems In this paper, we propose two novel indices for type-2 fuzzy rule ranking to identify the most influential fuzzy rules in designing type-2 fuzzy logic systems, and name them as R-values and c-values of fuzzy rules separately. The R-values of type-2 fuzzy rules are obtained by applying QR decomposition in which there is no need to estimate a rank as required in the SVD-QR with column pivoting algorithm. The c-values of type-2 fuzzy rules are suggested to rank rules based on the effects of rule consequents. Experimental results on a signal recovery problem have shown that by using the proposed indices the most influential type-2 fuzzy rules can be effectively selected to construct parsimonious type-2 fuzzy models while the system performances are kept at a satisfied level.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
0
Performance-oriented parameter dimension reduction of VLSI circuits To account for the growing process variability in modern VLSI technologies, circuit models parameterized in a multitude of parametric variations are becoming increasingly indispensable in robust circuit design. However, the high parameter dimensionality can introduce significant complexity and may even render variation-aware performance analysis and optimization completely intractable. We present a performance-oriented parameter dimension reduction framework to reduce the modeling complexity associated with high parameter dimensionality. Our framework has a theoretically sound statistical basis, namely, reduced rank regression (RRR) and its various extensions that we have introduced for more practical VLSI circuit modeling. For a variety of VLSI circuits including interconnects and CMOS digital circuits, it is shown that this parameter reduction framework can provide more than one order of magnitude reduction in parameter dimensionality. Such parameter reduction immediately leads to reduced simulation cost in sampling-based performance analysis, and more importantly, highly efficient parameterized subcircuit models that are instrumental in tackling the complexity of variation-tolerance VLSI system design.
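A generic sketch of plain reduced rank regression, the statistical core named above, on synthetic data: many correlated parameters are compressed into a few linear combinations that still predict the outputs. This is not the paper's circuit-specific extension; all names, sizes, and the noise level are illustrative.

```python
# Reduced rank regression (RRR) sketch: find a rank-r coefficient matrix and
# read off the low-dimensional parameter directions it implies.
import numpy as np

rng = np.random.default_rng(4)
n, p, q, r = 500, 30, 4, 2                      # samples, params, outputs, rank
A_true = rng.standard_normal((p, r))
C_true = rng.standard_normal((r, q))
X = rng.standard_normal((n, p))                 # process-parameter samples
Y = X @ A_true @ C_true + 0.01 * rng.standard_normal((n, q))

B_ols = np.linalg.lstsq(X, Y, rcond=None)[0]    # full-rank regression, p x q
_, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
B_rrr = B_ols @ Vt[:r].T @ Vt[:r]               # rank-r RRR estimate

U, s, Wt = np.linalg.svd(B_rrr)                 # factor to expose reduced params
reduced_dirs = U[:, :r]                         # p-dim -> r-dim projection
Z = X @ reduced_dirs                            # the r reduced parameters
print("reduced parameter matrix shape:", Z.shape)
print("fit error with", r, "reduced parameters:",
      round(np.linalg.norm(Y - X @ B_rrr) / np.linalg.norm(Y), 4))
```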
Accounting for non-linear dependence using function driven component analysis The majority of practical multivariate statistical analyses and optimizations model interdependence among random variables in terms of the linear correlation among them. Though linear correlation is simple to use and evaluate, in several cases non-linear dependence between random variables may be too strong to ignore. In this paper, we propose polynomial correlation coefficients as a simple measure of multivariable non-linear dependence and show that the need for modeling non-linear dependence strongly depends on the end function that is to be evaluated from the random variables. Then, we calculate the errors in estimation which result from assuming independence of components generated by linear de-correlation techniques such as PCA and ICA. The experimental results show that the error predicted by our method is within 1% of the real simulation. In order to deal with non-linear dependence, we further develop a target function driven component analysis algorithm (FCA) to minimize the error caused by ignoring high-order dependence and apply this technique to statistical leakage power analysis and SRAM cell noise margin variation analysis. Experimental results show that the proposed FCA method is more accurate compared to the traditional PCA or ICA.
An On-the-Fly Parameter Dimension Reduction Approach to Fast Second-Order Statistical Static Timing Analysis While first-order statistical static timing analysis (SSTA) techniques enjoy good runtime efficiency desired for tackling large industrial designs, more accurate second-order SSTA techniques have been proposed to improve the analysis accuracy, but at the cost of high computational complexity. Although many sources of variations may impact the circuit performance, considering a large number of inter- and intra-die variations in the traditional SSTA is very challenging. In this paper, we address the analysis complexity brought by high parameter dimensionality in SSTA and propose an accurate yet fast second-order SSTA algorithm based on novel on-the-fly parameter dimension reduction techniques. By developing a reduced rank regression (RRR)-based approach and a method of moments (MOM)-based parameter reduction algorithm within the block-based SSTA flow, we demonstrate that accurate second-order SSTA can be extended to a much higher parameter dimensionality than what is possible before. Our experimental results have shown that the proposed parameter reductions can achieve up to 10times parameter dimension reduction and lead to significantly improved second-order SSTA under a large set of process variations.
Statistical static timing analysis using a skew-normal canonical delay model In its simplest form, a parameterized block based statistical static timing analysis (SSTA) is performed by assuming that both gate delays and the arrival times at various nodes are Gaussian random variables. These assumptions are not true in many cases. Quadratic models are used for more accurate analysis, but at the cost of increased computational complexity. In this paper, we propose a model based on skew-normal random variables. It can take into account the skewness in the gate delay distribution as well as the nonlinearity of the MAX operation. We derive analytical expressions for the moments of the MAX operator based on the conditional expectations. The computational complexity of using this model is marginally higher than the linear model based on Clark's approximations. The results obtained using this model match well with Monte-Carlo simulations.
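For reference, the classical Clark approximation for the first two moments of the MAX of two correlated Gaussian arrival times, the baseline that the skew-normal model above refines, can be checked against Monte Carlo in a few lines; the means, sigmas, and correlation below are made up.

```python
# Clark's formulas for the mean and variance of max(X, Y) of two correlated
# Gaussian arrival times, verified against Monte Carlo sampling.
import numpy as np
from scipy.stats import norm

def clark_max_moments(m1, s1, m2, s2, rho):
    a = np.sqrt(s1**2 + s2**2 - 2 * rho * s1 * s2)
    alpha = (m1 - m2) / a
    mean = m1 * norm.cdf(alpha) + m2 * norm.cdf(-alpha) + a * norm.pdf(alpha)
    second = ((m1**2 + s1**2) * norm.cdf(alpha)
              + (m2**2 + s2**2) * norm.cdf(-alpha)
              + (m1 + m2) * a * norm.pdf(alpha))
    return mean, second - mean**2               # (mean, variance)

m1, s1, m2, s2, rho = 10.0, 1.0, 9.5, 1.5, 0.3
mean, var = clark_max_moments(m1, s1, m2, s2, rho)

rng = np.random.default_rng(5)
cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
xy = rng.multivariate_normal([m1, m2], cov, size=200_000)
mc = np.max(xy, axis=1)
print("Clark      :", round(mean, 3), round(var, 3))
print("Monte Carlo:", round(mc.mean(), 3), round(mc.var(), 3))
```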
Estimation of delay variations due to random-dopant fluctuations in nano-scaled CMOS circuits In nanoscale CMOS circuits the random dopant fluctuations (RDF) cause significant threshold voltage (Vt) variations in transistors. In this paper, we propose a semi-analytical estimation methodology to predict the delay distribution [Mean and Standard Deviation (STD)] of logic circuits considering Vt variation in transistors. The proposed method is fast and can be used to predict delay distributio...
VGTA: Variation Aware Gate Timing Analysis As technology scales down, timing verification of digital integrated circuits becomes an extremely difficult task due to gate and wire variability. Therefore, statistical timing analysis is inevitable. Most timing tools divide the analysis into two parts: 1) interconnect (wire) timing analysis and 2) gate timing analysis. Variational interconnect delay calculation for block-based TA has been recently studied. However, variational gate delay calculation has remained unexplored. In this paper, we propose a new framework to handle the variation-aware gate timing analysis in block-based TA. First, we present an approach to approximate variational RC-load by using a canonical first-order model. Next, an efficient variation-aware effective capacitance calculation based on statistical input transition, statistical gate timing library, and statistical RC-load is presented. In this step, we use a single-iteration Ceff calculation which is efficient and reasonably accurate. Finally we calculate the statistical gate delay and output slew based on the aforementioned model. Experimental results show an average error of 7% for gate delay and output slew with respect to the HSPICE Monte Carlo simulation while the runtime is about 145 times faster.
A New Method for Design of Robust Digital Circuits As technology continues to scale beyond 100nm, there is a significant increase in performance uncertainty of CMOS logic due to process and environmental variations. Traditional circuit optimization methods assuming deterministic gate delays produce a flat "wall" of equally critical paths, resulting in variation-sensitive designs. This paper describes a new method for sizing of digital circuits, with uncertain gate delays, to minimize their performance variation leading to a higher parametric yield. The method is based on adding margins on each gate delay to account for variations and using a new "soft maximum" function to combine path delays at converging nodes. Using analytic models to predict the means and standard deviations of gate delays as posynomial functions of the device sizes, we create a simple, computationally efficient heuristic for uncertainty-aware sizing of digital circuits via Geometric programming. Monte-Carlo simulations on custom 32bit adders and ISCAS'85 benchmarks show that about 10% to 20% delay reduction over deterministic sizing methods can be achieved, without any additional cost in area.
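One common smooth stand-in for the "soft maximum" idea above is the log-sum-exp function; the sketch below combines margin-padded path delays this way and compares the result with the hard max a deterministic sizer would use. The paper's exact functional form may differ, and the delay and margin numbers are made up.

```python
# Smooth "soft maximum" of converging path delays via log-sum-exp.
import numpy as np

def soft_max(delays, k=10.0):
    d = np.asarray(delays, float)
    return np.log(np.sum(np.exp(k * d))) / k   # >= max(d), smooth in d

paths = [1.02, 0.98, 1.01, 0.90]               # nominal path delays (made up)
margins = [0.05, 0.04, 0.06, 0.03]             # per-path variation margins
padded = np.add(paths, margins)
print("hard max:", max(padded))
print("soft max:", round(soft_max(padded, k=20.0), 4))
```

The soft maximum upper-bounds the hard maximum and stays differentiable in the gate sizes, which is what makes it usable inside a posynomial sizing formulation.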
A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods. Our taxonomy is designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can easily be extended to include new algorithms. We have also produced several new multi-frame stereo data sets with ground truth and are making both the code and data sets available on the Web. Finally, we include a comparative evaluation of a large set of today's best-performing stereo algorithms.
Duality Theory in Fuzzy Linear Programming Problems with Fuzzy Coefficients The concept of fuzzy scalar (inner) product that will be used in the fuzzy objective and inequality constraints of the fuzzy primal and dual linear programming problems with fuzzy coefficients is proposed in this paper. We also introduce a solution concept that is essentially similar to the notion of Pareto optimal solution in the multiobjective programming problems by imposing a partial ordering on the set of all fuzzy numbers. We then prove the weak and strong duality theorems for fuzzy linear programming problems with fuzzy coefficients.
NanoFabrics: spatial computing using molecular electronics The continuation of the remarkable exponential increases in processing power over the recent past faces imminent challenges due in part to the physics of deep-submicron CMOS devices and the costs of both chip masks and future fabrication plants. A promising solution to these problems is offered by an alternative to CMOS-based computing, chemically assembled electronic nanotechnology (CAEN). In this paper we outline how CAEN-based computing can become a reality. We briefly describe recent work in CAEN and how CAEN will affect computer architecture. We show how the inherently reconfigurable nature of CAEN devices can be exploited to provide high-density chips with defect tolerance at significantly reduced manufacturing costs. We develop a layered abstract architecture for CAEN-based computing devices and we present preliminary results which indicate that such devices will be competitive with CMOS circuits.
Learning to sense sparse signals: simultaneous sensing matrix and sparsifying dictionary optimization. Sparse signal representation, analysis, and sensing have received a lot of attention in recent years from the signal processing, optimization, and learning communities. On one hand, learning overcomplete dictionaries that facilitate a sparse representation of the data as a linear combination of a few atoms from such a dictionary leads to state-of-the-art results in image and video restoration and classification. On the other hand, the framework of compressed sensing (CS) has shown that sparse signals can be recovered from far fewer samples than those required by the classical Shannon-Nyquist Theorem. The samples used in CS correspond to linear projections obtained by a sensing projection matrix. It has been shown that, for example, a nonadaptive random sampling matrix satisfies the fundamental theoretical requirements of CS, enjoying the additional benefit of universality. On the other hand, a projection sensing matrix that is optimally designed for a certain class of signals can further improve the reconstruction accuracy or further reduce the necessary number of samples. In this paper, we introduce a framework for the joint design and optimization, from a set of training images, of the nonparametric dictionary and the sensing matrix. We show that this joint optimization outperforms both the use of random sensing matrices and those matrices that are optimized independently of the learning of the dictionary. Particular cases of the proposed framework include the optimization of the sensing matrix for a given dictionary as well as the optimization of the dictionary for a predefined sensing environment. The presentation of the framework and its efficient numerical optimization is complemented with numerous examples on classical image datasets.
Fundamentals Of Clinical Methodology: 2. Etiology The concept of etiology is analyzed and the possibilities and limitations of deterministic, probabilistic, and fuzzy etiology are explored. Different kinds of formal structures for the relation of causation are introduced which enable us to explicate the notion of cause on qualitative, comparative, and quantitative levels. The conceptual framework developed is an approach to a theory of causality that may be useful in etiologic research, in building nosological systems, and in differential diagnosis, therapeutic decision-making, and controlled clinical trials. The bearings of the theory are exemplified by examining the current Chlamydia pneumoniae hypothesis on the incidence of myocardial infarction. (C) 1998 Elsevier Science B.V. All rights reserved.
Dominance-based fuzzy rough set analysis of uncertain and possibilistic data tables In this paper, we propose a dominance-based fuzzy rough set approach for the decision analysis of a preference-ordered uncertain or possibilistic data table, which is comprised of a finite set of objects described by a finite set of criteria. The domains of the criteria may have ordinal properties that express preference scales. In the proposed approach, we first compute the degree of dominance between any two objects based on their imprecise evaluations with respect to each criterion. This results in a valued dominance relation on the universe. Then, we define the degree of adherence to the dominance principle by every pair of objects and the degree of consistency of each object. The consistency degrees of all objects are aggregated to derive the quality of the classification, which we use to define the reducts of a data table. In addition, the upward and downward unions of decision classes are fuzzy subsets of the universe. Thus, the lower and upper approximations of the decision classes based on the valued dominance relation are fuzzy rough sets. By using the lower approximations of the decision classes, we can derive two types of decision rules that can be applied to new decision cases.
Subjective Quality Metric For 3d Video Services Three-dimensional (3D) video service is expected to be introduced as a next-generation television service. Stereoscopic video is composed of two 2D video signals for the left and right views, and these 2D video signals are encoded. Video quality between the left and right views is not always consistent because, for example, each view is encoded at a different bit rate. As a result, the video quality difference between the left and right views degrades the quality of stereoscopic video. However, these characteristics have not been thoroughly studied or modeled. Therefore, it is necessary to better understand how the video quality difference affects stereoscopic video quality and to model the video quality characteristics. To do that, we conducted subjective quality assessments to derive subjective video quality characteristics. The characteristics showed that 3D video quality was affected by the difference in video quality between the left and right views, and that when the difference was small, 3D video quality correlated with the highest 2D video quality of the two views. We modeled these characteristics as a subjective quality metric using a training data set. Finally, we verified the performance of our proposed model by applying it to unknown data sets.
Scores: 1.11, 0.11, 0.11, 0.02, 0.002857, 0.000455, 0.000041, 0, 0, 0, 0, 0, 0, 0
Evaluating Government Websites Based On A Fuzzy Multiple Criteria Decision-Making Approach This paper presents a framework of website quality evaluation for measuring the performance of government websites. Multiple criteria decision-making (MCDM) is a widely used tool for evaluating and ranking problems containing multiple, usually conflicting criteria. In line with the multi-dimensional characteristics of website quality, MCDM provides an effective framework for an inter-websites comparison involving the evaluation of multiple attributes. It thus ranks different websites compared in terms of their overall performance. This paper models the inter-website comparison problem as an MCDM problem, and presents a practical and selective approach to deal with it. In addition, fuzzy logic is applied to the subjectivity and vagueness in the assessment process. The proposed framework is effectively illustrated to rate Turkish government websites.
Multi-criteria analysis for a maintenance management problem in an engine factory: rational choice The industrial organization needs to develop better methods for evaluating the performance of its projects. We are interested in the problems related to pieces with differing degrees of dirt. In this direction, we propose and evaluate a maintenance decision problem in an engine factory that is specialized in the production, sale and maintenance of medium- and slow-speed four-stroke engines. The main purpose of this paper is to study the problem by means of the analytic hierarchy process to obtain the weights of the criteria, and the TOPSIS method as the multicriteria decision-making technique to obtain the ranking of the alternatives, when the information is given in linguistic terms.
Project selection for oil-fields development by using the AHP and fuzzy TOPSIS methods The evaluation and selection of projects before an investment decision is customarily done using technical and financial information. In this paper, a new methodology is proposed to provide a simple approach to assess alternative projects and help the decision-maker select the best one for the National Iranian Oil Company, using six criteria for comparing investment alternatives within the AHP and fuzzy TOPSIS techniques. The AHP is used to analyze the structure of the project selection problem and to determine the weights of the criteria, and the fuzzy TOPSIS method is used to obtain the final ranking. This application is conducted to illustrate the utilization of the model for project selection problems. Additionally, in the application, it is shown that the calculation of the criteria weights is important in the fuzzy TOPSIS method and that they can change the ranking. The decision-maker can use these different weight combinations in the decision-making process according to priority.
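To make the AHP weighting step described above concrete, the following is a minimal numpy sketch that derives criteria weights from a pairwise comparison matrix via Saaty's principal-eigenvector method. The 3x3 matrix and the criteria it compares are hypothetical, and this shows only the generic eigenvector calculation, not the paper's full AHP plus fuzzy TOPSIS pipeline.

```python
import numpy as np

def ahp_weights(pairwise):
    """Criteria weights from a pairwise comparison matrix via the
    principal right eigenvector, plus Saaty's consistency index."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))          # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                              # normalise weights to sum to 1
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)      # consistency index
    return w, ci

# Hypothetical pairwise comparison of three project-selection criteria.
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
weights, ci = ahp_weights(A)
print(np.round(weights, 3), round(ci, 4))
```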
Facility location selection using fuzzy TOPSIS under group decisions This work presents a fuzzy TOPSIS model under group decisions for solving the facility location selection problem, where the ratings of various alternative locations under different subjective attributes and the importance weights of all attributes are assessed in linguistic values represented by fuzzy numbers. The objective attributes are transformed into dimensionless indices to ensure compatibility with the linguistic ratings of the subjective attributes. Furthermore, the membership function of the aggregation of the ratings and weights for each alternative location versus each attribute can be developed by interval arithmetic and α-cuts of fuzzy numbers. The ranking method of the mean of the integral values is applied to help derive the ideal and negative-ideal fuzzy solutions to complete the proposed fuzzy TOPSIS model. Finally, a numerical example demonstrates the computational process of the proposed model.
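For orientation, here is a small sketch of the fuzzy TOPSIS mechanics with triangular fuzzy numbers. It uses the common vertex-distance closeness coefficient (Chen-style) rather than the mean-of-integral-values ranking and α-cut aggregation that the paper itself employs, and the ratings, weights and ideal solutions below are illustrative assumptions.

```python
import numpy as np

def tfn_distance(a, b):
    """Vertex distance between two triangular fuzzy numbers (l, m, u)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def fuzzy_topsis(ratings, weights):
    """ratings: (n_alternatives, n_criteria, 3) normalised TFN decision matrix,
    weights: (n_criteria, 3) TFN importance weights.
    Returns closeness coefficients (larger = better)."""
    R = np.asarray(ratings, float)
    W = np.asarray(weights, float)
    V = R * W[None, :, :]                      # weighted normalised fuzzy matrix
    fpis, fnis = np.ones(3), np.zeros(3)       # fuzzy positive/negative ideals
    n_alt, n_crit = V.shape[0], V.shape[1]
    d_plus = np.array([sum(tfn_distance(V[i, j], fpis) for j in range(n_crit))
                       for i in range(n_alt)])
    d_minus = np.array([sum(tfn_distance(V[i, j], fnis) for j in range(n_crit))
                        for i in range(n_alt)])
    return d_minus / (d_plus + d_minus)

# Two candidate locations rated on two benefit criteria; the linguistic ratings
# and weights are assumed to be already converted to TFNs on [0, 1].
ratings = [[(0.5, 0.7, 0.9), (0.3, 0.5, 0.7)],
           [(0.7, 0.9, 1.0), (0.1, 0.3, 0.5)]]
weights = [(0.7, 0.9, 1.0), (0.3, 0.5, 0.7)]
print(np.round(fuzzy_topsis(ratings, weights), 3))
```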
A comparative analysis of score functions for multiple criteria decision making in intuitionistic fuzzy settings The purpose of this paper was to conduct a comparative study of score functions in multiple criteria decision analysis based on intuitionistic fuzzy sets. The concept of score functions has been conceptualized and widely applied to multi-criteria decision-making problems. There are several types of score functions that can identify the mixed results of positive and negative parts in a bi-dimensional framework of intuitionistic fuzzy sets. Considering various perspectives on score functions, the present study adopts an order of preference based on similarity to the ideal solution as the main structure to estimate the importance of different criteria and compute optimal multi-criteria decisions in intuitionistic fuzzy evaluation settings. An experimental analysis is conducted to examine the relationship between the results yielded by different score functions, considering the average Spearman correlation coefficients and contradiction rates. Furthermore, additional discussions clarify the relative differences in the ranking orders obtained from different combinations of numbers of alternatives and criteria as well as different importance conditions.
Incorporating filtering techniques in a fuzzy linguistic multi-agent model for information gathering on the web In (Computing with Words, Wiley, New York, 2001, p. 251; Soft Comput. 6 (2002) 320; Fuzzy Logic and The Internet, Physica-Verlag, Springer, Wurzburg, Berlin, 2003) we presented different fuzzy linguistic multi-agent models for helping users in their information gathering processes on the Web. In this paper we describe a new fuzzy linguistic multi-agent model that incorporates two information filtering techniques in its structure: a content-based filtering agent and a collaborative filtering agent. Both elements are introduced to increase the information filtering possibilities of multi-agent system on the Web and, in such a way, to improve its retrieval issues.
Combining numerical and linguistic information in group decision making People give information about their personal preferences in many different ways, depending on their background. This paper deals with group decision making problems in which the solution depends on information of a different nature, i.e., assuming that the experts express their preferences with numerical or linguistic values. The aim of this paper is to present a proposal for this problem. We introduce a fusion operator for numerical and linguistic information. This operator combines linguistic values (assessed in the same label set) with numerical ones (assessed in the interval (0,1)). It is based on two transformation methods between numerical and linguistic values, which are defined using the concept of the characteristic values proposed in this paper. Its application to group decision making problems is illustrated by means of a particular fusion operator guided by fuzzy majority. Considering that the experts express their opinions by means of fuzzy or linguistic preference relations, this operator is used to develop a choice process for the alternatives, allowing solutions to be obtained in line with the majority of the experts' opinions. © 1998 Elsevier Science Inc. All rights reserved.
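As a rough illustration of transforming between numerical values in (0,1) and linguistic labels, the sketch below uses a hypothetical five-label term set with triangular memberships and takes each label's modal point as its "characteristic value". These choices are assumptions for the example, not the definitions used in the paper.

```python
# A five-label linguistic term set on [0, 1]; each label is a triangular
# membership function (a, b, c). Labels and parameters are assumptions.
LABELS = {
    "very low":  (0.00, 0.00, 0.25),
    "low":       (0.00, 0.25, 0.50),
    "medium":    (0.25, 0.50, 0.75),
    "high":      (0.50, 0.75, 1.00),
    "very high": (0.75, 1.00, 1.00),
}

def tri(x, a, b, c):
    """Membership degree of x in the triangular fuzzy set (a, b, c)."""
    if b == a and x == a:          # left-shouldered label
        return 1.0
    if b == c and x == c:          # right-shouldered label
        return 1.0
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def numeric_to_label(x):
    """Transform a numerical value in (0, 1) into the best-matching label."""
    return max(LABELS, key=lambda name: tri(x, *LABELS[name]))

def label_to_numeric(name):
    """Transform a label into a number, using its modal point as the
    characteristic value (a simple illustrative choice)."""
    return LABELS[name][1]

print(numeric_to_label(0.62))    # 'medium' (membership 0.52 vs 0.48 for 'high')
print(label_to_numeric("high"))  # 0.75
```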
The problem of linguistic approximation in clinical decision making This paper deals with the problem of linguistic approximation in a computerized system in the context of medical decision making. The general problem and a few application-oriented solutions have been treated in the literature. After a review of the main approaches (best fit, successive approximations, piecewise decomposition, preference set, fuzzy chopping) some of the unresolved problems are pointed out. The case of deciding upon various diagnostic abnormalities suggested by the analysis of the electrocardiographic signal is then put forward. The linguistic approximation method used in this situation is finally described. Its main merit is its simple (i.e., easily understood) linguistic output, which uses labels whose meaning is rather well established among the users (i.e., the physicians).
Compressed Sensing for Networked Data Imagine a system with thousands or millions of independent components, all capable of generating and communicating data. A man-made system of this complexity was unthinkable a few decades ago, but today it is a reality - computers, cell phones, sensors, and actuators are all linked to the Internet, and every wired or wireless device is capable of generating and disseminating prodigious volumes of data. This system is not a single centrally-controlled device, rather it is an ever-growing patchwork of autonomous systems and components, perhaps more organic in nature than any human artifact that has come before. And we struggle to manage and understand this creation, which in many ways has taken on a life of its own. Indeed, several international conferences are dedicated to the scientific study of emergent Internet phenomena. This article considers a particularly salient aspect of this struggle that revolves around large-scale distributed sources of data and their storage, transmission, and retrieval. The task of transmitting information from one point to another is a common and well-understood exercise. But the problem of efficiently transmitting or sharing information from and among a vast number of distributed nodes remains a great challenge, primarily because we do not yet have well-developed theories and tools for distributed signal processing, communications, and information theory in large-scale networked systems. The problem is illustrated by a simple example. Consider a network of $n$ nodes, each having a piece of information or data $x_j$, $j = 1,\ldots,n$. These data could be files to be shared, or simply scalar values corresponding to node attributes or sensor measurements. Let us assume that each $x_j$ is a scalar quantity for the sake of this illustration. Collectively these data $x = (x_1,\ldots,x_n)^T$, arranged in a vector, are called networked data to emphasize both the distributed nature of the data and the fact that they may be shared over the underlying communications infrastructure of the network. The networked data vector may be very large; $n$ may be a thousand or a million or more.
Informative Sensing Compressed sensing is a recent set of mathematical results showing that sparse signals can be exactly reconstructed from a small number of linear measurements. Interestingly, for ideal sparse signals with no measurement noise, random measurements allow perfect reconstruction while measurements based on principal component analysis (PCA) or independent component analysis (ICA) do not. At the same time, for other signal and noise distributions, PCA and ICA can significantly outperform random projections in terms of enabling reconstruction from a small number of measurements. In this paper we ask: given the distribution of signals we wish to measure, what is the optimal set of linear projections for compressed sensing? We consider the problem of finding a small number of linear projections that are maximally informative about the signal. Formally, we use the InfoMax criterion and seek to maximize the mutual information between the signal, x, and the (possibly noisy) projection y = Wx. We show that in general the optimal projections are not the principal components of the data nor random projections, but rather a seemingly novel set of projections that capture what is still uncertain about the signal, given the knowledge of the distribution. We present analytic solutions for certain special cases including natural images. In particular, for natural images, the near-optimal projections are bandwise random, i.e., incoherent to the sparse bases at a particular frequency band but with more weight on the low frequencies, which has a physical relation to the multi-resolution representation of images.
Scheduling as a fuzzy multiple criteria optimization problem Real-world scheduling is decision making under vague constraints of different importance, often using uncertain data, where compromises between antagonistic criteria are allowed. The author explains in theory and by detailed examples a new combination of fuzzy set based constraints and iterative improvement repair based heuristics that help to model these scheduling problems. The mathematics needed for a method of eliciting the criteria's importances from human experts are simplified. The author introduces a new consistency test for configuration changes. This test also helps to evaluate the sensitivity to configuration changes. The implementation of these concepts in the fuzzy logic inference processor library FLIP++, in the fuzzy constraint library ConFLIP++, in the dynamic constraint generation library DynaFLIP++, and in the heuristic repair library Déjà Vu is described. All these libraries are implemented in a layered framework enhanced by the common user interface InterFLIP++. The benchmark application to compare the fuzzy constraint iterative improvement repair heuristic with constructive method based on classic constraints is a scheduling system for a continuous caster unit in a steel plant.
Fundamentals Of Clinical Methodology: 2. Etiology The concept of etiology is analyzed and the possibilities and limitations of deterministic, probabilistic, and fuzzy etiology are explored. Different kinds of formal structures for the relation of causation are introduced which enable us to explicate the notion of cause on qualitative, comparative, and quantitative levels. The conceptual framework developed is an approach to a theory of causality that may be useful in etiologic research, in building nosological systems, and in differential diagnosis, therapeutic decision-making, and controlled clinical trials. The bearings of the theory are exemplified by examining the current Chlamydia pneumoniae hypothesis on the incidence of myocardial infarction. (C) 1998 Elsevier Science B.V. All rights reserved.
Fuzzy modeling of system behavior for risk and reliability analysis The main objective of the article is to permit reliability analysts/engineers/managers/practitioners to analyze the failure behavior of a system in a more consistent and logical manner. To this effect, the authors propose a methodological and structured framework, which makes use of both qualitative and quantitative techniques for risk and reliability analysis of the system. The framework has been applied to model and analyze a complex industrial system from a paper mill. In the quantitative framework, after developing the Petri net model of the system, the fuzzy synthesis of failure and repair data (using fuzzy arithmetic operations) has been done. Various system parameters of managerial importance such as repair time, failure rate, mean time between failures, availability, and expected number of failures are computed to quantify the behavior in terms of fuzzy, crisp and defuzzified values. Further, to improve upon the reliability and maintainability characteristics of the system, an in-depth qualitative analysis of the system is carried out using failure mode and effect analysis (FMEA) by listing out all possible failure modes, their causes and effects on system performance. To address the limitations of the traditional FMEA method based on the risk priority number score, a risk ranking approach based on fuzzy and Grey relational analysis is proposed to prioritize failure causes.
Fuzzy optimization of units products in mix-product selection problem using fuzzy linear programming approach In this paper, the modified S-curve membership function methodology is used in a real-life industrial problem of mix-product selection. This problem occurs in production planning management, where a decision maker plays an important role in making decisions in an uncertain environment. As analysts, we try to find a good enough solution for the decision maker to make a final decision. An industrial application of fuzzy linear programming (FLP) through the S-curve membership function has been investigated using a set of real-life data collected from a chocolate manufacturing company. The problem of fuzzy product-mix selection has been defined. The objective of this paper is to find the optimal units of products with a higher level of satisfaction, with vagueness as a key factor. Since several decisions were to be taken, a table of the optimal units of products with respect to vagueness and degree of satisfaction has been defined to identify the solution with a higher level of units of products and a higher degree of satisfaction. The fuzzy outcome shows that higher units of products need not lead to a higher degree of satisfaction. The findings of this work indicate that the optimal decision depends on the vagueness factor in the fuzzy system of the mix-product selection problem. Furthermore, a high level of units of products is obtained when the vagueness is low.
Scores: 1.24, 0.24, 0.06, 0.03, 0.003333, 0.001296, 0.000149, 0.000014, 0, 0, 0, 0, 0, 0
A compressive sensing-based reconstruction approach to network traffic The traffic matrix of a network describes the end-to-end network traffic, which embodies the network-level status of communication networks from origin to destination nodes. It is an important input parameter of network traffic engineering and is very crucial for network operators. However, it is significantly difficult to obtain the accurate end-to-end network traffic, and thus obtaining the traffic matrix precisely is a challenge for operators and researchers. This paper studies a reconstruction method for the end-to-end network traffic based on compressive sensing. First, a detailed method is proposed to select a set of origin-destination flows to measure. Then a reconstruction model is built via these measured origin-destination flows, and a purely data-driven reconstruction algorithm is presented. Finally, we use traffic data from a real backbone network to verify the approach proposed in this paper.
A power laws-based reconstruction approach to end-to-end network traffic Obtaining accurate end-to-end network traffic is a significantly difficult and challenging problem for network operators, although it is one of the most important input parameters for network traffic engineering. With the development of current networks, the characteristics of networks have changed a lot. In this paper, we exploit the characteristics of origin-destination flows and thus grasp the properties of end-to-end network traffic. An important finding of our work is that the sizes of origin-destination flows obey power laws. Taking advantage of this characteristic, we propose a novel approach to select a subset of origin-destination flows which are to be measured directly. Based on the known traffic information, we reconstruct all origin-destination flows via a compressive sensing method. In detail, we combine the power laws and the constraints of compressive sensing (namely the restricted isometry property) together to build the measurement matrix and pick the subset of origin-destination flows. Furthermore, we build a reconstruction model from the known information corresponding to compressive sensing reconstruction algorithms. Finally, we reconstruct all origin-destination flows from the observed results by solving the reconstruction model, and we provide numerical simulation results to validate the performance of our method using real backbone network traffic data. The results illustrate that our method can recover the end-to-end network traffic more accurately than previous methods.
Improved Bounds on Restricted Isometry Constants for Gaussian Matrices The restricted isometry constant (RIC) of a matrix $A$ measures how close to an isometry is the action of $A$ on vectors with few nonzero entries, measured in the $\ell^2$ norm. Specifically, the upper and lower RICs of a matrix $A$ of size $n\times N$ are the maximum and the minimum deviation from unity (one) of the largest and smallest, respectively, square of singular values of all ${N\choose k}$ matrices formed by taking $k$ columns from $A$. Calculation of the RIC is intractable for most matrices due to its combinatorial nature; however, many random matrices typically have bounded RIC in some range of problem sizes $(k,n,N)$. We provide the best known bound on the RIC for Gaussian matrices, which is also the smallest known bound on the RIC for any large rectangular matrix. Our results are built on the prior bounds of Blanchard, Cartis, and Tanner [SIAM Rev., to appear], with improvements achieved by grouping submatrices that share a substantial number of columns.
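The RIC definition above can be checked numerically on tiny problems. The brute-force sketch below enumerates all k-column submatrices and records the extreme squared singular values; it only illustrates the definition (the paper derives analytic bounds rather than enumerating), and the Gaussian test matrix and sizes are illustrative.

```python
import numpy as np
from itertools import combinations

def empirical_ric(A, k):
    """Brute-force lower/upper restricted isometry constants of A at sparsity k:
    the maximum deviation from one of the smallest/largest squared singular
    value over all k-column submatrices. Only feasible for small N and k."""
    lower = upper = 0.0
    for cols in combinations(range(A.shape[1]), k):
        s = np.linalg.svd(A[:, cols], compute_uv=False)
        upper = max(upper, s[0] ** 2 - 1.0)    # largest squared singular value
        lower = max(lower, 1.0 - s[-1] ** 2)   # smallest squared singular value
    return lower, upper

# Gaussian matrix with variance 1/n entries, a standard CS normalisation.
rng = np.random.default_rng(0)
n, N, k = 20, 30, 3
A = rng.standard_normal((n, N)) / np.sqrt(n)
print(empirical_ric(A, k))
```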
Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this preexisting concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to minimizing the nuclear norm and illustrate our results with numerical examples.
Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit This paper demonstrates theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called basis pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
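A minimal numpy sketch of the OMP greedy loop follows: select the column most correlated with the residual, refit by least squares, repeat. The problem sizes and the 4·m·log(d) measurement count in the example are illustrative choices, not the constants analysed in the paper.

```python
import numpy as np

def omp(A, y, m, tol=1e-10):
    """Orthogonal matching pursuit: greedily recover an m-sparse signal x
    from measurements y = A @ x."""
    support, residual = [], y.copy()
    x = np.zeros(A.shape[1])
    for _ in range(m):
        # Select the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Refit y on all selected columns by least squares (the "orthogonal" step).
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

# m-sparse signal in dimension d recovered from on the order of m*log(d)
# random Gaussian measurements.
rng = np.random.default_rng(1)
d, m = 256, 5
n = int(4 * m * np.log(d))
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_true = np.zeros(d)
x_true[rng.choice(d, size=m, replace=False)] = rng.standard_normal(m)
x_hat = omp(A, A @ x_true, m)
print(np.allclose(x_hat, x_true, atol=1e-6))
```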
Variations, margins, and statistics Design margining is used to account for design uncertainties in the measurement of performance, and thereby ensures that actual manufactured parts will operate within predicted bounds. As process and environmental variations become increasingly severe and complex in nanometer process technology, design margining overheads have increased correspondingly. This paper describes the types of process and environmental variations, their impact on performance, and the traditional design margining process used to account for these uncertainties. We consider statistical static timing analysis (SSTA) in the context of its ability to reduce timing margins through more accurate modeling of variations, and quantify potential benefits of SSTA for setup and hold time margin reduction. Combining SSTA with complementary techniques for systematic variation-aware and voltage-variation-aware timing provides meaningful design margin reduction. We introduce the concept of an activity-based operating condition as a supporting construct for variation-aware STA flows.
NIST Net: a Linux-based network emulation tool Testing of network protocols and distributed applications has become increasingly complex, as the diversity of networks and underlying technologies increase, and the adaptive behavior of applications becomes more sophisticated. In this paper, we present NIST Net, a tool to facilitate testing and experimentation with network code through emulation. NIST Net enables experimenters to model and effect arbitrary performance dynamics (packet delay, jitter, bandwidth limitations, congestion, packet loss and duplication) on live IP packets passing through a commodity Linux-based PC router. We describe the emulation capabilities of NIST Net; examine its architecture; and discuss some of the implementation challenges encountered in building such a tool to operate at very high network data rates while imposing minimal processing overhead. Calibration results are provided to quantify the fidelity and performance of NIST Net over a wide range of offered loads (up to 1 Gbps), and a diverse set of emulated performance dynamics.
Scale-Space and Edge Detection Using Anisotropic Diffusion A new definition of scale-space is suggested, and a class of algorithms used to realize a diffusion process is introduced. The diffusion coefficient is chosen to vary spatially in such a way as to encourage intraregion smoothing rather than interregion smoothing. It is shown that the 'no new maxima should be generated at coarse scales' property of conventional scale space is preserved. As the region boundaries in the approach remain sharp, a high-quality edge detector which successfully exploits global information is obtained. Experimental results are shown on a number of images. Parallel hardware implementations are made feasible because the algorithm involves elementary, local operations replicated over the image.
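The diffusion scheme just described admits a very compact sketch: smoothing is driven by a conductance that decays with the local gradient magnitude, so intraregion smoothing dominates and edges stay sharp. The exponential conductance, the parameter values, and the periodic boundaries via np.roll below are common illustrative choices, not necessarily those of the paper.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=30, kappa=0.3, lam=0.2):
    """Perona-Malik style diffusion: the conductance decays with the local
    gradient magnitude, so smoothing happens inside regions rather than
    across edges. Periodic boundaries via np.roll keep the sketch short."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)      # edge-stopping conductance
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u          # north, south, east, west
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Noisy vertical step edge: flat regions are smoothed while the edge stays sharp.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
smoothed = anisotropic_diffusion(noisy)
print(round(float(np.abs(noisy - clean).mean()), 3),
      round(float(np.abs(smoothed - clean).mean()), 3))
```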
Reconstruction of a low-rank matrix in the presence of Gaussian noise. This paper addresses the problem of reconstructing a low-rank signal matrix observed with additive Gaussian noise. We first establish that, under mild assumptions, one can restrict attention to orthogonally equivariant reconstruction methods, which act only on the singular values of the observed matrix and do not affect its singular vectors. Using recent results in random matrix theory, we then propose a new reconstruction method that aims to reverse the effect of the noise on the singular value decomposition of the signal matrix. In conjunction with the proposed reconstruction method we also introduce a Kolmogorov–Smirnov based estimator of the noise variance.
Ranking type-2 fuzzy numbers Type-2 fuzzy sets are a generalization of the ordinary fuzzy sets in which each type-2 fuzzy set is characterized by a fuzzy membership function. In this paper, we consider the problem of ranking a set of type-2 fuzzy numbers. We adopt a statistical viewpoint and interpret each type-2 fuzzy number as an ensemble of ordinary fuzzy numbers. This enables us to define a type-2 fuzzy rank and a type-2 rank uncertainty for each type-2 fuzzy number. We show the reasonableness of the results obtained by examining several test cases.
On Linear and Semidefinite Programming Relaxations for Hypergraph Matching The hypergraph matching problem is to find a largest collection of disjoint hyperedges in a hypergraph. This is a well-studied problem in combinatorial optimization and graph theory with various applications. The best known approximation algorithms for this problem are all local search algorithms. In this paper we analyze different linear and semidefinite programming relaxations for the hypergraph matching problem, and study their connections to the local search method. Our main results are the following: • We consider the standard linear programming relaxation of the problem. We provide an algorithmic proof of a result of Füredi, Kahn and Seymour, showing that the integrality gap is exactly k-1 + 1/k for k-uniform hypergraphs, and is exactly k - 1 for k-partite hypergraphs. This yields an improved approximation algorithm for the weighted 3-dimensional matching problem. Our algorithm combines the use of the iterative rounding method and the fractional local ratio method, showing a new way to round linear programming solutions for packing problems. • We study the strengthening of the standard LP relaxation by local constraints. We show that, even after a linear number of rounds of the Sherali-Adams lift-and-project procedure on the standard LP relaxation, there are k-uniform hypergraphs with integrality gap at least k - 2. On the other hand, we prove that for every constant k, there is a strengthening of the standard LP relaxation by only a polynomial number of constraints, with integrality gap at most (k + 1)/2 for k-uniform hypergraphs. The construction uses a result in extremal combinatorics. • We consider the standard semidefinite programming relaxation of the problem. We prove that the Lovász ϑ-function provides an SDP relaxation with integrality gap at most (k + 1)/2. The proof gives an indirect way (not by a rounding algorithm) to bound the ratio between any local optimal solution and any optimal SDP solution. This shows a new connection between local search and linear and semidefinite programming relaxations.
Induced uncertain linguistic OWA operators applied to group decision making The ordered weighted averaging (OWA) operator was developed by Yager [IEEE Trans. Syst., Man, Cybernet. 18 (1988) 183]. Later, Yager and Filev [IEEE Trans. Syst., Man, Cybernet.-Part B 29 (1999) 141] introduced a more general class of OWA operators called the induced ordered weighted averaging (IOWA) operators, which take as their argument pairs, called OWA pairs, in which one component is used to induce an ordering over the second components which are exact numerical values and then aggregated. The aim of this paper is to develop some induced uncertain linguistic OWA (IULOWA) operators, in which the second components are uncertain linguistic variables. Some desirable properties of the IULOWA operators are studied, and then, the IULOWA operators are applied to group decision making with uncertain linguistic information.
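The induced-ordering idea is easy to see in code. The sketch below implements a plain IOWA aggregation with numeric arguments (the paper's IULOWA operators aggregate uncertain linguistic variables instead); the confidence values used to induce the order and the weight vector are hypothetical.

```python
def iowa(pairs, weights):
    """Induced OWA: each argument arrives as an (order-inducing value, argument)
    pair; arguments are reordered by decreasing inducing value and then
    combined with the OWA weighting vector."""
    ordered = [a for _, a in sorted(pairs, key=lambda p: p[0], reverse=True)]
    return sum(w * a for w, a in zip(weights, ordered))

# Aggregate three expert scores, inducing the order by each expert's
# (hypothetical) confidence rather than by the scores themselves.
pairs = [(0.9, 7.0), (0.4, 5.0), (0.7, 8.0)]   # (confidence, score)
weights = [0.5, 0.3, 0.2]                      # OWA weights, summing to 1
print(iowa(pairs, weights))                    # 0.5*7 + 0.3*8 + 0.2*5 = 6.9
```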
Heden's bound on maximal partial spreads We prove Heden's result that the deficiency δ of a maximal partial spread in PG(3, q) is greater than 1 + ½(1+√5)√q unless δ−1 is a multiple of p, where q = p^n. When q is odd and not a square, we are able to improve this lower bound to roughly √(3q).
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
Scores: 1.2, 0.1, 0.04, 0.004878, 0.000778, 0, 0, 0, 0, 0, 0, 0, 0, 0
Numerical integration of oscillatory Airy integrals with singularities on an infinite interval. This work is devoted to the quadrature rules and asymptotic expansions for two classes of highly oscillatory Airy integrals on an infinite interval. We first derive two important asymptotic expansions in inverse powers of the frequency ω. Then, based on structure characteristics of the two asymptotic expansions in inverse powers of the frequency ω, both the so-called Filon-type method and the more efficient Clenshaw–Curtis–Filon-type method are introduced and analyzed. The required moments in the former can be explicitly expressed by the Meijer G-functions. The latter can be implemented in O(NlogN) operations, based on fast Fourier transform (FFT) and fast computation of the modified moments. Here, we can construct two useful recurrence relations for computing the required modified moments accurately, with the help of the Airy’s equation and some properties of the Chebyshev polynomials. Particularly, we also provide their error analyses in inverse powers of the frequency ω. Furthermore, the presented error analysis shows the advantageous property that the accuracy improves greatly as ω increases. Numerical examples are provided to illustrate the efficiency and accuracy of the proposed methods.
On quadrature of highly oscillatory integrals with logarithmic singularities In this paper a quadrature rule is discussed for highly oscillatory integrals with logarithmic singularities. Its error depends on the frequency ω, and the computation of its moments is given. The new rule is implemented by interpolating f at the Chebyshev nodes and the singular point, where the interpolation polynomial satisfies some conditions. Numerical experiments confirm the efficiency of the rule for obtaining the approximations.
Uniform approximation to Cauchy principal value integrals with logarithmic singularity. An approximation of Clenshaw–Curtis type is given for Cauchy principal value integrals of logarithmically singular functions $I(f;c) = \mathrm{p.v.}\!\int_{-1}^{1} f(x)\,\frac{\log|x-c|}{x-c}\,dx$ $(c\in(-1,1))$ with a given function f. Using a polynomial $p_N$ of degree N interpolating f at the Chebyshev nodes we obtain an approximation $I(p_N;c)\cong I(f;c)$. We expand $p_N$ in terms of Chebyshev polynomials with O(N log N) computations by using the fast Fourier transform. Our method is efficient for smooth functions f, for which $p_N$ converges to f fast as N grows, and so simple to implement. This is achieved by exploiting three-term inhomogeneous recurrence relations in three stages to evaluate $I(p_N;c)$. For f(z) analytic on the interval [−1,1] in the complex plane z, the error of the approximation $I(p_N;c)$ is shown to be bounded uniformly. Using numerical examples we demonstrate the performance of the present method.
On the convergence rate of Clenshaw-Curtis quadrature for integrals with algebraic endpoint singularities. In this paper, we are concerned with Clenshaw–Curtis quadrature for integrals with algebraic endpoint singularities. An asymptotic error expansion and convergence rate are derived by combining a delicate analysis of the Chebyshev coefficients of functions with algebraic endpoint singularities and the aliasing formula of Chebyshev polynomials. Numerical examples are provided to confirm our analysis.
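As background for the Clenshaw–Curtis-type rules discussed in the surrounding abstracts, here is a sketch of the basic rule for a smooth integrand on [-1, 1]: interpolate at Chebyshev points, expand in Chebyshev polynomials, and integrate term by term against the plain moments. The singular and oscillatory variants of the papers replace these moments with modified moments; the sketch below also uses an O(N^2) cosine sum for clarity where production code would use the FFT.

```python
import numpy as np

def clenshaw_curtis(f, N):
    """Basic Clenshaw-Curtis approximation of the integral of f over [-1, 1]
    using N+1 Chebyshev extreme points."""
    j = np.arange(N + 1)
    x = np.cos(np.pi * j / N)                       # Chebyshev extreme points
    fx = f(x)
    k = np.arange(N + 1)
    C = np.cos(np.pi * np.outer(k, j) / N)          # T_k evaluated at the nodes
    halved = np.ones(N + 1); halved[0] = halved[-1] = 0.5
    scale = np.full(N + 1, 2.0 / N); scale[0] = scale[-1] = 1.0 / N
    c = scale * (C @ (halved * fx))                 # Chebyshev coefficients
    m = np.zeros(N + 1)                             # moments of T_k over [-1, 1]
    even = (k % 2 == 0)
    m[even] = 2.0 / (1.0 - k[even].astype(float) ** 2)
    return float(c @ m)

print(clenshaw_curtis(np.exp, 16), np.e - 1.0 / np.e)   # both about 2.3504
```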
Filon-Clenshaw-Curtis rules for a class of highly-oscillatory integrals with logarithmic singularities. In this work we propose and analyse a numerical method for computing a family of highly oscillatory integrals with logarithmic singularities. For these quadrature rules we derive error estimates in terms of N, the number of nodes, k the rate of oscillations and a Sobolev-like regularity of the function. We prove that the method is not only robust but the error even decreases, for fixed N, as k increases. Practical issues about the implementation of the rule are also covered in this paper by: (a) writing down ready-to-implement algorithms; (b) analysing the numerical stability of the computations and (c) estimating the overall computational cost. We finish by showing some numerical experiments which illustrate the theoretical results presented in this paper.
A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods. Our taxonomy is designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can easily be extended to include new algorithms. We have also produced several new multi-frame stereo data sets with ground truth and are making both the code and data sets available on the Web. Finally, we include a comparative evaluation of a large set of today's best-performing stereo algorithms.
Singularity detection and processing with wavelets The mathematical characterization of singularities with Lipschitz exponents is reviewed. Theorems that estimate local Lipschitz exponents of functions from the evolution across scales of their wavelet transform are reviewed. It is then proven that the local maxima of the wavelet transform modulus detect the locations of irregular structures and provide numerical procedures to compute their Lipschitz exponents. The wavelet transform of singularities with fast oscillations has a particular behavior that is studied separately. The local frequency of such oscillations is measured from the wavelet transform modulus maxima. It has been shown numerically that one- and two-dimensional signals can be reconstructed, with a good approximation, from the local maxima of their wavelet transform modulus. As an application, an algorithm is developed that removes white noises from signals by analyzing the evolution of the wavelet transform maxima across scales. In two dimensions, the wavelet transform maxima indicate the location of edges in images.
Cubature Kalman Filters In this paper, we present a new nonlinear filter for high-dimensional state estimation, which we have named the cubature Kalman filter (CKF). The heart of the CKF is a spherical-radial cubature rule, which makes it possible to numerically compute multivariate moment integrals encountered in the nonlinear Bayesian filter. Specifically, we derive a third-degree spherical-radial cubature rule that provides a set of cubature points scaling linearly with the state-vector dimension. The CKF may therefore provide a systematic solution for high-dimensional nonlinear filtering problems. The paper also includes the derivation of a square-root version of the CKF for improved numerical stability. The CKF is tested experimentally in two nonlinear state estimation problems. In the first problem, the proposed cubature rule is used to compute the second-order statistics of a nonlinearly transformed Gaussian random variable. The second problem addresses the use of the CKF for tracking a maneuvering aircraft. The results of both experiments demonstrate the improved performance of the CKF over conventional nonlinear filters.
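The heart of the filter described above, the third-degree spherical-radial cubature rule, reduces to generating 2n equally weighted points and pushing them through the nonlinearity. The sketch below shows only this moment-propagation core, not the full predict/update recursion, and the polar-to-Cartesian test function and noise levels are illustrative assumptions.

```python
import numpy as np

def cubature_points(mean, cov):
    """Third-degree spherical-radial cubature rule: 2n equally weighted points
    located at mean +/- sqrt(n) * (columns of a square root of cov)."""
    n = mean.size
    S = np.linalg.cholesky(cov)                             # cov = S @ S.T
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])    # (n, 2n) directions
    return mean[:, None] + S @ xi                           # (n, 2n) points

def propagate(mean, cov, g):
    """Approximate mean and covariance of g(x) for x ~ N(mean, cov) using the
    cubature points (the multivariate moment integrals inside the CKF)."""
    X = cubature_points(mean, cov)
    Y = np.column_stack([g(X[:, i]) for i in range(X.shape[1])])
    m_y = Y.mean(axis=1)                       # equal weights 1/(2n)
    D = Y - m_y[:, None]
    return m_y, D @ D.T / X.shape[1]

# Polar-to-Cartesian conversion, a standard nonlinear transformation test.
g = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
mean = np.array([1.0, np.pi / 4])
cov = np.diag([0.01, 0.01])
m_y, P_y = propagate(mean, cov, g)
print(np.round(m_y, 4))
print(np.round(P_y, 5))
```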
Numerical Integration using Sparse Grids We present and review algorithms for the numerical integration of multivariate functions defined over d-dimensional cubes using several variants of the sparse grid method first introduced by Smolyak [51]. In this approach, multivariate quadrature formulas are constructed using combinations of tensor products of suited one-dimensional formulas. The computing cost is almost independent of the dimension of the problem if the function under consideration has bounded mixed derivatives. We suggest...
Optimal design of a CMOS op-amp via geometric programming We describe a new method for determining component values and transistor dimensions for CMOS operational amplifiers (op-amps). We observe that a wide variety of design objectives and constraints have a special form, i.e., they are posynomial functions of the design variables. As a result, the amplifier design problem can be expressed as a special form of optimization problem called geometric programming, for which very efficient global optimization methods have been developed. As a consequence we can efficiently determine globally optimal amplifier designs or globally optimal tradeoffs among competing performance measures such as power, open-loop gain, and bandwidth. Our method, therefore, yields completely automated sizing of (globally) optimal CMOS amplifiers, directly from specifications. In this paper, we apply this method to a specific widely used operational amplifier architecture, showing in detail how to formulate the design problem as a geometric program. We compute globally optimal tradeoff curves relating performance measures such as power dissipation, unity-gain bandwidth, and open-loop gain. We show how the method can be used to size robust designs, i.e., designs guaranteed to meet the specifications for a variety of process conditions and parameters.
Deblurring from highly incomplete measurements for remote sensing When we take photos, we often get blurred pictures because of hand shake, motion, insufficient light, unsuited focal length, or other disturbances. Recently, a compressed-sensing (CS) theorem which provides a new sampling theory for data acquisition has been applied for medical and astronomic imaging. The CS makes it possible to take superresolution photos using only one or a few pixels, rather th...
Real-Time Convex Optimization in Signal Processing This article shows the potential for convex optimization methods to be much more widely used in signal processing. In particular, automatic code generation makes it easier to create convex optimization solvers that are made much faster by being designed for a specific problem family. The disciplined convex programming framework that has been shown useful in transforming problems to a standard form...
Group decision-making model using fuzzy multiple attributes analysis for the evaluation of advanced manufacturing technology Selection of advanced manufacturing technology is important for improving manufacturing system competitiveness. This study builds a group decision-making model using fuzzy multiple attributes analysis to evaluate the suitability of manufacturing technology. Since numerous attributes have been considered in evaluating the manufacturing technology suitability, most information available in this stage is subjective and imprecise, and fuzzy sets theory provides a mathematical framework for modeling imprecision and vagueness. The proposed approach involved developing a fusion method of fuzzy information, which was assessed using both linguistic and numerical scales. In addition, an interactive decision analysis is developed to make a consistent decision. When evaluating the suitability of manufacturing technology, it may be necessary to improve upon the technology, and naturally advanced manufacturing technology is seen as the best direction for improvement. The flexible manufacturing system adopted in the Taiwanese bicycle industry is used in this study to illustrate the computational process of the proposed method. The results of this study are more objective and unbiased, owing to being generated by a group of decision-makers.
Soft computing based on interval valued fuzzy ANP-A novel methodology The Analytic Network Process (ANP) is a multi-criteria decision making (MCDM) tool which takes complex relationships among parameters into account. In this paper, we develop the interval-valued fuzzy ANP (IVF-ANP) to solve MCDM problems, since it allows interdependent influences to be specified in the model and generalizes the supermatrix approach. Furthermore, performance rating values as well as the weights of criteria are linguistic terms which can be expressed as IVF numbers (IVFN). Moreover, a novel methodology is proposed for solving MCDM problems. In the proposed methodology, the weights of the criteria are determined by applying the IVF-ANP method. Then, we appraise the performance of alternatives against criteria via linguistic variables which are expressed as triangular interval-valued fuzzy numbers. Afterward, the final ranking of the alternatives is obtained by utilizing the IVF weights obtained from IVF-ANP and applying the IVF-TOPSIS and IVF-VIKOR methods. Additionally, to demonstrate the procedural implementation of the proposed model and its effectiveness, we apply it to a case study assessing the performance of property responsibility insurance companies.
Scores: 1.1, 0.1, 0.1, 0.05, 0.02, 0, 0, 0, 0, 0, 0, 0, 0, 0
On Combining Neuro-Fuzzy Architectures with the Rough Set Theory to Solve Classification Problems with Incomplete Data This paper presents a new approach to fuzzy classification in the case of missing features. The rough set theory is incorporated into neuro-fuzzy structures and the rough-neuro-fuzzy classifier is derived. The architecture of the classifier is determined by the MICOG (modified indexed center of gravity) defuzzification method. The structure of the classifier is presented in a general form which includes both the Mamdani approach and the logical approach, based on genuine fuzzy implications. A theorem, which allows one to determine the structure of rough-neuro-fuzzy classifiers based on the MICOG defuzzification, is given and proven. Specific rough-neuro-fuzzy structures based on the Larsen rule, the Reichenbach and the Kleene-Dienes implications are given in detail. In the experiments it is shown that the classifier with the Dubois-Prade fuzzy implication is characterized by the best performance in the case of missing features.
An interval type-2 fuzzy logic system-based method for prediction interval construction. Highlights: quantification of uncertainties using prediction intervals; interval type-2 fuzzy logic system-based prediction intervals; training of interval type-2 fuzz...
Forecasting stock index price based on M-factors fuzzy time series and particle swarm optimization. In real time, one observation always relies on several observations. To improve the forecasting accuracy, all these observations can be incorporated in forecasting models. Therefore, in this study, we have intended to introduce a new Type-2 fuzzy time series model that can utilize more observations in forecasting. Later, this Type-2 model is enhanced by employing particle swarm optimization (PSO) technique. The main motive behind the utilization of the PSO with the Type-2 model is to adjust the lengths of intervals in the universe of discourse that are employed in forecasting, without increasing the number of intervals. The daily stock index price data set of SBI (State Bank of India) is used to evaluate the performance of the proposed model. The proposed model is also validated by forecasting the daily stock index price of Google. Our experimental results demonstrate the effectiveness and robustness of the proposed model in comparison with existing fuzzy time series models and conventional time series models.
Accuracy and complexity evaluation of defuzzification strategies for the discretised interval type-2 fuzzy set. The work reported in this paper addresses the challenge of the efficient and accurate defuzzification of discretised interval type-2 fuzzy sets. The exhaustive method of defuzzification for type-2 fuzzy sets is extremely slow, owing to its enormous computational complexity. Several approximate methods have been devised in response to this bottleneck. In this paper we survey four alternative strategies for defuzzifying an interval type-2 fuzzy set: (1) The Karnik-Mendel Iterative Procedure, (2) the Wu-Mendel Approximation, (3) the Greenfield-Chiclana Collapsing Defuzzifier, and (4) the Nie-Tan Method.We evaluated the different methods experimentally for accuracy, by means of a comparative study using six representative test sets with varied characteristics, using the exhaustive method as the standard. A preliminary ranking of the methods was achieved using a multicriteria decision making methodology based on the assignment of weights according to performance. The ranking produced, in order of decreasing accuracy, is (1) the Collapsing Defuzzifier, (2) the Nie-Tan Method, (3) the Karnik-Mendel Iterative Procedure, and (4) the Wu-Mendel Approximation.Following that, a more rigorous analysis was undertaken by means of the Wilcoxon Nonparametric Test, in order to validate the preliminary test conclusions. It was found that there was no evidence of a significant difference between the accuracy of the collapsing and Nie-Tan Methods, and between that of the Karnik-Mendel Iterative Procedure and the Wu-Mendel Approximation. However, there was evidence to suggest that the collapsing and Nie-Tan Methods are more accurate than the Karnik-Mendel Iterative Procedure and the Wu-Mendel Approximation.In relation to efficiency, each method's computational complexity was analysed, resulting in a ranking (from least computationally complex to most computationally complex) as follows: (1) the Nie-Tan Method, (2) the Karnik-Mendel Iterative Procedure (lowest complexity possible), (3) the Greenfield-Chiclana Collapsing Defuzzifier, (4) the Karnik-Mendel Iterative Procedure (highest complexity possible), and (5) the Wu-Mendel Approximation. (C) 2013 Elsevier Inc. All rights reserved.
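Of the four strategies compared above, the Nie-Tan method is the simplest to state in code: average the lower and upper membership functions and take the centroid of the resulting type-1 set. The discretised interval type-2 set below is hypothetical, used only to exercise the formula.

```python
import numpy as np

def nie_tan(x, mu_lower, mu_upper):
    """Nie-Tan defuzzification of a discretised interval type-2 fuzzy set:
    average the lower and upper membership functions and take the centroid
    of the resulting type-1 set."""
    x = np.asarray(x, float)
    mu = (np.asarray(mu_lower, float) + np.asarray(mu_upper, float)) / 2.0
    return float(np.sum(x * mu) / np.sum(mu))

# A hypothetical discretised IT2 set on [0, 10]; the lower membership function
# is a scaled copy of the upper one, giving a simple footprint of uncertainty.
x = np.linspace(0.0, 10.0, 11)
mu_upper = np.array([0, 0.2, 0.5, 0.8, 1, 1, 1, 0.8, 0.5, 0.2, 0], dtype=float)
mu_lower = 0.6 * mu_upper
print(nie_tan(x, mu_lower, mu_upper))   # 5.0 by symmetry
```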
Multi-attribute group decision making models under interval type-2 fuzzy environment Interval type-2 fuzzy sets (IT2 FSs) are a very useful means to depict the decision information in the process of decision making. In this article, we investigate the group decision making problems in which all the information provided by the decision makers (DMs) is expressed as IT2 fuzzy decision matrices, and the information about attribute weights is partially known, which may be constructed by various forms. We first use the IT2 fuzzy weighted arithmetic averaging operator to aggregate all individual IT2 fuzzy decision matrices provided by the DMs into the collective IT2 fuzzy decision matrix, then we utilize the ranking-value measure to calculate the ranking value of each attribute value and construct the ranking-value matrix of the collective IT2 fuzzy decision matrix. Based on the ranking-value matrix and the given attribute weight information, we establish some optimization models to determine the weights of attributes. Furthermore, we utilize the obtained attribute weights and the IT2 fuzzy weighted arithmetic average operator to fuse the IT2 fuzzy information in the collective IT2 fuzzy decision matrix to get the overall IT2 fuzzy values of alternatives by which the ranking of all the given alternatives can be found. Finally, we give an illustrative example.
The concept of a linguistic variable and its application to approximate reasoning-III By a linguistic variable we mean a variable whose values are words or sentences in a natural or artificial language. For example, Age is a linguistic variable if its values are linguistic rather than numerical, i.e., young, not young, very young, quite young, old, not very old and not very young, etc., rather than 20, 21, 22, 23, .... In more specific terms, a linguistic variable is characterized by a quintuple (𝒳, T(𝒳), U, G, M) in which 𝒳 is the name of the variable; T(𝒳) is the term-set of 𝒳, that is, the collection of its linguistic values; U is a universe of discourse; G is a syntactic rule which generates the terms in T(𝒳); and M is a semantic rule which associates with each linguistic value X its meaning, M(X), where M(X) denotes a fuzzy subset of U. The meaning of a linguistic value X is characterized by a compatibility function, c: U → [0, 1], which associates with each u in U its compatibility with X. Thus, the compatibility of age 27 with young might be 0.7, while that of 35 might be 0.2. The function of the semantic rule is to relate the compatibilities of the so-called primary terms in a composite linguistic value, e.g., young and old in not very young and not very old, to the compatibility of the composite value. To this end, the hedges such as very, quite, extremely, etc., as well as the connectives and and or are treated as nonlinear operators which modify the meaning of their operands in a specified fashion. The ...
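The hedge-as-operator idea lends itself to a tiny sketch: a compatibility function for the primary term young, with very as concentration (squaring), not as complement, and the connectives as min/max. The particular curve for young and the squaring hedge are illustrative choices, not the paper's exact definitions.

```python
import numpy as np

def young(age):
    """Compatibility of an age with the primary term 'young'; the curve
    (flat up to 25, then decaying) is an illustrative choice only."""
    return 1.0 / (1.0 + ((np.maximum(age, 25.0) - 25.0) / 5.0) ** 2)

# Hedges and connectives act as operators on compatibility values.
very = lambda mu: mu ** 2          # concentration
not_ = lambda mu: 1.0 - mu         # complement
and_ = np.minimum                  # 'and' as min
or_ = np.maximum                   # 'or' as max

for age in (27.0, 35.0):
    mu = float(young(age))
    # Print compatibility with: young, very young, not very young.
    print(age, round(mu, 2), round(float(very(mu)), 2),
          round(float(not_(very(mu))), 2))
```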
Sets with type-2 operations The algebra of truth values of type-2 fuzzy sets consists of all mappings of the unit interval to itself, with type-2 operations that are convolutions of ordinary max and min operations. This paper is concerned with a special subalgebra of this truth value algebra, namely the set of nonzero functions with values in the two-element set {0,1}. This algebra can be identified with the set of all non-empty subsets of the unit interval, but the operations are not the usual union and intersection. We give simplified descriptions of the operations and derive the basic algebraic properties of this algebra, including the identification of its automorphism group. We also discuss some subalgebras and homomorphisms between them and look briefly at t-norms on this algebra of sets.
Fuzzy logic systems for engineering: a tutorial A fuzzy logic system (FLS) is unique in that it is able to simultaneously handle numerical data and linguistic knowledge. It is a nonlinear mapping of an input data (feature) vector into a scalar output, i.e., it maps numbers into numbers. Fuzzy set theory and fuzzy logic establish the specifics of the nonlinear mapping. This tutorial paper provides a guided tour through those aspects of fuzzy sets and fuzzy logic that are necessary to synthesize an FLS. It does this by starting with crisp set theory and dual logic and demonstrating how both can be extended to their fuzzy counterparts. Because engineering systems are, for the most part, causal, we impose causality as a constraint on the development of the FLS. After synthesizing a FLS, we demonstrate that it can be expressed mathematically as a linear combination of fuzzy basis functions, and is a nonlinear universal function approximator, a property that it shares with feedforward neural networks. The fuzzy basis function expansion is very powerful because its basis functions can be derived from either numerical data or linguistic knowledge, both of which can be cast into the forms of IF-THEN rules
Galerkin Finite Element Approximations of Stochastic Elliptic Partial Differential Equations We describe and analyze two numerical methods for a linear elliptic problem with stochastic coefficients and homogeneous Dirichlet boundary conditions. Here the aim of the computations is to approximate statistical moments of the solution, and, in particular, we give a priori error estimates for the computation of the expected value of the solution. The first method generates independent identically distributed approximations of the solution by sampling the coefficients of the equation and using a standard Galerkin finite element variational formulation. The Monte Carlo method then uses these approximations to compute corresponding sample averages. The second method is based on a finite dimensional approximation of the stochastic coefficients, turning the original stochastic problem into a deterministic parametric elliptic problem. A Galerkin finite element method, of either the h- or p-version, then approximates the corresponding deterministic solution, yielding approximations of the desired statistics. We present a priori error estimates and include a comparison of the computational work required by each numerical approximation to achieve a given accuracy. This comparison suggests intuitive conditions for an optimal selection of the numerical approximation.
Optimal design of a CMOS op-amp via geometric programming We describe a new method for determining component values and transistor dimensions for CMOS operational amplifiers (op-amps). We observe that a wide variety of design objectives and constraints have a special form, i.e., they are posynomial functions of the design variables. As a result, the amplifier design problem can be expressed as a special form of optimization problem called geometric programming, for which very efficient global optimization methods have been developed. As a consequence we can efficiently determine globally optimal amplifier designs or globally optimal tradeoffs among competing performance measures such as power, open-loop gain, and bandwidth. Our method, therefore, yields completely automated sizing of (globally) optimal CMOS amplifiers, directly from specifications. In this paper, we apply this method to a specific widely used operational amplifier architecture, showing in detail how to formulate the design problem as a geometric program. We compute globally optimal tradeoff curves relating performance measures such as power dissipation, unity-gain bandwidth, and open-loop gain. We show how the method can be used to size robust designs, i.e., designs guaranteed to meet the specifications for a variety of process conditions and parameters
Sparse Reconstruction by Separable Approximation Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularizer. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard ℓ2-ℓ1 case, our framework yields efficient solution techniques for other regularizers, such as an ℓ∞ norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard ℓ2-ℓ1 problem, as well as being efficient on problems with other separable regularization terms.
User impatience and network performance In this work, we analyze from passive measurements the correlations between the user-induced interruptions of TCP connections and different end-to-end performance metrics. The aim of this study is to assess the possibility for a network operator to take into account the customers' experience for network monitoring. We first observe that the usual connection-level performance metrics of the interrupted connections are not very different, and sometimes better than those of normal connections. However, the request-level performance metrics show stronger correlations between the interruption rates and the network quality-of-service. Furthermore, we show that the user impatience could also be used to characterize the relative sensitivity of data applications to various network performance metrics.
Fuzzy concepts and formal methods: some illustrative examples It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However, some system problems, particularly those drawn from the information systems (IS) problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper identifies the problem types of interest and argues that they are characterised by uncertainty and imprecision. It suggests fuzzy set theory as a useful formalism for modelling aspects of this imprecision. The paper illustrates how a fuzzy logic toolkit for Z can be applied to such problem domains. Several examples are presented illustrating the representation of imprecise concepts as fuzzy sets and relations, and soft pre-conditions and system requirements as a series of linguistically quantified propositions.
Subjective Quality Metric For 3d Video Services Three-dimensional (3D) video service is expected to be introduced as a next-generation television service. Stereoscopic video is composed of two 2D video signals for the left and right views, and these 2D video signals are encoded. Video quality between the left and right views is not always consistent because, for example, each view is encoded at a different bit rate. As a result, the video quality difference between the left and right views degrades the quality of stereoscopic video. However, these characteristics have not been thoroughly studied or modeled. Therefore, it is necessary to better understand how the video quality difference affects stereoscopic video quality and to model the video quality characteristics. To do that, we conducted subjective quality assessments to derive subjective video quality characteristics. The characteristics showed that 3D video quality was affected by the difference in video quality between the left and right views, and that when the difference was small, 3D video quality correlated with the highest 2D video quality of the two views. We modeled these characteristics as a subjective quality metric using a training data set. Finally, we verified the performance of our proposed model by applying it to unknown data sets.
1.1
0.1
0.033333
0.011111
0.003333
0.000075
0
0
0
0
0
0
0
0
Improving adaptive generalized polynomial chaos method to solve nonlinear random differential equations by the random variable transformation technique • gPC and Random Variable Transformation methods are combined. • Nonlinear random differential equations (RDEs) are solved. • Nonlinear uncertainties are considered as inputs in nonlinear RDEs. • Mean and variance of the solution stochastic process are computed. • The method works for highly oscillatory systems.
Some recommendations for applying gPC (generalized polynomial chaos) to modeling: An analysis through the Airy random differential equation In this paper we study the use of the generalized polynomial chaos method to differential equations describing a model that depends on more than one random input. This random input can be in the form of parameters or of initial or boundary conditions. We investigate the effect of the choice of the probability density functions for the inputs on the output stochastic processes. The study is performed on the Airy's differential equation. This equation is a good test case since its solutions are highly oscillatory and errors can develop both in the amplitude and the phase. Several different situations are considered and, finally, conclusions are presented.
Uncertainty quantification in simulations of epidemics using polynomial chaos. Mathematical models based on ordinary differential equations are a useful tool to study the processes involved in epidemiology. Many models consider that the parameters are deterministic variables. But in practice, the transmission parameters present large variability and it is not possible to determine them exactly, and it is necessary to introduce randomness. In this paper, we present an application of the polynomial chaos approach to epidemiological mathematical models based on ordinary differential equations with random coefficients. Taking into account the variability of the transmission parameters of the model, this approach allows us to obtain an auxiliary system of differential equations, which is then integrated numerically to obtain the first- and the second-order moments of the output stochastic processes. A sensitivity analysis based on the polynomial chaos approach is also performed to determine which parameters have the greatest influence on the results. As an example, we will apply the approach to an obesity epidemic model.
Polynomial chaos expansion for sensitivity analysis In this paper, the computation of Sobol's sensitivity indices from the polynomial chaos expansion of a model output involving uncertain inputs is investigated. It is shown that when the model output is smooth with regards to the inputs, a spectral convergence of the computed sensitivity indices is achieved. However, even for smooth outputs the method is limited to a moderate number of inputs, say 10–20, as it becomes computationally too demanding to reach the convergence domain. Alternative methods (such as sampling strategies) are then more attractive. The method is also challenged when the output is non-smooth even when the number of inputs is limited.
Semantics of Context-Free Languages "Meaning" may be assigned to a string in a context-free language by defining "attributes" of the symbols in a derivation tree for that string. The attributes can be defined by functions associated with each production in the grammar. This paper examines the implications of this process when some of the attributes are "synthesized", i.e., defined solely in terms of attributes of the descendants of the corresponding nonterminal symbol, while other attributes are "inherited", i.e., defined in terms of attributes of the ancestors of the nonterminal symbol. An algorithm is given which detects when such semantic rules could possibly lead to circular definition of some attributes. An example is given of a simple programming language defined with both inherited and synthesized attributes, and the method of definition is compared to other techniques for formal specification of semantics which have appeared in the literature. A simple technique for specifying the "meaning" of languages defined by context-free grammars is introduced in Section 1 of this paper, and its basic mathematical properties are investigated in Sections 2 and 3. An example which indicates how the technique can be applied to the formal definition of programming languages is described in Section 4, and finally, Section 5 contains a somewhat biased comparison of the present method to other known techniques for semantic definition. The discussion in this paper is oriented primarily towards programming languages, but the same methods appear to be relevant also in the study of natural languages. 1. Introduction. Let us suppose that we want to give a precise definition of binary notation for numbers. This can be done in many ways, and in this section we want to consider a manner of definition which can be generalized so that the meaning of other notations can be expressed in the same way. One such way to define binary notation is to base a definition on
Parameterized block-based statistical timing analysis with non-Gaussian parameters, nonlinear delay functions Variability of process parameters makes prediction of digital circuit timing characteristics an important and challenging problem in modern chip design. Recently, statistical static timing analysis (statistical STA) has been proposed as a solution. Unfortunately, the existing approaches either do not consider explicit gate delay dependence on process parameters (Liou, et al., 2001), (Orshansky, et al., 2002), (Devgan, et al., 2003), (Agarwal, et al., 2003) or restrict analysis to linear Gaussian parameters only (Visweswariah, et al., 2004), (Chang, et al., 2003). Here the authors extended the capabilities of parameterized block-based statistical STA (Visweswariah, et al., 2004) to handle nonlinear function of delays and non-Gaussian parameters, while retaining maximum efficiency of processing linear Gaussian parameters. The novel technique improves accuracy in predicting circuit timing characteristics and retains such benefits of parameterized block-based statistical STA as an incremental mode of operation, computation of criticality probabilities and sensitivities to process parameter variations. The authors' technique was implemented in an industrial statistical timing analysis tool. The experiments with large digital blocks showed both efficiency and accuracy of the proposed technique.
Fuzzy set methods for qualitative and natural language oriented simulation The author discusses the approach of using fuzzy set theory to create a formal way of viewing the qualitative simulation of models whose states, inputs, outputs, and parameters are uncertain. Simulation was performed using detailed and accurate models, and it was shown how input and output trajectories could reflect linguistic (or qualitative) changes in a system. Uncertain variables are encoded using triangular fuzzy numbers, and three distinct fuzzy simulation approaches (Monte Carlo, correlated and uncorrelated) are defined. The methods discussed are also valid for discrete event simulation; experiments have been performed on the fuzzy simulation of a single server queuing model. In addition, an existing C-based simulation toolkit, SimPack, was augmented to include the capabilities for modeling using fuzzy arithmetic and linguistic association, and a C++ class definition was coded for fuzzy number types
A training algorithm for optimal margin classifiers A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of the classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms.
A review on spectrum sensing for cognitive radio: challenges and solutions Cognitive radio is widely expected to be the next Big Bang in wireless communications. Spectrum sensing, that is, detecting the presence of the primary users in a licensed spectrum, is a fundamental problem for cognitive radio. As a result, spectrum sensing has reborn as a very active research area in recent years despite its long history. In this paper, spectrum sensing techniques from the optimal likelihood ratio test to energy detection, matched filtering detection, cyclostationary detection, eigenvalue-based sensing, joint space-time sensing, and robust sensing methods are reviewed. Cooperative spectrum sensing with multiple receivers is also discussed. Special attention is paid to sensing methods that need little prior information on the source signal and the propagation channel. Practical challenges such as noise power uncertainty are discussed and possible solutions are provided. Theoretical analysis on the test statistic distribution and threshold setting is also investigated.
Sensor Selection via Convex Optimization We consider the problem of choosing a set of k sensor measurements, from a set of m possible or potential sensor measurements, that minimizes the error in estimating some parameters. Solving this problem by evaluating the performance for each of the (m choose k) possible choices of sensor measurements is not practical unless m and k are small. In this paper, we describe a heuristic, based on convex optimization, for approximately solving this problem. Our heuristic gives a subset selection as well as a bound on the best performance that can be achieved by any selection of k sensor measurements. There is no guarantee that the gap between the performance of the chosen subset and the performance bound is always small; but numerical experiments suggest that the gap is small in many cases. Our heuristic method requires on the order of m^3 operations; for m = 1000 possible sensors, we can carry out sensor selection in a few seconds on a 2-GHz personal computer.
Random Alpha Pagerank We suggest a revision to the PageRank random surfer model that considers the influence of a population of random surfers on the PageRank vector. In the revised model, each member of the population has its own teleportation parameter chosen from a probability distribution, and consequently, the ranking vector is random. We propose three algorithms for computing the statistics of the random ranking vector based respectively on (i) random sampling, (ii) paths along the links of the underlying graph, and (iii) quadrature formulas. We find that the expectation of the random ranking vector produces similar rankings to its deterministic analogue, but the standard deviation gives uncorrelated information (under a Kendall-tau metric) with myriad potential uses. We examine applications of this model to web spam.
An overview of quality of experience measurement challenges for video applications in IP networks The increase in multimedia content on the Internet has created a renewed interest in quality assessment. There is however a main difference from the traditional quality assessment approaches, as now, the focus relies on the user perceived quality, opposed to the network centered approach classically proposed. In this paper we overview the most relevant challenges to perform Quality of Experience (QoE) assessment in IP networks and highlight the particular considerations necessary when compared to alternative mechanisms, already deployed, such as Quality of Service (QoS). To assist on the handling of such challenges we first discuss the different approaches to Quality of Experience assessment along with the most relevant QoE metrics, and then we discuss how they are used to provide objective results about user satisfaction.
Fuzzy Concepts and Formal Methods: A Fuzzy Logic Toolkit for Z It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However some system problems, particularly those drawn from the IS problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper suggests fuzzy set theory as a possible representation scheme for this imprecision or approximation. We provide a summary of a toolkit that defines the operators, measures and modifiers necessary for the manipulation of fuzzy sets and relations. We also provide some examples of the laws which establish an isomorphism between the extended notation presented here and conventional Z when applied to boolean sets and relations.
Fuzzy optimization of units products in mix-product selection problem using fuzzy linear programming approach In this paper, the modified S-curve membership function methodology is used in a real life industrial problem of mix product selection. This problem occurs in the production planning management where by a decision maker plays important role in making decision in an uncertain environment. As analysts, we try to find a good enough solution for the decision maker to make a final decision. An industrial application of fuzzy linear programming (FLP) through the S-curve membership function has been investigated using a set of real life data collected from a Chocolate Manufacturing Company. The problem of fuzzy product mix selection has been defined. The objective of this paper is to find an optimal units of products with higher level of satisfaction with vagueness as a key factor. Since there are several decisions that were to be taken, a table for optimal units of products respect to vagueness and degree of satisfaction has been defined to identify the solution with higher level of units of products and with a higher degree of satisfaction. The fuzzy outcome shows that higher units of products need not lead to higher degree of satisfaction. The findings of this work indicates that the optimal decision is depend on vagueness factor in the fuzzy system of mix product selection problem. Further more the high level of units of products obtained when the vagueness is low.
1.2
0.2
0.2
0.006897
0
0
0
0
0
0
0
0
0
0
A model for asynchronous reactive systems and its application to secure message transmission We present a rigorous model for secure reactive systems in asynchronous networks with a sound cryptographic semantics, supporting abstract specifications and the composition of secure systems. This enables modular proofs of security, which is essential in bridging the gap between the rigorous proof techniques of cryptography and tool-supported formal proof techniques. The model follows the general simulatability approach of modern cryptography. A variety of network structures and trust models can be described such as static and adaptive adversaries, some examples of this are given. As an example of our specification methodology we provide an abstract and complete specification for Secure Message Transmission, improving on recent results by Lynch (1999), and verify one concrete implementation. Our proof is based on a general theorem on the security of encryption in a reactive multi-user setting, generalizing a recent result by Bellare et. al (2000)
Universally composable security: a new paradigm for cryptographic protocols We propose a novel paradigm for defining security of cryptographic protocols, called universally composable security. The salient property of universally composable definitions of security is that they guarantee security even when a secure protocol is composed of an arbitrary set of protocols, or more generally when the protocol is used as a component of an arbitrary system. This is an essential property for maintaining security of cryptographic protocols in complex and unpredictable environments such as the Internet. In particular, universally composable definitions guarantee security even when an unbounded number of protocol instances are executed concurrently in an adversarially controlled manner, they guarantee non-malleability with respect to arbitrary protocols, and more. We show how to formulate universally composable definitions of security for practically any cryptographic task. Furthermore, we demonstrate that practically any such definition can be realized using known techniques, as long as only a minority of the participants are corrupted. We then proceed to formulate universally composable definitions of a wide array of cryptographic tasks, including authenticated and secure communication, key-exchange, public-key encryption, signature, commitment, oblivious transfer, zero knowledge and more. We also make initial steps towards studying the realizability of the proposed definitions in various settings.
Deriving Cryptographically Sound Implementations Using Composition and Formally Verified Bisimulation We consider abstract specifications of cryptographic protocols which are both suitable for formal verification and maintain a sound cryptographic semantics. In this paper, we present the first abstract specification for ordered secure message transmission in reactive systems based on the recently published model of Pfitzmann and Waidner. We use their composition theorem to derive a possible implementation whose correctness additionally involves a classical bisimulation, which we formally verify using the theorem prover PVS. The example serves as the first important case study which shows that this approach is applicable in practice, and it is the first example that combines tool-supported formal proof techniques with the rigorous proofs of cryptography.
Clock synchronization with faults and recoveries (extended abstract) We present a convergence-function based clock synchronization algorithm, which is simple, efficient and fault-tolerant. The algorithm is tolerant of failures and allows recoveries, as long as less than a third of the processors are faulty 'at the same time'. Arbitrary (Byzantine) faults are tolerated, without requiring awareness of failure or recovery. In contrast, previous clock synchronization algorithms limited the total number of faults throughout the execution, which is not realistic, or assumed fault detection.The use of our algorithm ensures secure and reliable time services, a requirement of many distributed systems and algorithms. In particular, secure time is a fundamental assumption of proactive secure mechanisms, which are also designed to allow recovery from (arbitrary) faults. Therefore, our work is crucial to realize these mechanisms securely.
Proactive RSA. Distributed threshold protocols that incorporate proactive maintenance can tolerate a very strong “mobile adversary.” This adversary may corrupt all participants throughout the lifetime of the system in a non-monotonic fashion (i.e., recoveries are possible) but the adversary is limited to the number of participants it can corrupt during any short time period. The proactive maintenance assures increased security and availability of the cryptographic primitive. We present a proactive RSA system in which a threshold of servers applies the RSA signature (or decryption) function in a distributed manner. Our protocol enables servers which hold the RSA key distributively to dynamically and cooperatively self-update; it is secure even when a linear number of the servers are corrupted during any time period; it efficiently maintains the security of the function; and it enables continuous function availability (correct efficient function application using the shared key is possible at any time). A major technical difficulty in “proactivizing” RSA was the fact that the servers have to update the “distributed representation” of an RSA key, while not learning the order of the group from which keys are drawn (in order not to compromise the RSA security). We give a distributed threshold RSA method which permits “proactivization”.
An algorithm for pronominal anaphora resolution This paper presents an algorithm for identifying the noun phrase antecedents of third person pronouns and lexical anaphors (reflexives and reciprocals). The algorithm applies to the syntactic representations generated by McCord's Slot Grammar parser and relies on salience measures derived from syntactic structure and a simple dynamic model of attentional state. Like the parser, the algorithm is implemented in Prolog. The authors have tested it extensively on computer manual texts and conducted a blind test on manual text containing 360 pronoun occurrences. The algorithm successfully identifies the antecedent of the pronoun for 86% of these pronoun occurrences. The relative contributions of the algorithm's components to its overall success rate in this blind test are examined. Experiments were conducted with an enhancement of the algorithm that contributes statistically modelled information concerning semantic and real-world relations to the algorithm's decision procedure. Interestingly, this enhancement only marginally improves the algorithm's performance (by 2%). The algorithm is compared with other approaches to anaphora resolution that have been proposed in the literature. In particular, the search procedure of Hobbs' algorithm was implemented in the Slot Grammar framework and applied to the sentences in the blind test set. The authors' algorithm achieves a higher rate of success (4%) than Hobbs' algorithm. The relation of the algorithm to the centering approach is discussed, as well as to models of anaphora resolution that invoke a variety of informational factors in ranking antecedent candidates.
The concept of a linguistic variable and its application to approximate reasoning-III By a linguistic variable we mean a variable whose values are words or sentences in a natural or artificial language. For example, Age is a linguistic variable if its values are linguistic rather than numerical, i.e., young, not young, very young, quite young, old, not very old and not very young, etc., rather than 20, 21, 22, 23, In more specific terms, a linguistic variable is characterized by a quintuple (L, T(L), U, G, M) in which L is the name of the variable; T(L) is the term-set of L, that is, the collection of its linguistic values; U is a universe of discourse; G is a syntactic rule which generates the terms in T(L); and M is a semantic rule which associates with each linguistic value X its meaning, M(X), where M(X) denotes a fuzzy subset of U. The meaning of a linguistic value X is characterized by a compatibility function, c: U → [0,1], which associates with each u in U its compatibility with X. Thus, the compatibility of age 27 with young might be 0.7, while that of 35 might be 0.2. The function of the semantic rule is to relate the compatibilities of the so-called primary terms in a composite linguistic value, e.g., young and old in not very young and not very old, to the compatibility of the composite value. To this end, the hedges such as very, quite, extremely, etc., as well as the connectives and and or are treated as nonlinear operators which modify the meaning of their operands in a specified fashion. The
MapReduce: simplified data processing on large clusters MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.
Analysis of the domain mapping method for elliptic diffusion problems on random domains. In this article, we provide a rigorous analysis of the solution to elliptic diffusion problems on random domains. In particular, based on the decay of the Karhunen-Loève expansion of the domain perturbation field, we establish decay rates for the derivatives of the random solution that are independent of the stochastic dimension. For the implementation of a related approximation scheme, like quasi-Monte Carlo quadrature, stochastic collocation, etc., we propose parametric finite elements to compute the solution of the diffusion problem on each individual realization of the domain generated by the perturbation field. This simplifies the implementation and yields a non-intrusive approach. Having this machinery at hand, we can easily transfer it to stochastic interface problems. The theoretical findings are complemented by numerical examples for both, stochastic interface problems and boundary value problems on random domains.
Sensor Selection via Convex Optimization We consider the problem of choosing a set of k sensor measurements, from a set of m possible or potential sensor measurements, that minimizes the error in estimating some parameters. Solving this problem by evaluating the performance for each of the (m choose k) possible choices of sensor measurements is not practical unless m and k are small. In this paper, we describe a heuristic, based on convex optimization, for approximately solving this problem. Our heuristic gives a subset selection as well as a bound on the best performance that can be achieved by any selection of k sensor measurements. There is no guarantee that the gap between the performance of the chosen subset and the performance bound is always small; but numerical experiments suggest that the gap is small in many cases. Our heuristic method requires on the order of m^3 operations; for m = 1000 possible sensors, we can carry out sensor selection in a few seconds on a 2-GHz personal computer.
Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the $k$ dominant components of the singular value decomposition of an $m \times n$ matrix. (i) For a dense input matrix, randomized algorithms require $O(mn \log(k))$ floating-point operations (flops) in contrast to $O(mnk)$ for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to $O(k)$ passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
Using trapezoids for representing granular objects: Applications to learning and OWA aggregation We discuss the role and benefits of using trapezoidal representations of granular information. We focus on the use of level sets as a tool for implementing many operations on trapezoidal sets. We point out the simplification that the linearity of the trapezoid brings by requiring us to perform operations on only two level sets. We investigate the classic learning algorithm in the case when our observations are granule objects represented as trapezoidal fuzzy sets. An important issue that arises is the adverse effect that very uncertain observations have on the quality of our estimates. We suggest an approach to addressing this problem using the specificity of the observations to control its effect. We next consider the OWA aggregation of information represented as trapezoids. An important problem that arises here is the ordering of the trapezoidal fuzzy sets needed for the OWA aggregation. We consider three approaches to accomplish this ordering based on the location, specificity and fuzziness of the trapezoids. From these three different approaches three fundamental methods of ordering are developed. One based on the mean of the 0.5 level sets, another based on the length of the 0.5 level sets and a third based on the difference in lengths of the core and support level sets. Throughout this work particular emphasis is placed on the simplicity of working with trapezoids while still retaining a rich representational capability.
New Type-2 Rule Ranking Indices for Designing Parsimonious Interval Type-2 Fuzzy Logic Systems In this paper, we propose two novel indices for type-2 fuzzy rule ranking to identify the most influential fuzzy rules in designing type-2 fuzzy logic systems, and name them as R-values and c-values of fuzzy rules separately. The R-values of type-2 fuzzy rules are obtained by applying QR decomposition in which there is no need to estimate a rank as required in the SVD-QR with column pivoting algorithm. The c-values of type-2 fuzzy rules are suggested to rank rules based on the effects of rule consequents. Experimental results on a signal recovery problem have shown that by using the proposed indices the most influential type-2 fuzzy rules can be effectively selected to construct parsimonious type-2 fuzzy models while the system performances are kept at a satisfied level.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.07653
0.0683
0.0683
0.03415
0.013988
0
0
0
0
0
0
0
0
0
Computing generalized belief functions for continuous fuzzy sets Intelligent systems often need to deal with various kinds of uncertain information. It is thus essential to develop evidential reasoning models that (1) can cope with different kinds of uncertain information in a theoretically sound manner and (2) can be implemented efficiently in a computer system. Generalizing the Dempster-Shafer theory to fuzzy sets has been suggested as a promising approach for dealing with probabilistic data, vague concepts, and incomplete information in a uniform framework. However, previous efforts in this area do not preserve an important principle of D-S theory—that belief and plausibility measures are the lower and upper bounds on belief measures. Recently, Yen proposed an alternative approach in which the degree of belief and the degree of plausibility of a fuzzy set are interpreted as its lower and upper belief measure, respectively. This paper briefly describes his generalized D-S reasoning model and discusses the computational aspects of the model. In particular, efficient algorithms are presented for computing the generalized belief function and plausibility functions for strong convex fuzzy sets, which are a wide class of fuzzy sets used most frequently in existing fuzzy intelligent systems. The algorithm not only facilitates the application of the generalized D-S model but also provides the basis for developing efficient algorithms for more general cases.
Generalizing the Dempster-Shafer theory to fuzzy sets With the desire to manage imprecise and vague information in evidential reasoning, several attempts have been made to generalize the Dempster–Shafer (D–S) theory to deal with fuzzy sets. However, the important principle of the D–S theory, that the belief and plausibility functions are treated as lower and upper probabilities, is no longer preserved in these generalizations. A generalization of the D–S theory in which this principle is maintained is described. It is shown that computing the degree of belief in a hypothesis in the D–S theory can be formulated as an optimization problem. The extended belief function is thus obtained by generalizing the objective function and the constraints of the optimization problem. To combine bodies of evidence that may contain vague information, Dempster's rule is extended by 1) combining generalized compatibility relations based on the possibility theory, and 2) normalizing combination results to account for partially conflicting evidence. Our generalization not only extends the application of the D–S theory but also illustrates a way that probability theory and fuzzy set theory can be integrated in a sound manner in order to deal with different kinds of uncertain information in intelligent systems
The concept of a linguistic variable and its application to approximate reasoning—I By a linguistic variable we mean a variable whose values are words or sentences in a natural or artificial language. For example, Age is a linguistic variable if its values are linguistic rather than numerical, i.e., young, not young, very young, quite young, old, not very old and not very young, etc., rather than 20, 21, 22, 23, In more specific terms, a linguistic variable is characterized by a quintuple (L, T(L), U, G, M) in which L is the name of the variable; T(L) is the term-set of L, that is, the collection of its linguistic values; U is a universe of discourse; G is a syntactic rule which generates the terms in T(L); and M is a semantic rule which associates with each linguistic value X its meaning, M(X), where M(X) denotes a fuzzy subset of U. The meaning of a linguistic value X is characterized by a compatibility function, c: U → [0,1], which associates with each u in U its compatibility with X. Thus, the compatibility of age 27 with young might be 0.7, while that of 35 might be 0.2. The function of the semantic rule is to relate the compatibilities of the so-called primary terms in a composite linguistic value-e.g., young and old in not very young and not very old-to the compatibility of the composite value. To this end, the hedges such as very, quite, extremely, etc., as well as the connectives and and or are treated as nonlinear operators which modify the meaning of their operands in a specified fashion. The concept of a linguistic variable provides a means of approximate characterization of phenomena which are too complex or too ill-defined to be amenable to description in conventional quantitative terms. In particular, treating Truth as a linguistic variable with values such as true, very true, completely true, not very true, untrue, etc., leads to what is called fuzzy logic. By providing a basis for approximate reasoning, that is, a mode of reasoning which is not exact nor very inexact, such logic may offer a more realistic framework for human reasoning than the traditional two-valued logic. It is shown that probabilities, too, can be treated as linguistic variables with values such as likely, very likely, unlikely, etc. Computation with linguistic probabilities requires the solution of nonlinear programs and leads to results which are imprecise to the same degree as the underlying probabilities. The main applications of the linguistic approach lie in the realm of humanistic systems-especially in the fields of artificial intelligence, linguistics, human decision processes, pattern recognition, psychology, law, medical diagnosis, information retrieval, economics and related areas.
An Approach to Inference in Approximate Reasoning
The Vienna Definition Language
Fuzzy Logic and the Resolution Principle The relationship between fuzzy logic and two-valued logic in the context of the first order predicate calculus is discussed. It is proved that if every clause in a set of clauses is something more than a "half-truth" and the most reliable clause has truth-value a and the most unreliable clause has truth-value b, then we are guaranteed that all the logical consequences obtained by repeatedly applying the resolution principle will have truth-value between a and b. The significance of this theorem is also discussed.
Fundamentals Of Clinical Methodology: 2. Etiology The concept of etiology is analyzed and the possibilities and limitations of deterministic, probabilistic, and fuzzy etiology are explored. Different kinds of formal structures for the relation of causation are introduced which enable us to explicate the notion of cause on qualitative, comparative, and quantitative levels. The conceptual framework developed is an approach to a theory of causality that may be useful in etiologic research, in building nosological systems, and in differential diagnosis, therapeutic decision-making, and controlled clinical trials. The bearings of the theory are exemplified by examining the current Chlamydia pneumoniae hypothesis on the incidence of myocardial infarction. (C) 1998 Elsevier Science B.V. All rights reserved.
Defuzzification of the discretised generalised type-2 fuzzy set: Experimental evaluation. The work reported in this paper addresses the challenge of the efficient and accurate defuzzification of discretised generalised type-2 fuzzy sets as created by the inference stage of a Mamdani Fuzzy Inferencing System. The exhaustive method of defuzzification for type-2 fuzzy sets is extremely slow, owing to its enormous computational complexity. Several approximate methods have been devised in response to this defuzzification bottleneck. In this paper we begin by surveying the main alternative strategies for defuzzifying a generalised type-2 fuzzy set: (1) Vertical Slice Centroid Type-Reduction; (2) the sampling method; (3) the elite sampling method; and (4) the α-planes method. We then evaluate the different methods experimentally for accuracy and efficiency. For accuracy the exhaustive method is used as the standard. The test results are analysed statistically by means of the Wilcoxon Nonparametric Test and the elite sampling method shown to be the most accurate. In regards to efficiency, Vertical Slice Centroid Type-Reduction is demonstrated to be the fastest technique.
Wireless Sensor Network Lifetime Analysis Using Interval Type-2 Fuzzy Logic Systems Extending the lifetime of the energy constrained wireless sensor networks is a crucial challenge in sensor network research. In this paper, we present a novel approach based on fuzzy logic systems to analyze the lifetime of a wireless sensor network. We demonstrate that a type-2 fuzzy membership function (MF), i.e., a Gaussian MF with uncertain standard deviation (std) is most appropriate to model a single node lifetime in wireless sensor networks. In our research, we study two basic sensor placement schemes: square-grid and hex-grid. Two fuzzy logic systems (FLSs): a singleton type-1 FLS and an interval type-2 FLS are designed to perform lifetime estimation of the sensor network. We compare our fuzzy approach with other nonfuzzy schemes in previous papers. Simulation results show that FLS offers a feasible method to analyze and estimate the sensor network lifetime and the interval type-2 FLS in which the antecedent and the consequent membership functions are modeled as Gaussian with uncertain std outperforms the singleton type-1 FLS and the nonfuzzy schemes.
Generalized theory of uncertainty (GTU)-principal concepts and ideas Uncertainty is an attribute of information. The path-breaking work of Shannon has led to a universal acceptance of the thesis that information is statistical in nature. Concomitantly, existing theories of uncertainty are based on probability theory. The generalized theory of uncertainty (GTU) departs from existing theories in essential ways. First, the thesis that information is statistical in nature is replaced by a much more general thesis that information is a generalized constraint, with statistical uncertainty being a special, albeit important case. Equating information to a generalized constraint is the fundamental thesis of GTU. Second, bivalence is abandoned throughout GTU, and the foundation of GTU is shifted from bivalent logic to fuzzy logic. As a consequence, in GTU everything is or is allowed to be a matter of degree or, equivalently, fuzzy. Concomitantly, all variables are, or are allowed to be granular, with a granule being a clump of values drawn together by a generalized constraint. And third, one of the principal objectives of GTU is achievement of NL-capability, that is, the capability to operate on information described in natural language. NL-capability has high importance because much of human knowledge, including knowledge about probabilities, is described in natural language. NL-capability is the focus of attention in the present paper. The centerpiece of GTU is the concept of a generalized constraint. The concept of a generalized constraint is motivated by the fact that most real-world constraints are elastic rather than rigid, and have a complex structure even when simple in appearance. The paper concludes with examples of computation with uncertain information described in natural language.
Nonparametric multivariate density estimation: a comparative study The paper algorithmically and empirically studies two major types of nonparametric multivariate density estimation techniques, where no assumption is made about the data being drawn from any of known parametric families of distribution. The first type is the popular kernel method (and several of its variants) which uses locally tuned radial basis (e.g., Gaussian) functions to interpolate the multidimensional density; the second type is based on an exploratory projection pursuit technique which interprets the multidimensional density through the construction of several 1D densities along highly “interesting” projections of multidimensional data. Performance evaluations using training data from mixture Gaussian and mixture Cauchy densities are presented. The results show that the curse of dimensionality and the sensitivity of control parameters have a much more adverse impact on the kernel density estimators than on the projection pursuit density estimators
Uncertainty Relations and Sparse Signal Recovery for Pairs of General Signal Sets We present an uncertainty relation for the representation of signals in two different general (possibly redundant or incomplete) signal sets. This uncertainty relation is relevant for the analysis of signals containing two distinct features each of which can be described sparsely in a suitable general signal set. Furthermore, the new uncertainty relation is shown to lead to improved sparsity thresholds for recovery of signals that are sparse in general dictionaries. Specifically, our results improve on the well-known $(1+1/d)/2$ -threshold for dictionaries with coherence $d$ by up to a factor of two. Furthermore, we provide probabilistic recovery guarantees for pairs of general dictionaries that also allow us to understand which parts of a general dictionary one needs to randomize over to “weed out” the sparsity patterns that prohibit breaking the square-root bottleneck.
Solving PDEs with Intrepid Intrepid is a Trilinos package for advanced discretizations of Partial Differential Equations (PDEs). The package provides a comprehensive set of tools for local, cell-based construction of a wide range of numerical methods for PDEs. This paper describes the mathematical ideas and software design principles incorporated in the package. We also provide representative examples showcasing the use of Intrepid both in the context of numerical PDEs and the more general context of data analysis.
Soft computing based on interval valued fuzzy ANP-A novel methodology Analytic Network Process (ANP) is the multi-criteria decision making (MCDM) tool which takes into account such a complex relationship among parameters. In this paper, we develop the interval-valued fuzzy ANP (IVF-ANP) to solve MCDM problems since it allows interdependent influences specified in the model and generalizes on the supermatrix approach. Furthermore, performance rating values as well as the weights of criteria are linguistics terms which can be expressed in IVF numbers (IVFN). Moreover, we present a novel methodology proposed for solving MCDM problems. In proposed methodology by applying IVF-ANP method determined weights of criteria. Then, we appraise the performance of alternatives against criteria via linguistic variables which are expressed as triangular interval-valued fuzzy numbers. Afterward, by utilizing IVF-weights which are obtained from IVF-ANP and applying IVF-TOPSIS and IVF-VIKOR methods achieve final rank for alternatives. Additionally, to demonstrate the procedural implementation of the proposed model and its effectiveness, we apply it on a case study regarding to assessment the performance of property responsibility insurance companies.
1.072956
0.009785
0.001146
0.000417
0.000014
0.000007
0
0
0
0
0
0
0
0
A linguistic decision support model for QoS priorities in networking Networking resources and technologies are mission-critical in organizations, companies, universities, etc. Their relevance implies the necessity of including tools for Quality of Service (QoS) that assure the performance of such critical services. To address this problem and guarantee a sufficient bandwidth transmission for critical applications/services, different strategies and QoS tools based on the administrator's knowledge may be used. However it is common that network administrators might have a nonrealistic view about the needs of users and organizations. Consequently it seems convenient to take into account such users' necessities for traffic prioritization even though they could involve uncertainty and subjectivity. This paper proposes a linguistic decision support model for traffic prioritization in networking, which uses a group decision making process that gathers user's needs in order to improve organizational productivity. This model manages the inherent uncertainty, imprecision and vagueness of users' necessities, modeling the information by means of linguistic information and offering a flexible framework that provides multiple linguistic scales to the experts, according to their degree of knowledge. Thereby, this decision support model will consist of two processes: (i) A linguistic decision analysis process that evaluates and assesses priorities for QoS of the network services according to users and organizations' necessities. (ii) A priority assignment process that sets up the network traffic in agreement with the previous values.
A hierarchical model of a linguistic variable In this work a theoretical hierarchical model of dichotomous linguistic variables is presented. The model incorporates certain features of the approximate reasoning approach and others of the Fuzzy Control approach to Fuzzy Linguistic Variables. It allows sharing of the same hierarchical structure between the syntactic definition of a linguistic variable and its semantic definition given by fuzzy sets. This fact makes it easier to build symbolic operations between linguistic terms with a better grounded semantic interpretation. Moreover, the family of fuzzy sets which gives the semantics of each linguistic term constitutes a multiresolution system, and thanks to that any fuzzy set can be represented in terms of the set of linguistic terms. The model can also be considered a general framework for building more interpretable fuzzy linguistic variables with a high capacity of accuracy, which could be used to build more interpretable Fuzzy Rule Based Systems (FRBS).
A hybrid recommender system for the selective dissemination of research resources in a Technology Transfer Office. Recommender systems can help users access relevant information, and hybrid recommender systems represent a promising solution for many applications. In this paper we propose a hybrid fuzzy linguistic recommender system to help the Technology Transfer Office staff disseminate research resources of interest to users. The system recommends both specialized and complementary research resources to users and, additionally, discovers potential collaboration opportunities for forming multidisciplinary working groups. The system thus helps the Technology Transfer Office staff disseminate research knowledge selectively and improves its information-discovery and personalization capabilities in an academic environment.
A Knowledge Based Recommender System with Multigranular Linguistic Information. Recommender systems are applications that have emerged in the e-commerce area to assist users in their searches in electronic shops. These shops usually offer a wide range of items to satisfy the necessities of a great variety of users. Nevertheless, searching in such a wide range of items can be a very difficult and tedious task. Recommender systems assist users in finding items by means of recommendations based on information provided by different sources, such as other users, experts, etc. Most recommender systems force users to provide their preferences or necessities using a unique numerical scale fixed in advance. Normally, this information is related to opinions, tastes and perceptions, and therefore it is usually better expressed in a qualitative way, with linguistic terms, than in a quantitative way, with precise numbers. In this contribution, we propose a Knowledge Based Recommender System that uses the fuzzy linguistic approach to define a flexible framework that captures the uncertainty of the users' preferences. Thus, this framework allows users to express their necessities on a scale different from the one used to describe the items and closer to their own knowledge.
Incorporating filtering techniques in a fuzzy linguistic multi-agent model for information gathering on the web. In (Computing with Words, Wiley, New York, 2001, p. 251; Soft Comput. 6 (2002) 320; Fuzzy Logic and The Internet, Physica-Verlag, Springer, Wurzburg, Berlin, 2003) we presented different fuzzy linguistic multi-agent models for helping users in their information gathering processes on the Web. In this paper we describe a new fuzzy linguistic multi-agent model that incorporates two information filtering techniques in its structure: a content-based filtering agent and a collaborative filtering agent. Both elements are introduced to increase the information filtering capabilities of the multi-agent system on the Web and, in this way, to improve its retrieval results.
A model based on linguistic 2-tuples for dealing with multigranular hierarchical linguistic contexts in multi-expert decision-making. Problems that deal with multiple sources of linguistic information are often defined in contexts where the linguistic assessments are expressed in linguistic term sets with different granularity of uncertainty and/or semantics (multigranular linguistic contexts). Different approaches have been developed to manage this type of context by unifying the multigranular linguistic information into a unique linguistic term set for easier management of the information. This normalization process can produce a loss of information and hence a lack of precision in the final results. In this paper, we present a type of multigranular linguistic context that we call linguistic hierarchy term sets, such that multigranular linguistic information assessed in these structures can be unified without loss of information. To do so, we use the 2-tuple linguistic representation model. Afterwards, we develop a linguistic decision model for multigranular linguistic contexts and apply it to a multi-expert decision-making problem.
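As a rough illustration of the 2-tuple translation functions (commonly written Δ and Δ⁻¹) that this abstract relies on, the sketch below converts between a numeric aggregation value and a (term, symbolic translation) pair; the 5-label term set is hypothetical.

```python
# A minimal sketch of the 2-tuple linguistic representation: Delta maps a
# value beta in [0, g] to a (term, alpha) pair, Delta_inv maps it back.
# The 5-label term set below is a hypothetical example.
S = ["none", "low", "medium", "high", "perfect"]   # s_0 .. s_4

def delta(beta: float):
    i = int(round(beta))                 # nearest term index
    alpha = round(beta - i, 3)           # symbolic translation in [-0.5, 0.5)
    return S[i], alpha

def delta_inv(label: str, alpha: float) -> float:
    return S.index(label) + alpha

# Aggregate three assessments expressed on S by averaging their indices,
# then translate the result back into a 2-tuple without rounding it away.
assessments = [("low", 0.0), ("medium", 0.2), ("high", -0.4)]
beta = sum(delta_inv(l, a) for l, a in assessments) / len(assessments)
print(delta(beta))    # -> ('medium', -0.067)
```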
Some two-dimensional uncertain linguistic Heronian mean operators and their application in multiple-attribute decision making. The Heronian mean (HM) is an important aggregation operator which has the characteristic of capturing the correlations of the aggregated arguments. In this paper, we first analyze the shortcomings of the existing weighted HM operators, which do not feature reducibility and idempotency. We then propose a new weighted generalized Heronian mean operator and a weighted generalized geometric Heronian mean operator, prove that they satisfy desirable properties such as reducibility, idempotency, monotonicity, and boundedness, and discuss some special cases of these operators. Further, because two-dimensional uncertain linguistic information can conveniently express fuzzy information, we propose the two-dimensional uncertain linguistic weighted generalized Heronian mean (2DULWGHM) operator and the two-dimensional uncertain linguistic weighted generalized geometric Heronian mean (2DULWGGHM) operator, and discuss some desirable properties and special cases of the 2DULWGHM and 2DULWGGHM operators. Moreover, for multiple-attribute decision-making problems in which attribute values take the form of two-dimensional uncertain linguistic variables, some approaches based on the developed operators are proposed. Finally, we give an illustrative example to explain the steps of the developed methods and to discuss the influence of different parameters on the decision-making results.
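For reference, the sketch below implements only the classical scalar Heronian mean and its generalized parametric form; the paper's operators extend these to two-dimensional uncertain linguistic arguments, which is not reproduced here.

```python
# A minimal sketch of the classical Heronian mean and the generalized
# parametric form GHM^{p,q} on crisp numbers in [0, 1]; the linguistic
# extensions of the paper are not shown.
from itertools import combinations_with_replacement

def heronian_mean(a):
    n = len(a)
    pairs = combinations_with_replacement(range(n), 2)   # all i <= j
    return 2.0 / (n * (n + 1)) * sum((a[i] * a[j]) ** 0.5 for i, j in pairs)

def generalized_heronian_mean(a, p=1.0, q=1.0):
    n = len(a)
    pairs = combinations_with_replacement(range(n), 2)
    s = 2.0 / (n * (n + 1)) * sum(a[i] ** p * a[j] ** q for i, j in pairs)
    return s ** (1.0 / (p + q))

print(heronian_mean([0.2, 0.5, 0.8]))
print(generalized_heronian_mean([0.2, 0.5, 0.8], p=2, q=1))
```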
A Consensus Support System Model for Group Decision-Making Problems With Multigranular Linguistic Preference Relations. The group decision-making framework with linguistic preference relations is studied. In this context, we assume that there exist several experts who may have different backgrounds and knowledge to solve a particular problem and, therefore, different linguistic term sets (multigranular linguistic information) could be used to express their opinions. The aim of this paper is to present a consensus support system model to assist the experts in all phases of the consensus reaching process of group decision-making problems with multigranular linguistic preference relations. This consensus support system model is based on i) a multigranular linguistic methodology, ii) two consensus criteria, consensus degrees and proximity measures, and iii) a guidance advice system. The multigranular linguistic methodology permits the unification of the different linguistic domains to facilitate the computation of consensus degrees and proximity measures on the basis of the experts' opinions. The consensus degrees assess the agreement amongst all the experts' opinions, while the proximity measures are used to find out how far the individual opinions are from the group opinion. The guidance advice system integrated in the consensus support system model acts as a feedback mechanism, and it is based on a set of advice rules to help the experts change their opinions and to find out which direction that change should follow in order to obtain the highest degree of consensus possible. There are two main advantages provided by this consensus support system model. Firstly, its ability to cope with group decision-making problems with multigranular linguistic preference relations; and secondly, the figure of the moderator, traditionally present in the consensus reaching process, is replaced by the guidance advice system, so that the whole group decision-making process is automated.
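The sketch below shows one common way to compute a consensus degree over fuzzy preference relations once all opinions have been unified onto [0, 1] (pairwise similarity = 1 minus absolute difference, then averaged); the concrete consensus and proximity measures of the paper may differ in detail, and the example relations are made up.

```python
# A rough sketch of consensus-degree computation over unified fuzzy
# preference relations; the paper's exact consensus/proximity measures
# may differ. Example matrices below are hypothetical.
import numpy as np

def consensus_degree(prefs):
    """prefs: list of (n x n) fuzzy preference relations, one per expert."""
    prefs = [np.asarray(p, dtype=float) for p in prefs]
    pair_sims = []
    for k in range(len(prefs)):
        for l in range(k + 1, len(prefs)):
            pair_sims.append(1.0 - np.abs(prefs[k] - prefs[l]))
    sim = np.mean(pair_sims, axis=0)       # similarity per pair of alternatives
    mask = ~np.eye(sim.shape[0], dtype=bool)
    return sim[mask].mean()                # overall consensus level in [0, 1]

e1 = [[0.5, 0.7, 0.9], [0.3, 0.5, 0.6], [0.1, 0.4, 0.5]]
e2 = [[0.5, 0.6, 0.8], [0.4, 0.5, 0.7], [0.2, 0.3, 0.5]]
print(consensus_degree([e1, e2]))
```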
Uncertainty measures for interval type-2 fuzzy sets Fuzziness (entropy) is a commonly used measure of uncertainty for type-1 fuzzy sets. For interval type-2 fuzzy sets (IT2 FSs), centroid, cardinality, fuzziness, variance and skewness are all measures of uncertainties. The centroid of an IT2 FS has been defined by Karnik and Mendel. In this paper, the other four concepts are defined. All definitions use a Representation Theorem for IT2 FSs. Formulas for computing the cardinality, fuzziness, variance and skewness of an IT2 FS are derived. These definitions should be useful in IT2 fuzzy logic systems design using the principles of uncertainty, and in measuring the similarity between two IT2 FSs.
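As a hedged, discretized illustration of one of these measures, the sketch below reads the cardinality of an IT2 FS as the interval spanned by the cardinalities (areas) of its lower and upper membership functions; the paper's exact formulas for cardinality, fuzziness, variance and skewness may differ, and the membership functions below are hypothetical.

```python
# A hedged sketch: cardinality interval of an interval type-2 fuzzy set
# approximated as [area(LMF), area(UMF)] on a discretized domain. The
# membership functions here are hypothetical triangular shapes.
import numpy as np

x = np.linspace(0.0, 10.0, 1001)
upper = np.clip(1.0 - np.abs(x - 5.0) / 4.0, 0.0, None)   # hypothetical UMF
lower = 0.6 * upper                                         # hypothetical LMF

card_lower = np.trapz(lower, x)
card_upper = np.trapz(upper, x)
print(f"cardinality interval ~ [{card_lower:.3f}, {card_upper:.3f}]")
```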
Multicriteria decision making in energy planning using a modified fuzzy TOPSIS methodology Energy planning is a complex issue which takes technical, economic, environmental and social attributes into account. Selection of the best energy technology requires the consideration of conflicting quantitative and qualitative evaluation criteria. When decision-makers' judgments are under uncertainty, it is relatively difficult for them to provide exact numerical values. The fuzzy set theory is a strong tool which can deal with the uncertainty in case of subjective, incomplete, and vague information. It is easier for an energy planning expert to make an evaluation by using linguistic terms. In this paper, a modified fuzzy TOPSIS methodology is proposed for the selection of the best energy technology alternative. TOPSIS is a multicriteria decision making (MCDM) technique which determines the best alternative by calculating the distances from the positive and negative ideal solutions according to the evaluation scores of the experts. In the proposed methodology, the weights of the selection criteria are determined by fuzzy pairwise comparison matrices. The methodology is applied to an energy planning decision-making problem.
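For orientation, the sketch below follows a basic Chen-style fuzzy TOPSIS with triangular fuzzy numbers and the vertex distance; the paper's modified methodology (fuzzy pairwise-comparison weights, etc.) is more elaborate, and all ratings and weights here are hypothetical.

```python
# A minimal Chen-style fuzzy TOPSIS sketch with triangular fuzzy numbers
# (TFNs) already normalized to [0, 1]; not the exact modified method of
# the paper. All data are made-up illustrations.
import numpy as np

def vertex_dist(a, b):                        # a, b are TFNs (l, m, u)
    return np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

def fuzzy_topsis(ratings, weights):
    """ratings[i][j] and weights[j] are TFNs for alternative i, criterion j."""
    fpis, fnis = (1.0, 1.0, 1.0), (0.0, 0.0, 0.0)   # fuzzy ideal / anti-ideal
    cc = []
    for row in ratings:
        weighted = [tuple(r * w for r, w in zip(tfn, wt))
                    for tfn, wt in zip(row, weights)]
        d_pos = sum(vertex_dist(v, fpis) for v in weighted)
        d_neg = sum(vertex_dist(v, fnis) for v in weighted)
        cc.append(d_neg / (d_pos + d_neg))    # closeness coefficient
    return cc                                 # higher = better alternative

weights = [(0.5, 0.7, 0.9), (0.3, 0.5, 0.7)]            # hypothetical
ratings = [[(0.6, 0.8, 1.0), (0.4, 0.6, 0.8)],          # energy option 1
           [(0.2, 0.4, 0.6), (0.7, 0.9, 1.0)]]          # energy option 2
print(fuzzy_topsis(ratings, weights))
```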
A generalized discrepancy and quadrature error bound An error bound for multidimensional quadrature is derived that includes the Koksma-Hlawka inequality as a special case. This error bound takes the form of a product of two terms. One term, which depends only on the integrand, is defined as a generalized variation. The other term, which depends only on the quadrature rule, is defined as a generalized discrepancy. The generalized discrepancy is a figure of merit for quadrature rules and includes as special cases the Lp-star discrepancy and P that arises in the study of lattice rules.
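For reference, the classical Koksma-Hlawka special case of this product-form bound can be written as follows (notation assumed: V_HK is the variation in the sense of Hardy and Krause, D_N^* the star discrepancy of the point set):

```latex
\left| \frac{1}{N}\sum_{i=1}^{N} f(\mathbf{x}_i)
       - \int_{[0,1]^d} f(\mathbf{x})\,\mathrm{d}\mathbf{x} \right|
\;\le\; V_{\mathrm{HK}}(f)\, D_N^{*}(\mathbf{x}_1,\dots,\mathbf{x}_N)
```

The generalized bound keeps this integrand-factor times point-set-factor structure while replacing the two factors with a generalized variation and a generalized discrepancy.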
Rapid speaker adaptation using compressive sensing Speaker-space-based speaker adaptation methods can obtain good performance even if the amount of adaptation data is limited. However, it is difficult to determine the optimal dimension and basis vectors of the subspace for a particular unknown speaker. Conventional methods, such as eigenvoice (EV) and reference speaker weighting (RSW), can only obtain a sub-optimal speaker subspace. In this paper, we present a new speaker-space-based speaker adaptation framework using compressive sensing. The mean vectors of all mixture components of a conventional Gaussian-Mixture-Model-Hidden-Markov-Model (GMM-HMM)-based speech recognition system are concatenated to form a supervector. The speaker adaptation problem is viewed as recovering the speaker-dependent supervector from limited speech signal observations. A redundant speaker dictionary is constructed by a combination of all the training speaker supervectors and the supervectors derived from the EV method. Given the adaptation data, the best subspace for a particular speaker is constructed in a maximum a posterior manner by selecting a proper set of items from this dictionary. Two algorithms, i.e. matching pursuit and l1 regularized optimization, are adapted to solve this problem. With an efficient redundant basis vector removal mechanism and an iterative updating of the speaker coordinate, the matching pursuit based speaker adaptation method is fast and efficient. The matching pursuit algorithm is greedy and sub-optimal, while direct optimization of the likelihood of the adaptation data with an explicit l1 regularization term can obtain better approximation of the unknown speaker model. The projected gradient optimization algorithm is adopted and a few iterations of the matching pursuit algorithm can provide a good initial value. Experimental results show that the matching pursuit algorithm outperforms the conventional methods under all testing conditions. Better performance is obtained when direct l1 regularized optimization is applied. Both methods can select a proper mixed set of the eigenvoice and reference speaker supervectors automatically for estimation of the unknown speaker models.
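The sketch below shows only a generic matching-pursuit loop over a small random dictionary standing in for the speaker supervectors; the actual system selects atoms using GMM-HMM likelihood statistics rather than plain correlations, which is not reproduced here, and the refit step makes this an OMP-style variant.

```python
# A generic matching-pursuit sketch over a hypothetical dictionary whose
# columns stand in for speaker supervectors. Atoms are picked greedily by
# correlation with the residual, then coefficients are refit by least
# squares (an OMP-style step); the paper's likelihood-based selection on
# GMM-HMM supervectors is not shown.
import numpy as np

def matching_pursuit(D, y, n_atoms=3):
    """D: (dim, n_total) dictionary with unit-norm columns; y: target vector."""
    residual = y.copy()
    selected, coeffs = [], None
    for _ in range(n_atoms):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))      # best-matching atom
        if k not in selected:
            selected.append(k)
        coeffs, *_ = np.linalg.lstsq(D[:, selected], y, rcond=None)
        residual = y - D[:, selected] @ coeffs
    return selected, coeffs

rng = np.random.default_rng(0)
D = rng.standard_normal((50, 10))
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms
y = 0.8 * D[:, 2] + 0.5 * D[:, 7] + 0.01 * rng.standard_normal(50)
print(matching_pursuit(D, y, n_atoms=2))
```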
Filters of residuated lattices and triangle algebras An important concept in the theory of residuated lattices and other algebraic structures used for formal fuzzy logic, is that of a filter. Filters can be used, amongst others, to define congruence relations. Specific kinds of filters include Boolean filters and prime filters. In this paper, we define several different filters of residuated lattices and triangle algebras and examine their mutual dependencies and connections. Triangle algebras characterize interval-valued residuated lattices.
3D visual experience oriented cross-layer optimized scalable texture plus depth based 3D video streaming over wireless networks.
• A 3D experience oriented 3D video cross-layer optimization method is proposed.
• Networking-related 3D visual experience model for 3D video streaming is presented.
• 3D video characteristics are fully considered in the cross-layer optimization.
• MAC layer channel allocation and physical layer MCS are systematically optimized.
• Results show that our method obtains superior 3D visual experience to others.
score_0 … score_13: 1.20581, 0.20581, 0.041163, 0.025768, 0.01144, 0.002758, 0.000714, 0.000255, 0.000043, 0.000002, 0, 0, 0, 0