_id: string (length 40)
text: string (length 0–10k)
40baa5d4632d807cc5841874be73415775b500fd
Traditional two-level high-frequency pulse width modulation (PWM) inverters for motor drives have several problems associated with their high frequency switching which produces common-mode voltage and high voltage change (dV/dt) rates to the motor windings. Multilevel inverters solve these problems because their devices can switch at a much lower frequency. Two different multilevel topologies are identified for use as a converter for electric drives, a cascade inverter with separate dc sources and a back-to-back diode clamped converter. The cascade inverter is a natural fit for large automotive all-electric drives because of the high VA ratings possible and because it uses several levels of dc voltage sources which would be available from batteries or fuel cells. The back-to-back diode clamped converter is ideal where a source of ac voltage is available such as a hybrid electric vehicle. Simulation and experimental results show the superiority of these two converters over PWM based drives.
895fa1357bcfa9b845945c6505a6e48070fd5d89
In this work we propose a secure electronic voting protocol that is suitable for large scale voting over the Internet. The protocol allows a voter to cast his or her ballot anonymously, by exchanging untraceable yet authentic messages. The protocol ensures that (i) only eligible voters are able to cast votes, (ii) a voter is able to cast only one vote, (iii) a voter is able to verify that his or her vote is counted in the final tally, (iv) nobody, other than the voter, is able to link a cast vote with a voter, and (v) if a voter decides not to cast a vote, nobody is able to cast a fraudulent vote in place of the voter. The protocol does not require the cooperation of all registered voters. Neither does it require the use of complex cryptographic techniques like threshold cryptosystems or anonymous channels for casting votes. This is in contrast to other voting protocols that have been proposed in the literature. The protocol uses three agents, other than the voters, for successful operation. However, we do not require any of these agents to be trusted. That is, the agents may be physically co-located or may collude with one another to try to commit a fraud. If a fraud is committed, it can be easily detected and proven, so that the vote can be declared null and void. Although we propose the protocol with electronic voting in mind, the protocol can be used in other applications that involve exchanging an untraceable yet authentic message. Examples of such applications are answering confidential questionnaires anonymously and conducting anonymous financial transactions.
cf9145aa55da660a8d32bf628235c615318463bf
In the last decade, it has become apparent that embedded systems are integral parts of our everyday lives. The wireless nature of many embedded applications as well as their omnipresence has made the need for security and privacy preserving mechanisms particularly important. Thus, as FPGAs become integral parts of embedded systems, it is imperative to consider their security as a whole. This contribution provides a state-of-the-art description of security issues on FPGAs, both from the system and implementation perspectives. We discuss the advantages of reconfigurable hardware for cryptographic applications, show potential security problems of FPGAs, and provide a list of open research problems. Moreover, we summarize both public and symmetric-key algorithm implementations on FPGAs.
748eb923d2c384d2b3af82af58d2e6692ef57aa1
Text mining is a new and exciting area of computer science that tries to solve the crisis of information overload by combining techniques from data mining, machine learning, natural language processing, information retrieval, and knowledge management. The Text Mining Handbook presents a comprehensive discussion of the latest techniques in text mining and link detection. In addition to providing an in-depth examination of core text mining and link detection algorithms and operations, the book examines advanced pre-processing techniques, knowledge representation considerations, and visualization approaches, ending with real-world applications.
d044d399049bb9bc6df8cc2a5d72610a95611eed
OBJECTIVE To compare the efficacy of robotic-assisted gait training with the Lokomat to conventional gait training in individuals with subacute stroke. METHODS A total of 63 participants, <6 months poststroke, with an initial walking speed between 0.1 and 0.6 m/s completed the multicenter, randomized clinical trial. All participants received twenty-four 1-hour sessions of either Lokomat or conventional gait training. Outcome measures were evaluated prior to training, after 12 and 24 sessions, and at a 3-month follow-up exam. Self-selected overground walking speed and distance walked in 6 minutes were the primary outcome measures, whereas secondary outcome measures included balance, mobility and function, cadence and symmetry, level of disability, and quality of life measures. RESULTS Participants who received conventional gait training experienced significantly greater gains in walking speed (P=.002) and distance (P=.03) than those trained on the Lokomat. These differences were maintained at the 3-month follow-up evaluation. Secondary measures were not different between the 2 groups, although a 2-fold greater improvement in cadence was observed in the conventional versus Lokomat group. CONCLUSIONS For subacute stroke participants with moderate to severe gait impairments, the diversity of conventional gait training interventions appears to be more effective than robotic-assisted gait training for facilitating returns in walking ability.
098cc8b16697307a241658d69c213954ede76d59
Using data from 43 users across two platforms, we present a detailed look at smartphone traffic. We find that browsing contributes over half of the traffic, while each of email, media, and maps contribute roughly 10%. We also find that the overhead of lower layer protocols is high because of small transfer sizes. For half of the transfers that use transport-level security, header bytes correspond to 40% of the total. We show that while packet loss is the main factor that limits the throughput of smartphone traffic, larger send buffers at Internet servers can improve the throughput of a quarter of the transfers. Finally, by studying the interaction between smartphone traffic and the radio power management policy, we find that the power consumption of the radio can be reduced by 35% with minimal impact on the performance of packet exchanges.
1e126cee4c1bddbfdd4e36bf91b8b1c2fe8d44c2
This paper describes PowerBooter, an automated power model construction technique that uses built-in battery voltage sensors and knowledge of battery discharge behavior to monitor power consumption while explicitly controlling the power management and activity states of individual components. It requires no external measurement equipment. We also describe PowerTutor, a tool based on component power management and activity-state introspection that uses the model generated by PowerBooter for online power estimation. PowerBooter is intended to make it quick and easy for application developers and end users to generate power models for new smartphone variants, which each have different power consumption properties and therefore require different power models. PowerTutor is intended to ease the design and selection of power efficient software for embedded systems. Combined, PowerBooter and PowerTutor have the goal of opening power modeling and analysis for more smartphone variants and their users.
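The abstract describes building a per-component power model from logged component states and measured battery drain. The sketch below shows only the general idea of fitting such a linear model by least squares on synthetic data; the component set, coefficients, and units are invented for illustration and are not PowerBooter's actual model.

```python
import numpy as np

# Hypothetical sketch: fit a linear power model P = sum_i beta_i * state_i + b
# from logged component states and measured power, in the spirit of automated
# power-model construction. All numbers below are made-up illustration values.
rng = np.random.default_rng(0)
n = 200
# Columns: CPU utilization [0..1], screen brightness [0..1], Wi-Fi active {0,1}
states = np.column_stack([
    rng.uniform(0, 1, n),
    rng.uniform(0, 1, n),
    rng.integers(0, 2, n),
])
true_coeffs = np.array([900.0, 1200.0, 400.0])   # mW per unit (assumed)
baseline = 150.0                                  # idle power in mW (assumed)
measured_power = states @ true_coeffs + baseline + rng.normal(0, 20, n)

# Ordinary least squares with an intercept column
X = np.column_stack([states, np.ones(n)])
coeffs, *_ = np.linalg.lstsq(X, measured_power, rcond=None)
print("estimated per-component coefficients (mW):", coeffs[:-1])
print("estimated baseline (mW):", coeffs[-1])
```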
3f62fe7de3bf15af1e5871dd8f623db29d8f0c35
Using detailed traces from 255 users, we conduct a comprehensive study of smartphone use. We characterize intentional user activities -- interactions with the device and the applications used -- and the impact of those activities on network and energy usage. We find immense diversity among users. Along all aspects that we study, users differ by one or more orders of magnitude. For instance, the average number of interactions per day varies from 10 to 200, and the average amount of data received per day varies from 1 to 1000 MB. This level of diversity suggests that mechanisms to improve user experience or energy consumption will be more effective if they learn and adapt to user behavior. We find that qualitative similarities exist among users that facilitate the task of learning user behavior. For instance, the relative application popularity can be modeled using an exponential distribution, with different distribution parameters for different users. We demonstrate the value of adapting to user behavior in the context of a mechanism to predict future energy drain. The 90th percentile error with adaptation is less than half compared to predictions based on average behavior across users.
45654695f5cad20d2be36d45d280af5180004baf
In this article we discuss the design of a new fronthaul interface for future 5G networks. The major shortcomings of current fronthaul solutions are first analyzed, and then a new fronthaul interface called next-generation fronthaul interface (NGFI) is proposed. The design principles for NGFI are presented, including decoupling the fronthaul bandwidth from the number of antennas, decoupling cell and user equipment processing, and focusing on high-performance-gain collaborative technologies. NGFI aims to better support key 5G technologies, in particular cloud RAN, network functions virtualization, and large-scale antenna systems. NGFI claims the advantages of reduced bandwidth as well as improved transmission efficiency by exploiting the tidal wave effect on mobile network traffic. The transmission of NGFI is based on Ethernet to enjoy the benefits of flexibility and reliability. The major impact, challenges, and potential solutions of Ethernet-based fronthaul networks are also analyzed. Jitter, latency, and time and frequency synchronization are the major issues to overcome.
a1bbd52c57ad6a36057f5aa69544887261eb1a83
We describe a syntax-based algorithm that automatically builds Finite State Automata (word lattices) from semantically equivalent translation sets. These FSAs are good representations of paraphrases. They can be used to extract lexical and syntactic paraphrase pairs and to generate new, unseen sentences that express the same meaning as the sentences in the input sets. Our FSAs can also predict the correctness of alternative semantic renderings, which may be used to evaluate the quality of translations.
78e2cf228287d7e995c6718338e3ec58dc7cca50
7674e4e66c60a4a31d0b68a07d4ea521cca8a84b
The FuzzyLog is a partially ordered shared log abstraction. Distributed applications can concurrently append to the partial order and play it back. FuzzyLog applications obtain the benefits of an underlying shared log – extracting strong consistency, durability, and failure atomicity in simple ways – without suffering from its drawbacks. By exposing a partial order, the FuzzyLog enables three key capabilities for applications: linear scaling for throughput and capacity (without sacrificing atomicity), weaker consistency guarantees, and tolerance to network partitions. We present Dapple, a distributed implementation of the FuzzyLog abstraction that stores the partial order compactly and supports efficient appends/playback via a new ordering protocol. We implement several data structures and applications over the FuzzyLog, including several map variants as well as a ZooKeeper implementation. Our evaluation shows that these applications are compact, fast, and flexible: they retain the simplicity (100s of lines of code) and strong semantics (durability and failure atomicity) of a shared log design while exploiting the partial order of the FuzzyLog for linear scalability, flexible consistency guarantees (e.g., causal+ consistency), and network partition tolerance. On a 6-node Dapple deployment, our FuzzyLog-based ZooKeeper supports 3M/sec single-key writes, and 150K/sec atomic cross-shard renames.
38bcf0bd4f8c35ff54d292d37cbdca1da677f3f5
Wearable biosensors (WBS) will permit continuous cardiovascular (CV) monitoring in a number of novel settings. Benefits may be realized in the diagnosis and treatment of a number of major diseases. WBS, in conjunction with appropriate alarm algorithms, can increase surveillance capabilities for CV catastrophe for high-risk subjects. WBS could also play a role in the treatment of chronic diseases, by providing information that enables precise titration of therapy or detecting lapses in patient compliance. WBS could play an important role in the wireless surveillance of people during hazardous operations (military, fire-fighting, etc.), or such sensors could be dispensed during a mass civilian casualty occurrence. Given that CV physiologic parameters make up the "vital signs" that are the most important information in emergency medical situations, WBS might enable a wireless monitoring system for large numbers of at-risk subjects. This same approach may also have utility in monitoring the waiting room of today's overcrowded emergency departments. For hospital inpatients who require CV monitoring, current biosensor technology typically tethers patients in a tangle of cables, whereas wearable CV sensors could increase inpatient comfort and may even reduce the risk of tripping and falling, a perennial problem for hospital patients who are ill, medicated, and in an unfamiliar setting. On a daily basis, wearable CV sensors could detect a missed dose of medication by sensing untreated elevated blood pressure and could trigger an automated reminder for the patient to take the medication. Moreover, it is important for doctors to titrate the treatment of high blood pressure, since both insufficient therapy as well as excessive therapy (leading to abnormally low blood pressures) increase mortality. However, healthcare providers have only intermittent values of blood pressure on which to base therapy decisions; it is possible that continuous blood pressure monitoring would permit enhanced titration of therapy and reductions in mortality. Similarly, WBS would be able to log the physiologic signature of a patient's exercise efforts (manifested as changes in heart rate and blood pressure), permitting the patient and healthcare provider to assess compliance with a regimen proven to improve health outcomes. For patients with chronic cardiovascular disease, such as heart failure, home monitoring employing WBS may detect exacerbations in very early (and often easily treated) stages, long before the patient progresses to more dangerous levels that necessitate an emergency room visit and costly hospital admission. In this article we will address both technical and clinical …
86c9a59c7c4fcf0d10dbfdb6afd20dd3c5c1426c
Fingerprint classification provides an important indexing mechanism in a fingerprint database. An accurate and consistent classification can greatly reduce fingerprint matching time for a large database. We present a fingerprint classification algorithm which is able to achieve an accuracy better than previously reported in the literature. We classify fingerprints into five categories: whorl, right loop, left loop, arch, and tented arch. The algorithm uses a novel representation (FingerCode) and is based on a two-stage classifier to make a classification. It has been tested on 4,000 images in the NIST-4 database. For the five-class problem, a classification accuracy of 90 percent is achieved (with a 1.8 percent rejection during the feature extraction phase). For the four-class problem (arch and tented arch combined into one class), we are able to achieve a classification accuracy of 94.8 percent (with 1.8 percent rejection). By incorporating a reject option at the classifier, the classification accuracy can be increased to 96 percent for the five-class classification task, and to 97.8 percent for the four-class classification task after a total of 32.5 percent of the images are rejected.
a2ed347d010aeae4ddd116676bdea2e77d942f6e
A fingerprint classification algorithm is presented in this paper. Fingerprints are classified into five categories: arch, tented arch, left loop, right loop and whorl. The algorithm extracts singular points (cores and deltas) in a fingerprint image and performs classification based on the number and locations of the detected singular points. The classifier is invariant to rotation, translation and small amounts of scale changes. The classifier is rule-based, where the rules are generated independent of a given data set. The classifier was tested on 4000 images in the NIST-4 database and on 5400 images in the NIST-9 database. For the NIST-4 database, classification accuracies of 85.4% for the five-class problem and 91.1% for the four-class problem (with arch and tented arch placed in the same category) were achieved. Using a reject option, the four-class classification error can be reduced to less than 6% with 10% fingerprint images rejected. Similar classification performance was obtained on the NIST-9 database.
b07ce649d6f6eb636872527104b0209d3edc8188
3337976b072405933a02f7d912d2b6432de38feb
This paper consists of three parts: a preliminary typology of summaries in general; a description of the current and planned modules and performance of the SUMMARIST automated multilingual text summarization system being built at ISI, and a discussion of three methods to evaluate summaries. 1. THE NATURE OF SUMMARIES. Early experimentation in the late 1950's and early 60's suggested that text summarization by computer was feasible though not straightforward (Luhn, 59; Edmundson, 68). The methods developed then were fairly unsophisticated, relying primarily on surface level phenomena such as sentence position and word frequency counts, and focused on producing extracts (passages selected from the text, reproduced verbatim) rather than abstracts (interpreted portions of the text, newly generated). After a hiatus of some decades, the growing presence of large amounts of online text--in corpora and especially on the Web--renewed the interest in automated text summarization. During these intervening decades, progress in Natural Language Processing (NLP), coupled with great increases of computer memory and speed, made possible more sophisticated techniques, with very encouraging results. In the late 1990's, some relatively small research investments in the US (not more than 10 projects, including commercial efforts at Microsoft, Lexis-Nexis, Oracle, SRA, and TextWise, and university efforts at CMU, NMSU, UPenn, and USC/ISI) over three or four years have produced several systems that exhibit potential marketability, as well as several innovations that promise continued improvement. In addition, several recent workshops, a book collection, and several tutorials testify that automated text summarization has become a hot area. However, when one takes a moment to study the various systems and to consider what has really been achieved, one cannot help being struck by their underlying similarity, by the narrowness of their focus, and by the large numbers of unknown factors that surround the problem. For example, what precisely is a summary? No-one seems to know exactly. In our work, we use summary as the generic term and define it as follows: A summary is a text that is produced out of one or more (possibly multimedia) texts, that contains (some of) the same information as the original text(s), and that is no longer than half of the original text(s). To clarify the picture a little, we follow and extend (Spärck Jones, 97) by identifying the following aspects of variation. Any summary can be characterized by (at least) three major classes of characteristics. Input: characteristics of the source text(s). Source size: single-document vs. multi-document: A single-document summary derives from a single input text (though the summarization process itself may employ information compiled earlier from other texts). A multi-document summary is one text that covers the content of more than one input text, and is usually used only when the input texts are thematically related. Specificity: domain-specific vs. general: When the input texts all pertain to a single domain, it may be appropriate to apply domain-specific summarization techniques, focus on specific content, and output specific formats, compared to the general case. A domain-specific summary derives from input text(s) whose theme(s) pertain to a single restricted domain. As such, it can assume less term ambiguity, idiosyncratic word and grammar usage, specialized formatting, etc., and can reflect them in the summary.
25126128faa023d1a65a47abeb8c33219cc8ca5c
We study Nyström type subsampling approaches to large scale kernel methods, and prove learning bounds in the statistical learning setting, where random sampling and high probability estimates are considered. In particular, we prove that these approaches can achieve optimal learning bounds, provided the subsampling level is suitably chosen. These results suggest a simple incremental variant of Nyström Kernel Regularized Least Squares, where the subsampling level implements a form of computational regularization, in the sense that it controls at the same time regularization and computations. Extensive experimental analysis shows that the considered approach achieves state-of-the-art performance on benchmark large-scale datasets.
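To make the Nyström idea concrete, here is a minimal sketch of Nyström-subsampled kernel regularized least squares with a Gaussian kernel: m landmark points are drawn uniformly, and m plays the role of the computational-regularization knob the abstract describes. The kernel width, regularization value, and synthetic data are assumptions for illustration, not the authors' experimental setup.

```python
import numpy as np

def rbf(A, B, gamma):
    """Gaussian kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_krls(X, y, m, lam, gamma, seed=0):
    """Nyström kernel regularized least squares with uniform subsampling."""
    n = X.shape[0]
    idx = np.random.default_rng(seed).choice(n, size=m, replace=False)
    Xm = X[idx]                      # m landmark points
    Knm = rbf(X, Xm, gamma)          # n x m
    Kmm = rbf(Xm, Xm, gamma)         # m x m
    # Solve (Knm^T Knm + lam * n * Kmm) alpha = Knm^T y
    A = Knm.T @ Knm + lam * n * Kmm + 1e-10 * np.eye(m)
    alpha = np.linalg.solve(A, Knm.T @ y)
    return Xm, alpha

# Tiny synthetic regression check
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (500, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 500)
Xm, alpha = nystrom_krls(X, y, m=50, lam=1e-3, gamma=1.0)
pred = rbf(X, Xm, 1.0) @ alpha
print("mean abs error:", np.abs(pred - y).mean())
```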
414573bcd1849b4d3ec8a06dd4080b62f1db5607
Distributed denial-of-service (DDoS) attacks present an Internet-wide threat. We propose D-WARD, a DDoS defense system deployed at source-end networks that autonomously detects and stops attacks originating from these networks. Attacks are detected by the constant monitoring of two-way traffic flows between the network and the rest of the Internet and periodic comparison with normal flow models. Mismatching flows are rate-limited in proportion to their aggressiveness. D-WARD offers good service to legitimate traffic even during an attack, while effectively reducing DDoS traffic to a negligible level. A prototype of the system has been built in a Linux router. We show its effectiveness in various attack scenarios, discuss motivations for deployment, and describe associated costs.
705a24f4e1766a44bbba7cf335f74229ed443c7b
Face recognition algorithms commonly assume that face images are well aligned and have a similar pose -- yet in many practical applications it is impossible to meet these conditions. Therefore extending face recognition to unconstrained face images has become an active area of research. To this end, histograms of Local Binary Patterns (LBP) have proven to be highly discriminative descriptors for face recognition. Nonetheless, most LBP-based algorithms use a rigid descriptor matching strategy that is not robust against pose variation and misalignment. We propose two algorithms for face recognition that are designed to deal with pose variations and misalignment. We also incorporate an illumination normalization step that increases robustness against lighting variations. The proposed algorithms use descriptors based on histograms of LBP and perform descriptor matching with spatial pyramid matching (SPM) and Naive Bayes Nearest Neighbor (NBNN), respectively. Our contribution is the inclusion of flexible spatial matching schemes that use an image-to-class relation to provide an improved robustness with respect to intra-class variations. We compare the accuracy of the proposed algorithms against Ahonen's original LBP-based face recognition system and two baseline holistic classifiers on four standard datasets. Our results indicate that the algorithm based on NBNN outperforms the other solutions, and does so more markedly in presence of pose variations.
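The descriptors in this abstract are histograms of Local Binary Patterns computed over spatial regions of the face. Below is a minimal sketch of that descriptor family using scikit-image's uniform LBP and a fixed grid; the grid size and LBP parameters are illustrative assumptions, and the spatial pyramid / NBNN matching stages are not shown.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_grid_histograms(gray, grid=(4, 4), P=8, R=1):
    """Concatenated LBP histograms over a grid of face regions (sketch)."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2                      # uniform patterns + one catch-all bin
    h, w = gray.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                       j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins),
                                   density=True)
            feats.append(hist)
    return np.concatenate(feats)        # one descriptor per face image

# Example on a random array standing in for an aligned grayscale face crop
descriptor = lbp_grid_histograms(np.random.rand(128, 128))
print(descriptor.shape)                 # 4*4 regions x 10 bins = (160,)
```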
fb8704210358d0cbf5113c97e1f9f9f03f67e6fc
Content-based visual information retrieval (CBVIR) or content-based image retrieval (CBIR) has been one of the most vivid research areas in the field of computer vision over the last 10 years. The availability of large and steadily growing amounts of visual and multimedia data, and the development of the Internet underline the need to create thematic access methods that offer more than simple text-based queries or requests based on matching exact database fields. Many programs and tools have been developed to formulate and execute queries based on the visual or audio content and to help browse large multimedia repositories. Still, no general breakthrough has been achieved with respect to large varied databases with documents of differing sorts and with varying characteristics. Many questions with respect to speed, semantic descriptors, or objective image interpretation remain unanswered. In the medical field, images, and especially digital images, are produced in ever-increasing quantities and used for diagnostics and therapy. The Radiology Department of the University Hospital of Geneva alone produced more than 12,000 images a day in 2002. Cardiology is currently the second largest producer of digital images, especially with videos of cardiac catheterization (approximately 1800 exams per year containing almost 2000 images each). The total amount of cardiologic image data produced in the Geneva University Hospital was around 1 TB in 2002. Endoscopic videos can equally produce enormous amounts of data. With digital imaging and communications in medicine (DICOM), a standard for image communication has been set and patient information can be stored with the actual image(s), although still a few problems prevail with respect to the standardization. In several articles, content-based access to medical images for supporting clinical decision-making has been proposed that would ease the management of clinical data and scenarios for the integration of content-based access methods into picture archiving and communication systems (PACS) have been created. This article gives an overview of available literature in the field of content-based access to medical image data and on the technologies used in the field. Section 1 gives an introduction into generic content-based image retrieval and the technologies used. Section 2 explains the propositions for the use of image retrieval in medical practice and the various approaches. Example systems and application areas are described. Section 3 describes the techniques used in the implemented systems, their datasets and evaluations. Section 4 identifies possible clinical benefits of image retrieval systems in clinical practice as well as in research and education. New research directions are being defined that can prove to be useful. This article also offers explanations for some of the outlined problems in the field, as many propositions for systems come from the medical domain while research prototypes are developed in computer science departments using medical datasets. Still, there are very few systems that seem to be used in clinical practice. It needs to be stated as well that the goal is not, in general, to replace text-based retrieval methods as they exist at the moment but to complement them with visual search tools.
38919649ae3fd207b96b62e95b3c8c8e69635c7f
This study is a comparison of three routing protocols proposed for wireless mobile ad-hoc networks. The protocols are: Destination Sequenced Distance Vector (DSDV), Ad-hoc On-demand Distance Vector (AODV) and Dynamic Source Routing (DSR). Extensive simulations are made on a scenario where nodes move randomly. Results are presented as a function of a novel mobility metric designed to reflect the relative speeds of the nodes in a scenario. Furthermore, three realistic scenarios are introduced to test the protocols in more specialized contexts. In most simulations the reactive protocols (AODV and DSR) performed significantly better than DSDV. At moderate traffic load DSR performed better than AODV for all tested mobility values, while AODV performed better than DSR at higher traffic loads. The latter is caused by the source routes in DSR data packets, which increase the load on the network. In a mobile ad-hoc network, all nodes act as both routers and hosts; thus a node may forward packets between other nodes as well as run user applications. Mobile ad-hoc networks have been the focus of many recent research and development efforts. Ad-hoc packet radio networks have so far mainly concerned military applications, where a decentralized network configuration is an operative advantage or even a necessity. Networks using ad-hoc configuration concepts can be used in many military applications, ranging from interconnected wireless access points to networks of wireless devices carried by individuals, e.g., digital maps, sensors attached to the body, voice communication, etc. Combinations of wide range and short range ad-hoc networks seek to provide robust, global coverage, even during adverse operating conditions.
0f7329cf0d388d4c5d5b94ee52ad2385bd2383ce
Supervoxel segmentation has strong potential to be incorporated into early video analysis as superpixel segmentation has in image analysis. However, there are many plausible supervoxel methods and little understanding as to when and where each is most appropriate. Indeed, we are not aware of a single comparative study on supervoxel segmentation. To that end, we study seven supervoxel algorithms, including both off-line and streaming methods, in the context of what we consider to be a good supervoxel: namely, spatiotemporal uniformity, object/region boundary detection, region compression and parsimony. For the evaluation we propose a comprehensive suite of seven quality metrics to measure these desirable supervoxel characteristics. In addition, we evaluate the methods in a supervoxel classification task as a proxy for subsequent high-level uses of the supervoxels in video analysis. We use six existing benchmark video datasets with a variety of content-types and dense human annotations. Our findings have led us to conclusive evidence that the hierarchical graph-based (GBH), segmentation by weighted aggregation (SWA) and temporal superpixels (TSP) methods are the top-performers among the seven methods. They all perform well in terms of segmentation accuracy, but vary in regard to the other desiderata: GBH captures object boundaries best; SWA has the best potential for region compression; and TSP achieves the best undersegmentation error.
50dea03d4feb1797f1d5c260736e1cf7ad6d45ca
INTRODUCTION We report a case of rapidly growing fibroadenoma. PATIENT A 13-year-old girl consulted the outpatient clinic regarding a left breast mass. The mass was diagnosed as fibroadenoma by clinical examinations, and the patient was carefully monitored. The mass enlarged rapidly with each menses and showed a 50% increase in volume four months later. Lumpectomy was performed. The tumor was histologically diagnosed as fibroadenoma organized type and many glandular epithelial cells had positive immunohistochemical staining for anti-estrogen receptor antibody in the nuclei. CONCLUSION The estrogen sensitivity of the tumor could account for the rapid growth.
0674c1e2fd78925a1baa6a28216ee05ed7b48ba0
Proc. of the International Conference on Computer Vision, Corfu (Sept. 1999) An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest-neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low-residual least-squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially-occluded images with a computation time of under 2 seconds.
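The abstract describes scale-invariant keypoint detection followed by nearest-neighbor matching of local descriptors. The sketch below uses OpenCV's SIFT implementation and Lowe's ratio test as a stand-in for the staged filtering / image-key pipeline; the image file names are hypothetical placeholders.

```python
import cv2

# Sketch: detect scale-invariant keypoints in an object image and a cluttered
# scene, then keep distinctive nearest-neighbor descriptor matches.
img1 = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
# Ratio test: a match is kept only if it is clearly closer than the runner-up
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(len(good), "candidate matches")
```

A full recognition system would follow this with geometric verification (e.g., a least-squares pose fit over the matched keypoints), as the abstract notes.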
bbb9c3119edd9daa414fd8f2df5072587bfa3462
This open source computing framework unifies streaming, batch, and interactive big data workloads to unlock new applications.
18ca2837d280a6b2250024b6b0e59345601064a7
Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text.
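As a quick illustration of the technique, here is a minimal LLE run on a synthetic nonlinear manifold using scikit-learn; the swiss-roll data and parameter values are illustrative choices, not the paper's experiments.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# Embed a 3-D swiss roll into 2-D while preserving local neighborhoods
X, color = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
Y = lle.fit_transform(X)
print(Y.shape)   # (1500, 2): a single global low-dimensional coordinate system
```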
a3bfe87159938a96d3f2037ff0fe10adca0d21b0
As more software modules and external interfaces are getting added on vehicles, new attacks and vulnerabilities are emerging. Researchers have demonstrated how to compromise in-vehicle Electronic Control Units (ECUs) and control the vehicle maneuver. To counter these vulnerabilities, various types of defense mechanisms have been proposed, but they have not been able to meet the need of strong protection for safety-critical ECUs against in-vehicle network attacks. To mitigate this deficiency, we propose an anomaly-based intrusion detection system (IDS), called Clock-based IDS (CIDS). It measures and then exploits the intervals of periodic in-vehicle messages for fingerprinting ECUs. The thus-derived fingerprints are then used for constructing a baseline of ECUs’ clock behaviors with the Recursive Least Squares (RLS) algorithm. Based on this baseline, CIDS uses Cumulative Sum (CUSUM) to detect any abnormal shifts in the identification errors — a clear sign of intrusion. This allows quick identification of in-vehicle network intrusions with a low false-positive rate of 0.055%. Unlike state-of-the-art IDSs, if an attack is detected, CIDS’s fingerprinting of ECUs also facilitates a root-cause analysis; identifying which ECU mounted the attack. Our experiments on a CAN bus prototype and on real vehicles have shown CIDS to be able to detect a wide range of in-vehicle network attacks.
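The detection step in this abstract is a CUSUM test over the identification errors left by the RLS clock-behavior model. The sketch below shows only that generic CUSUM step on synthetic residuals; the drift and threshold values are illustrative assumptions, and the RLS fingerprinting itself is omitted.

```python
import numpy as np

def cusum(residuals, drift=0.005, threshold=5.0):
    """Two-sided CUSUM over a stream of identification errors (sketch)."""
    s_pos, s_neg = 0.0, 0.0
    alarms = []
    for t, e in enumerate(residuals):
        s_pos = max(0.0, s_pos + e - drift)   # accumulate positive shifts
        s_neg = max(0.0, s_neg - e - drift)   # accumulate negative shifts
        if s_pos > threshold or s_neg > threshold:
            alarms.append(t)
            s_pos, s_neg = 0.0, 0.0           # reset after raising an alarm
    return alarms

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 0.01, 500)           # in-spec clock-skew residuals
attacked = rng.normal(0.05, 0.01, 100)        # shifted residuals after intrusion
print(cusum(np.concatenate([normal, attacked])))   # alarms near index 500
```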
c567bdc35a40e568e0661446ac4f9b397787e40d
A 2.4 GHz interferer-resilient wake-up receiver for ultra-low power wireless sensor nodes uses an uncertain-IF dual-conversion topology, combining a distributed multi-stage N-path filtering technique with an unlocked low-Q resonator-referred local oscillator. This structure provides narrow-band selectivity and strong immunity against interferers, while avoiding expensive external resonant components such as BAW resonators or crystals. The 65 nm CMOS receiver prototype provides a sensitivity of -97 dBm and a carrier-to-interferer ratio better than -27 dB at 5 MHz offset, for a data rate of 10 kb/s at a 10⁻³ bit error rate, while consuming 99 μW from a 0.5 V voltage supply under continuous operation.
703244978b61a709e0ba52f5450083f31e3345ec
In this volume, the authors introduce seven general principles of learning, distilled from the research literature as well as from twenty-seven years of experience working one-on-one with college faculty. They have drawn on research from a breadth of perspectives (cognitive, developmental, and social psychology; educational research; anthropology; demographics; and organizational behavior) to identify a set of key principles underlying learning, from how effective organization enhances retrieval and use of information to what impacts motivation. These principles provide instructors with an understanding of student learning that can help them see why certain teaching approaches are or are not supporting student learning, generate or refine teaching approaches and strategies that more effectively foster student learning in specific contexts, and transfer and apply these principles to new courses.
52a345a29267107f92aec9260b6f8e8222305039
This paper serves as a companion or extension to the “Inside PageRank” paper by Bianchini et al. [19]. It is a comprehensive survey of all issues associated with PageRank, covering the basic PageRank model, available and recommended solution methods, storage issues, existence, uniqueness, and convergence properties, possible alterations to the basic model, suggested alternatives to the traditional solution methods, sensitivity and conditioning, and finally the updating problem. We introduce a few new results, provide an extensive reference list, and speculate about exciting areas of future research.
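For readers new to the basic model the survey covers, here is a minimal power-iteration PageRank on a toy graph; the damping factor and dangling-node patch are the usual textbook choices, and the adjacency matrix is invented for illustration.

```python
import numpy as np

def pagerank(adj, alpha=0.85, tol=1e-10, max_iter=200):
    """Basic PageRank via power iteration on a small adjacency matrix."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Row-stochastic transition matrix; dangling rows become uniform
    P = np.where(out_deg[:, None] > 0,
                 adj / np.maximum(out_deg, 1)[:, None],
                 1.0 / n)
    v = np.full(n, 1.0 / n)        # teleportation (personalization) vector
    x = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        x_new = alpha * x @ P + (1 - alpha) * v
        if np.abs(x_new - x).sum() < tol:
            break
        x = x_new
    return x

adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 0],
                [0, 0, 1, 0]], dtype=float)
print(pagerank(adj))
```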
0e5c8094d3da52340b58761d441eb809ff96743f
In this paper, we compare the performance of the newly introduced distributed active transformer (DAT) structure to that of conventional on-chip impedance-transformation methods. Their fundamental power-efficiency limitations in the design of high-power fully integrated amplifiers in standard silicon process technologies are analyzed. The DAT is demonstrated to be an efficient impedance-transformation and power-combining method, which combines several low-voltage push-pull amplifiers in series by magnetic coupling. To demonstrate the validity of the new concept, a 2.4-GHz 1.9-W 2-V fully integrated power-amplifier achieving a power-added efficiency of 41% with 50-Ω input and output matching has been fabricated using 0.35-μm CMOS transistors.
14fae9835ae65adfdc434b7b7e761487e7a9548f
It is known that a radial power combiner is very effective in combining a large number of power amplifiers, where high efficiency (greater than 90%) over a relatively wide band can be achieved. However, its current use is limited due to its design complexity. In this paper, we develop a step-by-step design procedure, including both the initial approximate design formulas and suitable models for final accurate design optimization purposes. Based on three-dimensional electromagnetic modeling, predicted results were in excellent agreement with those measured. Practical issues related to the radial-combiner efficiency, its graceful degradation, and the effects of higher order package resonances are discussed here in detail.
47fdb5ec9522019ef7e580d59c262b3dc9519b26
The successful demonstration of a 1:4 power divider using microstrip probes and a WR-430 rectangular waveguide is presented. The 15-dB return loss bandwidth of the nonoptimized structure is demonstrated to be 22% and its 0.5-dB insertion loss bandwidth 26%. While realized through conventional machining, such a structure is assembled in a fashion consistent with proven millimeter and submillimeter-wave micromachining techniques. Thus, the structure presents a potential power dividing and power combining architecture, which, through micromachining, may be used for applications well above 100 GHz.
68218edaf08484871258387e95161a3ce0e6fe67
An eight-device Ka-band solid-state power amplifier has been designed and fabricated using a traveling-wave power-dividing/combining technique. The low-profile slotted-waveguide structure employed in this design provides not only a high power-combining efficiency over a wide bandwidth, but also efficient heat sinking for the active devices. The measured maximum small-signal gain of the eight-device power amplifier is 19.4 dB at 34 GHz with a 3-dB bandwidth of 3.2 GHz (f_L = 31.8 GHz, f_H = 35 GHz). The measured maximum output power at 1-dB compression (P_out at 1 dB) from the power amplifier is 33 dBm (~2 W) at 32.2 GHz, with a power-combining efficiency of 80%. Furthermore, performance degradation of this power amplifier due to device failures has also been simulated and measured.
db884813d6d764aea836c44f46604128735bffe0
High power, broad bandwidth, high linearity, and low noise are among the most important features in amplifier design. The broad-band spatial power-combining technique addresses all these issues by combining the output power of a large quantity of microwave monolithic integrated circuit (MMIC) amplifiers in a broad-band coaxial waveguide environment, while maintaining good linearity and improving phase noise of the MMIC amplifiers. A coaxial waveguide was used as the host of the combining circuits for broader bandwidth and better uniformity by equally distributing the input power to each element. A new compact coaxial combiner with much smaller size is investigated. Broad-band slotline to microstrip-line transition is integrated for better compatibility with commercial MMIC amplifiers. Thermal simulations are performed and an improved thermal management scheme over previous designs is employed to improve the heat sinking in high-power application. A high-power amplifier using the compact combiner design is built and demonstrated to have a bandwidth from 6 to 17 GHz with 44-W maximum output power. Linearity measurement has shown a high third-order intercept point of 52 dBm. Analysis shows the amplifier has the ability to extend spurious-free dynamic range by 2–3 times. The amplifier also has shown a residual phase noise floor close to −140 dBc at 10-kHz offset from the carrier with 5–6-dB reductions compared to a single MMIC amplifier it integrates.
e73ee8174589e9326d3b36484f1b95685cb1ca42
A first-of-the-kind 28 GHz antenna solution for the upcoming 5th generation cellular communication is presented in detail. Extensive measurements and simulations ascertain the proposed 28 GHz antenna solution to be highly effective for cellular handsets operating in realistic propagating environments.
4e85503ef0e1559bc197bd9de0625b3792dcaa9b
Network-based attacks have become common and sophisticated. For this reason, intrusion detection systems are now shifting their focus from the hosts and their operating systems to the network itself. Network-based intrusion detection is challenging because network auditing produces large amounts of data, and different events related to a single intrusion may be visible in different places on the network. This paper presents NetSTAT, a new approach to network intrusion detection. By using a formal model of both the network and the attacks, NetSTAT is able to determine which network events have to be monitored and where they can be monitored.
818c13721db30a435044b37014fe7077e5a8a587
Massive data analysis on large clusters presents new opportunities and challenges for query optimization. Data partitioning is crucial to performance in this environment. However, data repartitioning is a very expensive operation so minimizing the number of such operations can yield very significant performance improvements. A query optimizer for this environment must therefore be able to reason about data partitioning including its interaction with sorting and grouping. SCOPE is a SQL-like scripting language used at Microsoft for massive data analysis. A transformation-based optimizer is responsible for converting scripts into efficient execution plans for the Cosmos distributed computing platform. In this paper, we describe how reasoning about data partitioning is incorporated into the SCOPE optimizer. We show how relational operators affect partitioning, sorting and grouping properties and describe how the optimizer reasons about and exploits such properties to avoid unnecessary operations. In most optimizers, consideration of parallel plans is an afterthought done in a postprocessing step. Reasoning about partitioning enables the SCOPE optimizer to fully integrate consideration of parallel, serial and mixed plans into the cost-based optimization. The benefits are illustrated by showing the variety of plans enabled by our approach.
8420f2f686890d9675538ec831dbb43568af1cb3
In order to determine the sentiment polarity of Hinglish text written in Roman script, we experimented with different combinations of feature selection methods and a host of classifiers using term frequency-inverse document frequency feature representation. We carried out 840 experiments in total in order to determine the best classifiers for sentiment expressed in the news and Facebook comments written in Hinglish. We concluded that the triumvirate of term frequency-inverse document frequency-based feature representation, gain-ratio-based feature selection, and a Radial Basis Function Neural Network is the best combination for classifying sentiment expressed in Hinglish text.
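The pipeline shape described here (TF-IDF features, a feature-selection step, then a classifier) can be sketched as below. Substitutions are deliberate and labeled: mutual information stands in for gain ratio and an RBF-kernel SVM stands in for the RBF neural network, since neither is available off the shelf in scikit-learn; the tiny Hinglish corpus is invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Invented toy corpus (1 = positive, 0 = negative)
texts = ["yeh movie bahut achhi thi", "kya bakwas film hai",
         "service bohot kharab tha", "khana bahut tasty hai"]
labels = [1, 0, 0, 1]

# TF-IDF -> feature selection -> classifier, mirroring the pipeline shape
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    SelectKBest(mutual_info_classif, k=10),   # stand-in for gain ratio
    SVC(kernel="rbf"),                        # stand-in for the RBF network
)
clf.fit(texts, labels)
print(clf.predict(["film bahut achhi hai"]))
```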
c97ebb60531a86bea516d3582758a45ba494de10
To promote tighter collaboration between the IEEE Intelligent Transportation Systems Society and the pervasive computing research community, the authors introduce the ITS Society and present several pervasive computing-related research topics that ITS Society researchers are working on. This department is part of a special issue on Intelligent Transportation.
e91196c1d0234da60314945c4812eda631004d8f
We propose an interactive multimodal framework for language learning. Instead of being passively exposed to large amounts of natural text, our learners (implemented as feed-forward neural networks) engage in cooperative referential games starting from a tabula rasa setup, and thus develop their own language from the need to communicate in order to succeed at the game. Preliminary experiments provide promising results, but also suggest that it is important to ensure that agents trained in this way do not develop an ad-hoc communication code only effective for the game they are playing.
500b7d63e64e13fa47934ec9ad20fcfe0d4c17a7
Recently, timing control of high-frequency signals has been in strong demand due to the high integration density of three-dimensional (3D) LTCC-based SiP applications. Therefore, to control the skew or timing delay, new 3D delay lines are proposed. Because of the frailty of the signal via, we adopt the concept of a coaxial line and propose an advanced signal-via structure with quasi-coaxial ground (QCOX-GND) vias. We show simulated results obtained using EM and circuit simulators.
1a07186bc10592f0330655519ad91652125cd907
We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance.
27e38351e48fe4b7da2775bf94341738bc4da07e
Single-word vector space models have been very successful at learning lexical information. However, they cannot capture the compositional meaning of longer phrases, preventing them from a deeper understanding of language. We introduce a recursive neural network (RNN) model that learns compositional vector representations for phrases and sentences of arbitrary syntactic type and length. Our model assigns a vector and a matrix to every node in a parse tree: the vector captures the inherent meaning of the constituent, while the matrix captures how it changes the meaning of neighboring words or phrases. This matrix-vector RNN can learn the meaning of operators in propositional logic and natural language. The model obtains state of the art performance on three different experiments: predicting fine-grained sentiment distributions of adverb-adjective pairs; classifying sentiment labels of movie reviews and classifying semantic relationships such as cause-effect or topic-message between nouns using the syntactic path between them.
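The core composition step of the matrix-vector model can be written in a few lines: each constituent carries a vector and a matrix, the matrices modify the neighboring vectors, and both a parent vector and a parent matrix are produced. The sketch below uses random toy weights and a tiny dimensionality purely for illustration; it is not the trained model.

```python
import numpy as np

d = 4                                   # toy word-vector dimensionality
rng = np.random.default_rng(0)

def compose(a, A, b, B, W, W_M):
    """One matrix-vector composition step for a parse-tree node (sketch)."""
    p = np.tanh(W @ np.concatenate([B @ a, A @ b]))   # parent vector, shape (d,)
    P = W_M @ np.vstack([A, B])                       # parent matrix, shape (d, d)
    return p, P

# Toy parameters and two "words" (e.g. an operator like "very" and "good")
W   = rng.normal(scale=0.1, size=(d, 2 * d))
W_M = rng.normal(scale=0.1, size=(d, 2 * d))
a, A = rng.normal(size=d), np.eye(d) + 0.1 * rng.normal(size=(d, d))
b, B = rng.normal(size=d), np.eye(d) + 0.1 * rng.normal(size=(d, d))

p, P = compose(a, A, b, B, W, W_M)
print(p.shape, P.shape)                 # (4,) and (4, 4)
```

Applying this step bottom-up over a parse tree yields a vector-matrix pair for every phrase, which is then fed to a classifier for the sentiment or relation tasks the abstract lists.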
303b0b6e6812c60944a4ac9914222ac28b0813a2
This paper presents a new approach to phrase-level sentiment analysis that first determines whether an expression is neutral or polar and then disambiguates the polarity of the polar expressions. With this approach, the system is able to automatically identify the contextual polarity for a large subset of sentiment expressions, achieving results that are significantly better than baseline.
4eb943bf999ce49e5ebb629d7d0ffee44becff94
Time underlies many interesting human behaviors. Thus, the question of how to represent time in connectionist models is very important. One approach is to represent time implicitly by its effects on processing rather than explicitly (as in a spatial representation). The current report develops a proposal along these lines first described by Jordan (1986) which involves the use of recurrent links in order to provide networks with a dynamic memory. In this approach, hidden unit patterns are fed back to themselves; the internal representations which develop thus reflect task demands in the context of prior internal states. A set of simulations is reported which range from relatively simple problems (temporal version of XOR) to discovering syntactic/semantic features for words. The networks are able to learn interesting internal representations which incorporate task demands with memory demands; indeed, in this approach the notion of memory is inextricably bound up with task processing. These representations reveal a rich structure, which allows them to be highly context-dependent while also expressing generalizations across classes of items. These representations suggest a method for representing lexical categories and the type/token distinction.
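The network described here feeds the hidden state back to itself as "context units" at the next time step. Below is a minimal forward-pass sketch of that recurrence on a temporal-XOR-style bit stream; the sizes, random weights, and the absence of training are all simplifications for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 1, 8, 1
W_xh = rng.normal(scale=0.5, size=(n_hid, n_in))
W_hh = rng.normal(scale=0.5, size=(n_hid, n_hid))   # context (recurrent) weights
W_hy = rng.normal(scale=0.5, size=(n_out, n_hid))

def forward(sequence):
    """Elman-style forward pass: hidden state is copied back as context."""
    h = np.zeros(n_hid)                 # context units start at zero
    outputs = []
    for x in sequence:
        h = np.tanh(W_xh @ np.atleast_1d(x) + W_hh @ h)
        outputs.append(1.0 / (1.0 + np.exp(-(W_hy @ h))))   # sigmoid output
    return np.array(outputs)

# Temporal XOR-style input: a stream of bits presented one per time step
bits = rng.integers(0, 2, 30)
print(forward(bits).ravel()[:5])
```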
2069c9389df8bb29b7fedf2c2ccfe7aaf82b2832
Transfer learning as a new machine learning paradigm has gained increasing attention lately. In situations where the training data in a target domain are not sufficient to learn predictive models effectively, transfer learning leverages auxiliary source data from other related auxiliary domains for learning. While most of the existing works in this area are only focused on using the source data with the same representational structure as the target data, in this paper, we push this boundary further by extending a heterogeneous transfer learning framework for knowledge transfer between text and images. We observe that for a target-domain classification problem, some annotated images can be found on many social Web sites, which can serve as a bridge to transfer knowledge from the abundant text documents available over the Web. A key question is how to effectively transfer the knowledge in the source data even though the text documents are arbitrary. Our solution is to enrich the representation of the target images with semantic concepts extracted from the auxiliary source data through matrix factorization, and to use the latent semantic features generated by the auxiliary data to build a better image classifier. We empirically verify the effectiveness of our algorithm on the Caltech-256 image dataset.
381231eecd132199821c5aa3ff3f2278f593ea33
a8823ab946321079c63b9bd42f58bd17b96a25e4
Face detection and eyes extraction have an important role in many applications such as face recognition, facial expression analysis, and security login. Detecting the human face and facial structures such as the eyes and nose is a complex procedure for a computer. This paper proposes an algorithm for face detection and eyes extraction from frontal face images using Sobel edge detection and morphological operations. The proposed approach is divided into three phases: preprocessing, identification of the face region, and extraction of the eyes. Resizing of images and grayscale conversion are performed in preprocessing. Face region identification is accomplished by Sobel edge detection and morphological operations. In the last phase, eyes are extracted from the face region with the help of morphological operations. The experiments are conducted on 120, 75, and 40 images of the IMM frontal face database, the FEI face database, and the IMM face database, respectively. The face detection accuracy is 100%, 100%, and 97.50%, and the eyes extraction accuracy rate is 92.50%, 90.66%, and 92.50%, respectively.
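The three phases in this abstract map naturally onto a short OpenCV pipeline. The sketch below is an assumed reconstruction of that flow, not the paper's code: thresholds, kernel sizes, the "face.jpg" input, and the upper-half heuristic for the eye band are all illustrative choices.

```python
import cv2
import numpy as np

# Phase 1: preprocessing (resize + grayscale); "face.jpg" is a hypothetical input
img = cv2.imread("face.jpg")
gray = cv2.cvtColor(cv2.resize(img, (256, 256)), cv2.COLOR_BGR2GRAY)

# Phase 2: Sobel gradient magnitude, binarize, then morphology to get the face blob
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
edges = cv2.convertScaleAbs(np.hypot(gx, gy))
_, binary = cv2.threshold(edges, 60, 255, cv2.THRESH_BINARY)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
region = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
contours, _ = cv2.findContours(region, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
face = gray[y:y + h, x:x + w]

# Phase 3: restrict the search for eyes to the upper part of the face region
eye_band = face[:h // 2, :]
print("face region:", (x, y, w, h), "eye band shape:", eye_band.shape)
```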
3b6911dc5d98faeb79d3d3e60bcdc40cfd7c9273
An aggregate signature scheme is a digital signature that supports aggregation: Given n signatures on n distinct messages from n distinct users, it is possible to aggregate all these signatures into a single short signature. This single signature (and the n original messages) will convince the verifier that the n users did indeed sign the n original messages (i.e., user i signed message M_i for i = 1, ..., n). In this paper we introduce the concept of an aggregate signature, present security models for such signatures, and give several applications for aggregate signatures. We construct an efficient aggregate signature from a recent short signature scheme based on bilinear maps due to Boneh, Lynn, and Shacham. Aggregate signatures are useful for reducing the size of certificate chains (by aggregating all signatures in the chain) and for reducing message size in secure routing protocols such as SBGP. We also show that aggregate signatures give rise to verifiably encrypted signatures. Such signatures enable the verifier to test that a given ciphertext C is the encryption of a signature on a given message M. Verifiably encrypted signatures are used in contract-signing protocols. Finally, we show that similar ideas can be used to extend the short signature scheme to give simple ring signatures.
6d4fa4b9037b64b8383331583430711be321c587
Sentiment analysis is a growing field of research, driven by both commercial applications and academic interest. In this paper, we explore multiclass classification of diary-like blog posts for the sentiment dimensions of valence and arousal, where the aim of the task is to predict the level of valence and arousal of a post on an ordinal five-level scale, from very negative/low to very positive/high, respectively. We show how to map discrete affective states into ordinal scales in these two dimensions, based on the psychological model of Russell's circumplex model of affect and label a previously available corpus with multidimensional, real-valued annotations. Experimental results using regression and one-versus-all approaches of support vector machine classifiers show that although the latter approach provides better exact ordinal class prediction accuracy, regression techniques tend to make smaller scale errors.
9931c6b050e723f5b2a189dd38c81322ac0511de
We present a review on the current state of publicly available datasets within the human action recognition community; highlighting the revival of pose based methods and recent progress of understanding person-person interaction modeling. We categorize datasets regarding several key properties for usage as a benchmark dataset; including the number of class labels, ground truths provided, and application domain they occupy. We also consider the level of abstraction of each dataset; grouping those that present actions, interactions and higher level semantic activities. The survey identifies key appearance and pose based datasets, noting a tendency for simplistic, emphasized, or scripted action classes that are often readily definable by a stable collection of subaction gestures. There is a clear lack of datasets that provide closely related actions, those that are not implicitly identified via a series of poses and gestures, but rather a dynamic set of interactions. We therefore propose a novel dataset that represents complex conversational interactions between two individuals via 3D pose. 8 pairwise interactions describing 7 separate conversation-based scenarios were collected using two Kinect depth sensors. The intention is to provide events that are constructed from numerous primitive actions, interactions and motions, over a period of time; providing a set of subtle action classes that are more representative of the real world, and a challenge to currently developed recognition methodologies. We believe this is among the first datasets devoted to conversational interaction classification using 3D pose features, and the attributed papers show this task is indeed possible. The full dataset is made publicly available to the research community at [1].
26e6b1675e081a514f4fdc0352d6cb211ba6d9c8
We demonstrate relay attacks on Passive Keyless Entry and Start (PKES) systems used in modern cars. We build two efficient and inexpensive attack realizations, wired and wireless physical-layer relays, that allow the attacker to enter and start a car by relaying messages between the car and the smart key. Our relays are completely independent of the modulation, protocol, or presence of strong authentication and encryption. We perform an extensive evaluation on 10 car models from 8 manufacturers. Our results show that relaying the signal in one direction only (from the car to the key) is sufficient to perform the attack while the true distance between the key and car remains large (tested up to 50 meters, non line-of-sight). We also show that, with our setup, the smart key can be excited from up to 8 meters. This removes the need for the attacker to get close to the key in order to establish the relay. We further analyze and discuss critical system characteristics. Given the generality of the relay attack and the number of evaluated systems, it is likely that all PKES systems based on similar designs are also vulnerable to the same attack. Finally, we propose immediate mitigation measures that minimize the risk of relay attacks as well as recent solutions that may prevent relay attacks while preserving the convenience of use, for which PKES systems were initially introduced.
69d685d0cf85dfe70d87c1548b03961366e83663
We present a noncontact method to monitor blood oxygen saturation (SpO2). The method uses a CMOS camera with a trigger control to allow recording of photoplethysmography (PPG) signals alternately at two particular wavelengths, and determines the SpO2 from the measured ratios of the pulsatile to the nonpulsatile components of the PPG signals at these wavelengths. The signal-to-noise ratio (SNR) of the SpO2 value depends on the choice of the wavelengths. We found that the combination of orange (λ = 611 nm) and near infrared (λ = 880 nm) provides the best SNR for the noncontact video-based detection method. This combination is different from that used in traditional contact-based SpO2 measurement since the PPG signal strengths and camera quantum efficiencies at these wavelengths are more amenable to SpO2 measurement using a noncontact method. We also conducted a small pilot study to validate the noncontact method over an SpO2 range of 83%-98%. The study results are consistent with those measured using a reference contact SpO2 device (r = 0.936, p < 0.001). The presented method is particularly suitable for tracking one's health and wellness at home under free-living conditions, and for those who cannot use traditional contact-based PPG devices.
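The ratio-of-ratios computation implied by "ratios of the pulsatile to the nonpulsatile components" can be sketched as follows; the calibration constants A and B are hypothetical placeholders that would have to be fitted against a reference oximeter, and the AC/DC estimators are deliberately simplistic:

```python
import numpy as np

def spo2_from_ppg(ppg_orange, ppg_nir, A=110.0, B=25.0):
    """Estimate SpO2 from two PPG traces recorded at two wavelengths.

    ppg_orange, ppg_nir : 1-D arrays of mean pixel intensity over time
    A, B                : linear calibration constants (placeholders here;
                          in practice fitted against a contact oximeter)
    """
    def pulsatility(ppg):
        dc = np.mean(ppg)      # non-pulsatile (baseline) component
        ac = np.std(ppg)       # crude estimate of the pulsatile amplitude
        return ac / dc

    rr = pulsatility(ppg_orange) / pulsatility(ppg_nir)   # "ratio of ratios"
    return A - B * rr                                     # empirical linear map
```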
51c88134a668cdfaccda2fe5f88919ac122bceda
Detecting multimedia events in web videos is an emerging hot research area in the fields of multimedia and computer vision. In this paper, we introduce the core methods and technologies of the framework we developed recently for our Event Labeling through Analytic Media Processing (E-LAMP) system to deal with different aspects of the overall problem of event detection. More specifically, first, we have developed efficient methods for feature extraction so that we are able to handle large collections of video data comprising thousands of hours of video. Second, we represent the extracted raw features in a spatial bag-of-words model with more effective tilings, such that the spatial layout information of different features and different events is better captured and the overall detection performance improved. Third, different from widely used early and late fusion schemes, a novel algorithm is developed to learn a more robust and discriminative intermediate feature representation from multiple features so that better event models can be built upon it. Finally, to tackle the additional challenge of event detection with only very few positive exemplars, we have developed a novel algorithm which is able to effectively adapt the knowledge learnt from auxiliary sources to assist the event detection. Both our empirical results and the official evaluation results on TRECVID MED’11 and MED’12 demonstrate the excellent performance of the integration of these ideas.
10d6b12fa07c7c8d6c8c3f42c7f1c061c131d4c5
We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
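An illustrative reconstruction of the detection pipeline using off-the-shelf libraries (scikit-image and scikit-learn are our choices here, not the authors' code; parameter values follow the commonly cited defaults rather than the exact configuration in the paper):

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def extract_hog(window):
    # 9 orientation bins, 8x8-pixel cells, 2x2-cell blocks with local contrast
    # normalization, roughly the configuration reported as most effective.
    return hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

# windows: 128x64 grayscale detection windows; labels: 1 = person, 0 = background
windows = [np.random.rand(128, 64) for _ in range(20)]   # placeholder data
labels = np.array([1] * 10 + [0] * 10)

X = np.array([extract_hog(w) for w in windows])
clf = LinearSVC(C=0.01).fit(X, labels)        # linear SVM over HOG descriptors
scores = clf.decision_function(X)             # higher score = more person-like
```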
2337ff38e6cfb09e28c0958f07e2090c993ef6e8
For many pattern recognition tasks, the ideal input feature would be invariant to multiple confounding properties (such as illumination and viewing angle, in computer vision applications). Recently, deep architectures trained in an unsupervised manner have been proposed as an automatic method for extracting useful features. However, it is difficult to evaluate the learned features by any means other than using them in a classifier. In this paper, we propose a number of empirical tests that directly measure the degree to which these learned features are invariant to different input transformations. We find that stacked autoencoders learn features that become modestly more invariant with depth when trained on natural images. We find that convolutional deep belief networks learn substantially more invariant features in each layer. These results further justify the use of “deep” vs. “shallower” representations, but suggest that mechanisms beyond merely stacking one autoencoder on top of another may be important for achieving invariance. Our evaluation metrics can also be used to evaluate future work in deep learning, and thus help the development of future algorithms.
31b58ced31f22eab10bd3ee2d9174e7c14c27c01
With the advent of the Internet, billions of images are now freely available online and constitute a dense sampling of the visual world. Using a variety of non-parametric methods, we explore this world with the aid of a large dataset of 79,302,017 images collected from the Internet. Motivated by psychophysical results showing the remarkable tolerance of the human visual system to degradations in image resolution, the images in the dataset are stored as 32 x 32 color images. Each image is loosely labeled with one of the 75,062 non-abstract nouns in English, as listed in the Wordnet lexical database. Hence the image database gives a comprehensive coverage of all object categories and scenes. The semantic information from Wordnet can be used in conjunction with nearest-neighbor methods to perform object classification over a range of semantic levels minimizing the effects of labeling noise. For certain classes that are particularly prevalent in the dataset, such as people, we are able to demonstrate a recognition performance comparable to class-specific Viola-Jones style detectors.
4b605e6a9362485bfe69950432fa1f896e7d19bf
Automatic face recognition technologies have seen significant improvements in performance due to a combination of advances in deep learning and availability of larger datasets for training deep networks. Since recognizing faces is a task that humans are believed to be very good at, it is only natural to compare the relative performance of automated face recognition and humans when processing fully unconstrained facial imagery. In this work, we expand on previous studies of the recognition accuracy of humans and automated systems by performing several novel analyses utilizing unconstrained face imagery. We examine the impact on performance when human recognizers are presented with varying amounts of imagery per subject, immutable attributes such as gender, and circumstantial attributes such as occlusion, illumination, and pose. Results indicate that humans greatly outperform state of the art automated face recognition algorithms on the challenging IJB-A dataset.
a4d510439644d52701f852d9dd34bbd37f4b8b78
The SLEUTH model, based on the Cellular Automata (CA), can be applied to city development simulation in metropolitan areas. In this study the SLEUTH model was used to model the urban expansion and predict the future possible behavior of the urban growth in Tehran. The fundamental data were five Landsat TM and ETM images of 1988, 1992, 1998, 2001 and 2010. Three scenarios were designed to simulate the spatial pattern. The first scenario assumed that the historical urbanization mode would persist and that the only limitations for development were height and slope. The second one was a compact scenario which made the growth mostly internal and limited the expansion of suburban areas. The last scenario proposed a polycentric urban structure which let the little patches
f19e6e8a06cba5fc8cf234881419de9193bba9d0
Neural Networks are commonly used in classification and decision tasks. In this paper, we focus on the problem of the local confidence of their results. We review some notions from statistical decision theory that offer an insight on the determination and use of confidence measures for classification with Neural Networks. We then present an overview of the existing confidence measures and finally propose a simple measure which combines the benefits of the probabilistic interpretation of network outputs and the estimation of the quality of the model by bootstrap error estimation. We discuss empirical results on a real-world application and an artificial problem and show that the simplest measure often behaves better than more sophisticated ones, but may be dangerous under certain situations.
4a5be26509557f0a1a911e639868bfe9d002d664
The Manufacturing Messaging Specification (MMS) protocol is widely used in industrial process control applications, but it is poorly documented. In this paper we present an analysis of the MMS protocol in order to improve understanding of MMS in the context of information security. Our findings show that MMS has insufficient security mechanisms, and the meagre security mechanisms that are available are not implemented in commercially available industrial devices.
15a2ef5fac225c864759b28913b313908401043f
In order to gain their customers' trust, software vendors can certify their products according to security standards, e.g., the Common Criteria (ISO 15408). However, a Common Criteria certification requires a comprehensible documentation of the software product. The creation of this documentation results in high costs in terms of time and money. We propose a software development process that supports the creation of the required documentation for a Common Criteria certification. Hence, we do not need to create the documentation after the software is built. Furthermore, we propose to use an enhanced version of the requirements-driven software engineering process called ADIT to discover possible problems with the establishment of Common Criteria documents. We aim to detect these issues before the certification process. Thus, we avoid expensive delays of the certification effort. ADIT provides a seamless development approach that allows consistency checks between different kinds of UML models. ADIT also supports traceability from security requirements to design documents. We illustrate our approach with the development of a smart metering gateway system.
21968ae000669eb4cf03718a0d97e23a6bf75926
Recently, there has been tremendous interest in the phenomenon of influence propagation in social networks. The studies in this area assume they have as input to their problems a social graph with edges labeled with probabilities of influence between users. However, the question of where these probabilities come from or how they can be computed from real social network data has been largely ignored until now. Thus it is interesting to ask whether from a social graph and a log of actions by its users, one can build models of influence. This is the main problem attacked in this paper. In addition to proposing models and algorithms for learning the model parameters and for testing the learned models to make predictions, we also develop techniques for predicting the time by which a user may be expected to perform an action. We validate our ideas and techniques using the Flickr data set consisting of a social graph with 1.3M nodes, 40M edges, and an action log consisting of 35M tuples referring to 300K distinct actions. Beyond showing that there is genuine influence happening in a real social network, we show that our techniques have excellent prediction performance.
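One simple member of the family of models discussed above is a static Bernoulli estimate of pairwise influence, computed directly from a social graph and an action log. The counting scheme below is a simplified illustration (no time window, no credit sharing among multiple influencers), not the paper's exact estimator:

```python
from collections import defaultdict

def influence_probabilities(edges, action_log):
    """edges: iterable of (v, u) pairs meaning v is a neighbor of u.
    action_log: list of (user, action, time) tuples.

    Returns p[(v, u)] = fraction of v's actions that u performed later,
    a crude maximum-likelihood estimate of v's influence on u."""
    first_time = {}                              # (user, action) -> earliest time
    for user, action, t in action_log:
        key = (user, action)
        if key not in first_time or t < first_time[key]:
            first_time[key] = t

    actions_by_user = defaultdict(set)
    for user, action, _ in action_log:
        actions_by_user[user].add(action)

    p = {}
    for v, u in edges:
        a_v = actions_by_user[v]
        if not a_v:
            continue
        propagated = sum(
            1 for a in a_v
            if a in actions_by_user[u] and first_time[(v, a)] < first_time[(u, a)]
        )
        p[(v, u)] = propagated / len(a_v)
    return p
```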
c8a04d0cbb9f70e86800b11b594c9a05d7b6bac0
61dc8de84e0f4aab21a03833aeadcefa87d6d4e5
Privacy-preserving data aggregation in ad hoc networks is a challenging problem, considering the distributed communication and control requirements, dynamic network topology, unreliable communication links, etc. The difficulty is exacerbated when there exist dishonest nodes, and how to ensure privacy, accuracy, and robustness against dishonest nodes remains an open issue. Different from the widely used cryptographic approaches, in this paper we address this challenging problem by exploiting the distributed consensus technique. We first propose a secure consensus-based data aggregation (SCDA) algorithm that guarantees an accurate sum aggregation while preserving the privacy of sensitive data. Then, to mitigate the pollution from dishonest nodes, we propose an Enhanced SCDA (E-SCDA) algorithm that allows neighbors to detect dishonest nodes, and we derive the error bound when there are undetectable dishonest nodes. We prove the convergence of both SCDA and E-SCDA. We also prove that the proposed algorithms achieve (ε, σ)-data-privacy, and obtain the mathematical relationship between ε and σ. Extensive simulations have shown that the proposed algorithms have high accuracy and low complexity, and that they are robust against network dynamics and dishonest nodes.
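A minimal sketch of the consensus-averaging primitive that such algorithms build on; the privacy noise injection and dishonest-node detection that distinguish SCDA and E-SCDA are omitted, and the step size and topology below are purely illustrative:

```python
import numpy as np

def consensus_sum(values, neighbors, eps=0.1, iters=200):
    """values: initial private value per node; neighbors: adjacency list.
    Each node repeatedly moves toward its neighbors' states; on a connected
    undirected graph all states converge to the network average, so each node
    can locally estimate the sum as n times its own state."""
    x = np.array(values, dtype=float)
    n = len(x)
    for _ in range(iters):
        x_new = x.copy()
        for i in range(n):
            x_new[i] += eps * sum(x[j] - x[i] for j in neighbors[i])
        x = x_new
    return n * x   # each node's local estimate of the network-wide sum

# Example: a 4-node ring with private values summing to 16.
vals = [1.0, 5.0, 3.0, 7.0]
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
estimates = consensus_sum(vals, adj)   # all entries converge toward 16
```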
dbde4f47efed72cbb99f412a9a4c17fe39fa04fc
Natural image generation is currently one of the most actively explored fields in Deep Learning. Many approaches, e.g. for state-of-the-art artistic style transfer or natural texture synthesis, rely on the statistics of hierarchical representations in supervisedly trained deep neural networks. It is, however, unclear what aspects of this feature representation are crucial for natural image generation: is it the depth, the pooling or the training of the features on natural images? We here address this question for the task of natural texture synthesis and show that none of the above aspects are indispensable. Instead, we demonstrate that natural textures of high perceptual quality can be generated from networks with only a single layer, no pooling and random filters.
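The texture statistics at issue in this line of work are the Gram matrices of feature maps popularized by Gatys et al.; written out in our notation, for feature maps F^l of layer l with N_l channels and M_l spatial positions:

```latex
G^{l}_{ij} \;=\; \sum_{k=1}^{M_l} F^{l}_{ik}\, F^{l}_{jk},
\qquad
\mathcal{L}_{\text{texture}}
  \;=\; \sum_{l} \frac{w_l}{4\, N_l^{2} M_l^{2}}
        \sum_{i,j} \bigl(G^{l}_{ij} - \hat{G}^{l}_{ij}\bigr)^{2},
```

where the hatted Gram matrices come from the reference texture. The paper's claim is that the network producing F^l can be a single layer with random filters and no pooling, rather than a deep, pooled, supervisedly trained one.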
acdc3d8d8c880bc9b9e10b337b09bed4c0c762d8
Telecommunication systems integrated within garments and wearable products are such methods by which medical devices are making an impact on enhancing healthcare provisions around the clock. These garments when fully developed will be capable of alerting and demanding attention if and when required along with minimizing hospital resources and labour. Furthermore, they can play a major role in preventative ailments, health irregularities and unforeseen heart or brain disorders in apparently healthy individuals. This work presents the feasibility of investigating an Ultra-WideBand (UWB) antenna made from fully textile materials that were used for the substrate as well as the conducting parts of the designed antenna. Simulated and measured results show that the proposed antenna design meets the requirements of wide working bandwidth and provides 17GHz bandwidth with compact size, washable and flexible materials. Results in terms of return loss, bandwidth, radiation pattern, current distribution as well as gain and efficiency are presented to validate the usefulness of the current manuscript design. The work presented here has profound implications for future studies of a standalone suite that may one day help to provide wearer (patient) with such reliable and comfortable medical monitoring techniques. Received 12 April 2011, Accepted 23 May 2011, Scheduled 10 June 2011 * Corresponding author: Mai A. Rahman Osman ([email protected]).
aab8c9514b473c4ec9c47d780b7c79112add9008
Case study as a research strategy often emerges as an obvious option for students and other new researchers who are seeking to undertake a modest scale research project based on their workplace or the comparison of a limited number of organisations. The most challenging aspect of the application of case study research in this context is to lift the investigation from a descriptive account of ‘what happens’ to a piece of research that can lay claim to being a worthwhile, if modest addition to knowledge. This article draws heavily on established textbooks on case study research and related areas, such as Yin, 1994, Hamel et al., 1993, Eaton, 1992, Gomm, 2000, Perry, 1998, and Saunders et al., 2000 but seeks to distil key aspects of case study research in such a way as to encourage new researchers to grapple with and apply some of the key principles of this research approach. The article explains when case study research can be used, research design, data collection, and data analysis, and finally offers suggestions for drawing on the evidence in writing up a report or dissertation.
a088bed7ac41ae77dbb23041626eb8424d96a5ba
This paper describes the Ephyra question answering engine, a modular and extensible framework that allows multiple approaches to question answering to be integrated in one system. Our framework can be adapted to languages other than English by replacing language-specific components. It supports the two major approaches to question answering, knowledge annotation and knowledge mining. Ephyra uses the web as a data resource, but could also work with smaller corpora. In addition, we propose a novel approach to question interpretation which abstracts from the original formulation of the question. Text patterns are used to interpret a question and to extract answers from text snippets. Our system automatically learns the patterns for answer extraction, using question-answer pairs as training data. Experimental results revealed the potential of this approach.
227ed02b3e5edf4c5b08539c779eca90683549e6
A great majority of existing frameworks are inadequate because they do not address universal applicability in countries with particular socio-economic and technological settings. Though there is so far no “one size fits all” strategy for implementing eGovernment, there are some essential common elements in the transformation. Therefore, this paper attempts to develop a single sustainable model based on established theories and the lessons learned from existing e-Participation initiatives in developing and developed countries, so that the benefits of ICT can be maximized and greater participation ensured.
6afe5319630d966c1355f3812f9d4b4b4d6d9fd0
a2c2999b134ba376c5ba3b610900a8d07722ccb3
ab116cf4e1d5ed947f4d762518738305e3a0ab74
64f51fe4f6b078142166395ed209d423454007fb
A large amount of annotated training images is critical for training accurate and robust deep network models, but the collection of a large amount of annotated training images is often time-consuming and costly. Image synthesis alleviates this constraint by generating annotated training images automatically by machines, which has attracted increasing interest in recent deep learning research. We develop an innovative image synthesis technique that composes annotated training images by realistically embedding foreground objects of interest (OOI) into background images. The proposed technique consists of two key components that in principle boost the usefulness of the synthesized images in deep network training. The first is context-aware semantic coherence, which ensures that the OOI are placed around semantically coherent regions within the background image. The second is harmonious appearance adaptation, which ensures that the embedded OOI are agreeable to the surrounding background in terms of both geometry alignment and appearance realism. The proposed technique has been evaluated over two related but very different computer vision challenges, namely, scene text detection and scene text recognition. Experiments over a number of public datasets demonstrate the effectiveness of our proposed image synthesis technique: the use of our synthesized images in deep network training achieves similar or even better scene text detection and scene text recognition performance compared with using real images.
ceb4040acf7f27b4ca55da61651a14e3a1ef26a8
226cfb67d2d8eba835f2ec695fe28b78b556a19f
The Bitcoin cryptocurrency records its transactions in a public log called the blockchain. Its security rests critically on the distributed protocol that maintains the blockchain, run by participants called miners. Conventional wisdom asserts that the mining protocol is incentive-compatible and secure against colluding minority groups, that is, it incentivizes miners to follow the protocol as prescribed. We show that the Bitcoin mining protocol is not incentive-compatible. We present an attack with which colluding miners' revenue is larger than their fair share. The attack can have significant consequences for Bitcoin: rational miners will prefer to join the attackers, and the colluding group will increase in size until it becomes a majority. At this point, the Bitcoin system ceases to be a decentralized currency. Unless certain assumptions are made, selfish mining may be feasible for any coalition size of colluding miners. We propose a practical modification to the Bitcoin protocol that protects Bitcoin in the general case. It prohibits selfish mining by a coalition that commands less than 1/4 of the resources. This threshold is lower than the wrongly assumed 1/2 bound, but better than the current reality where a coalition of any size can compromise the system.
2b00e526490d65f2ec00107fb7bcce0ace5960c7
This paper addresses the Internet of Things. The main enabling factor of this promising paradigm is the integration of several technologies and communications solutions. Identification and tracking technologies, wired and wireless sensor and actuator networks, enhanced communication protocols (shared with the Next Generation Internet), and distributed intelligence for smart objects are just the most relevant. As one can easily imagine, any serious contribution to the advance of the Internet of Things must necessarily be the result of synergetic activities conducted in different fields of knowledge, such as telecommunications, informatics, electronics and social science. In such a complex scenario, this survey is directed to those who want to approach this complex discipline and contribute to its development. Different visions of this Internet of Things paradigm are reported and enabling technologies reviewed. What emerges is that major issues still need to be addressed by the research community. The most relevant among them are addressed in detail.
839a69a55d862563fe75528ec5d763fb01c09c61
Low-dimensional vector embeddings, computed using LSTMs or simpler techniques, are a popular approach for capturing the “meaning” of text and a form of unsupervised learning useful for downstream tasks. However, their power is not theoretically understood. The current paper derives formal understanding by looking at the subcase of linear embedding schemes. Using the theory of compressed sensing we show that representations combining the constituent word vectors are essentially information-preserving linear measurements of Bag-of-n-Grams (BonG) representations of text. This leads to a new theoretical result about LSTMs: low-dimensional embeddings derived from a low-memory LSTM are provably at least as powerful on classification tasks, up to small error, as a linear classifier over BonG vectors, a result that extensive empirical work has thus far been unable to show. Our experiments support these theoretical findings and establish strong, simple, and unsupervised baselines on standard benchmarks that in some cases are state of the art among word-level methods. We also show a surprising new property of embeddings such as GloVe and word2vec: they form a good sensing matrix for text that is more efficient than random matrices, the standard sparse recovery tool, which may explain why they lead to better representations in practice.
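The linear subcase analyzed in the paper is easy to state concretely: additive composition of word vectors is exactly a linear measurement W^T x of the bag-of-words count vector x (and likewise for n-gram vectors and Bag-of-n-Grams). A small numpy check of that identity, with a toy vocabulary:

```python
import numpy as np

vocab = ["cat", "sat", "mat", "dog"]
W = np.random.randn(len(vocab), 50)        # rows are d-dimensional word vectors

doc = ["cat", "sat", "mat", "cat"]
counts = np.zeros(len(vocab))              # Bag-of-Words count vector x
for w in doc:
    counts[vocab.index(w)] += 1

emb_sum = sum(W[vocab.index(w)] for w in doc)   # additive composition
emb_lin = W.T @ counts                          # the same thing, as W^T x
assert np.allclose(emb_sum, emb_lin)

# Compressed sensing then asks when x is recoverable from W^T x, which is the
# link the paper exploits to compare embeddings with Bag-of-n-Grams classifiers.
```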
06e04fd496cd805bca69eea2c1977f90afeeef83
Most approaches in algorithmic fairness constrain machine learning methods so the resulting predictions satisfy one of several intuitive notions of fairness. While this may help private companies comply with non-discrimination laws or avoid negative publicity, we believe it is often too little, too late. By the time the training data is collected, individuals in disadvantaged groups have already suffered from discrimination and lost opportunities due to factors out of their control. In the present work we focus instead on interventions such as a new public policy, and in particular, how to maximize their positive effects while improving the fairness of the overall system. We use causal methods to model the effects of interventions, allowing for potential interference–each individual’s outcome may depend on who else receives the intervention. We demonstrate this with an example of allocating a budget of teaching resources using a dataset of schools in New York City.
44dd6443a07f0d139717be74a98988e3ec80beb8
Several well-developed approaches to inductive learning now exist, but each has specific limitations that are hard to overcome. Multi-strategy learning attempts to tackle this problem by combining multiple methods in one algorithm. This article describes a unification of two widely-used empirical approaches: rule induction and instance-based learning. In the new algorithm, instances are treated as maximally specific rules, and classification is performed using a best-match strategy. Rules are learned by gradually generalizing instances until no improvement in apparent accuracy is obtained. Theoretical analysis shows this approach to be efficient. It is implemented in the RISE 3.1 system. In an extensive empirical study, RISE consistently achieves higher accuracies than state-of-the-art representatives of both its parent approaches (PEBLS and CN2), as well as a decision tree learner (C4.5). Lesion studies show that each of RISE's components is essential to this performance. Most significantly, in 14 of the 30 domains studied, RISE is more accurate than the best of PEBLS and CN2, showing that a significant synergy can be obtained by combining multiple empirical methods.
b38ac03b806a291593c51cb51818ce8e919a1a43
4debb3fe83ea743a888aa2ec8f4252bbe6d0fcb8
Open Source Software (OSS) has become the subject of much commercial interest of late. Certainly, OSS seems to hold much promise in addressing the core issues of the software crisis, namely that of software taking too long to develop, exceeding its budget, and not working very well. Indeed, there have been several examples of significant OSS success stories—the Linux operating system, the Apache web server, the BIND domain name resolution utility, to name but a few. However, little by way of rigorous academic research on OSS has been conducted to date. In this study, a framework was derived from two previous frameworks which have been very influential in the IS field, namely that of Zachman’s IS architecture (ISA) and Checkland’s CATWOE framework from Soft Systems Methodology (SSM). The resulting framework is used to analyze the OSS approach in detail. The potential future of OSS research is also discussed.
4bd48f4438ba7bf731e91cb29508a290e938a1d0
A compact omni-directional antenna of circular polarization (CP) is presented for 2.4 GHz WLAN access-point applications. The antenna consists of four bended monopoles and a feeding network simultaneously exciting these four monopoles. The electrical size of the CP antenna is only λ0/5 × λ0/5 × λ0/13. The impedance bandwidth (|S11| < -10 dB) is 3.85% (2.392 GHz to 2.486 GHz) and the axial ratio in the azimuth plane is lower than 0.5 dB in the operating band.
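For a rough sense of scale, the quoted electrical size can be converted to physical dimensions using the free-space wavelength at the band center (a back-of-the-envelope calculation, ignoring any substrate loading):

```python
c = 3.0e8                  # speed of light, m/s
f = 2.44e9                 # approximate band-center frequency, Hz
lam = c / f                # free-space wavelength, about 0.123 m
size = (lam / 5, lam / 5, lam / 13)
# roughly (24.6 mm, 24.6 mm, 9.5 mm), i.e. a compact access-point footprint
```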
0015fa48e4ab633985df789920ef1e0c75d4b7a8
We investigate the application of Support Vector Machines (SVMs) in computer vision. SVM is a learning technique developed by V. Vapnik and his team (AT&T Bell Labs.) that can be seen as a new method for training polynomial, neural network, or Radial Basis Function classifiers. The decision surfaces are found by solving a linearly constrained quadratic programming problem. This optimization problem is challenging because the quadratic form is completely dense and the memory requirements grow with the square of the number of data points. We present a decomposition algorithm that guarantees global optimality, and can be used to train SVMs over very large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions which are used both to generate improved iterative values, and also to establish the stopping criteria for the algorithm. We present experimental results of our implementation of SVM, and demonstrate the feasibility of our approach on a face detection problem that involves a data set of 50,000 data points.
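The quadratic program being decomposed is the standard soft-margin SVM dual, written here in common notation (ours, not quoted from the paper):

```latex
\max_{\alpha}\;\; \sum_{i=1}^{\ell} \alpha_i
  - \frac{1}{2} \sum_{i=1}^{\ell} \sum_{j=1}^{\ell}
    \alpha_i \alpha_j\, y_i y_j\, K(x_i, x_j)
\quad \text{s.t.} \quad
0 \le \alpha_i \le C, \qquad \sum_{i=1}^{\ell} \alpha_i y_i = 0.
```

Its Hessian is dense and of size ℓ × ℓ, which is why optimizing over small working sets of the α variables at a time, while checking optimality conditions on the rest, is attractive when ℓ is on the order of 50,000.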
ca74a59166af72a14af031504e31d86c7953dc91
0122e063ca5f0f9fb9d144d44d41421503252010
Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with billions of parameters using tens of thousands of CPU cores. We have developed a software framework called DistBelief that can utilize computing clusters with thousands of machines to train large models. Within this framework, we have developed two algorithms for large-scale distributed training: (i) Downpour SGD, an asynchronous stochastic gradient descent procedure supporting a large number of model replicas, and (ii) Sandblaster, a framework that supports a variety of distributed batch optimization procedures, including a distributed implementation of L-BFGS. Downpour SGD and Sandblaster L-BFGS both increase the scale and speed of deep network training. We have successfully used our system to train a deep network 30x larger than previously reported in the literature, achieving state-of-the-art performance on ImageNet, a visual object recognition task with 16 million images and 21k categories. We show that these same techniques dramatically accelerate the training of a more modestly sized deep network for a commercial speech recognition service. Although we focus on and report performance of these methods as applied to training large neural networks, the underlying algorithms are applicable to any gradient-based machine learning algorithm.
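A toy, single-process sketch of the asynchronous parameter-server pattern that Downpour SGD is an instance of; this is a conceptual illustration with made-up names, not DistBelief's API, and a real deployment shards both parameters and model replicas across many machines:

```python
import threading
import numpy as np

class ParameterServer:
    """Holds the global model and applies gradient updates as they arrive."""
    def __init__(self, dim, lr=0.01):
        self.w = np.zeros(dim)
        self.lr = lr
        self.lock = threading.Lock()

    def pull(self):
        with self.lock:
            return self.w.copy()

    def push(self, grad):
        with self.lock:
            self.w -= self.lr * grad   # updates may be stale: that is the point

def worker(ps, data, labels, steps=500):
    for _ in range(steps):
        w = ps.pull()                        # fetch current parameters
        i = np.random.randint(len(data))
        x, y = data[i], labels[i]
        grad = (w @ x - y) * x               # gradient of a squared loss
        ps.push(grad)                        # send the update asynchronously

# Two model replicas training against the same parameter server.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = X @ rng.normal(size=10)
ps = ParameterServer(dim=10)
threads = [threading.Thread(target=worker, args=(ps, X, y)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```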
f5fca08badb5f182bfc5bc9050e786d40e0196df
A water environmental monitoring system based on a wireless sensor network is proposed. It consists of three parts: data monitoring nodes, a data base station and a remote monitoring center. This system is suitable for complex and large-scale water environment monitoring, such as for reservoirs, lakes, rivers, swamps, and shallow or deep groundwaters. This paper is devoted to the explanation and illustration of our new water environment monitoring system design. The system has successfully accomplished online auto-monitoring of the water temperature and pH value of an artificial lake. The system's measurement capacity ranges from 0 to 80 °C for water temperature, with an accuracy of ±0.5 °C, and from 0 to 14 for pH value, with an accuracy of ±0.05 pH units. Sensors applicable to different water quality scenarios should be installed at the nodes to meet the monitoring demands for a variety of water environments and to obtain different parameters. The monitoring system thus promises broad applicability prospects.
0969bae35536395aff521f6fbcd9d5ff379664e3
We present a new metric for routing in multi-radio, multi-hop wireless networks. We focus on wireless networks with stationary nodes, such as community wireless networks.The goal of the metric is to choose a high-throughput path between a source and a destination. Our metric assigns weights to individual links based on the Expected Transmission Time (ETT) of a packet over the link. The ETT is a function of the loss rate and the bandwidth of the link. The individual link weights are combined into a path metric called Weighted Cumulative ETT (WCETT) that explicitly accounts for the interference among links that use the same channel. The WCETT metric is incorporated into a routing protocol that we call Multi-Radio Link-Quality Source Routing.We studied the performance of our metric by implementing it in a wireless testbed consisting of 23 nodes, each equipped with two 802.11 wireless cards. We find that in a multi-radio environment, our metric significantly outperforms previously-proposed routing metrics by making judicious use of the second radio.
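Written out in the usual notation (ours, not quoted from the paper), with S the packet size, B the link bandwidth, ETX the expected transmission count implied by the measured loss rate, and X_j the sum of transmission times of the hops that use channel j:

```latex
\mathrm{ETT} \;=\; \mathrm{ETX} \times \frac{S}{B},
\qquad
\mathrm{WCETT} \;=\; (1-\beta) \sum_{i=1}^{n} \mathrm{ETT}_i
  \;+\; \beta \max_{1 \le j \le k} X_j,
\qquad 0 \le \beta \le 1.
```

The summation term rewards short, fast paths, while the max term penalizes paths that concentrate their hops on a single channel, which is how the metric accounts for intra-path interference.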
3a01f9933066f0950435a509c2b7bf427a1ebd7f
In this paper we present a new approach for data exfiltration by leaking data from monitor's LED to Smartphone's camera. The new approach may be used by attackers to leak valuable information from the organization as part of an Advanced Persistent Threat (APT). The proof of concept that was developed is described in the paper followed by a description of an experiment that demonstrates that practically people are not aware of the attack. We propose ways that will facilitate the detection of such threats and some possible countermeasures.
698b8181cd613a72adeac0d75252afe7f57a5180
We present two new parallel implementations of the tree-ensemble algorithms Random Forest (RF) and Extremely randomized trees (ERT) for emerging many-core platforms, e.g., contemporary graphics cards suitable for general-purpose computing (GPGPU). Random Forest and Extremely randomized trees are ensemble learners for classification and regression. They operate by constructing a multitude of decision trees at training time and outputting a prediction by comparing the outputs of the individual trees. Thanks to the inherent parallelism of the task, an obvious platform for its computation is to employ contemporary GPUs with a large number of processing cores. Previous parallel algorithms for Random Forests in the literature are either designed for traditional multi-core CPU platforms or for early-generation GPUs with simpler hardware architectures and a relatively small number of cores. The new parallel algorithms are designed for contemporary GPUs with a large number of cores and take into account aspects of the newer hardware architectures, such as the memory hierarchy and thread scheduling. They are implemented using the C/C++ language and the CUDA interface for best possible performance on NVidia-based GPUs. An experimental study comparing with the most important previous solutions for CPU and GPU platforms shows significant improvement for the new implementations, often by several orders of magnitude.
1b4e04381ddd2afab1660437931cd62468370a98
Text corpora which are tagged with part-of-speech information are useful in many areas of linguistic research. In this paper, a new part-of-speech tagging method based on neural networks (Net-Tagger) is presented and its performance is compared to that of an HMM-tagger (Cutting et al., 1992) and a trigram-based tagger (Kempe, 1993). It is shown that the Net-Tagger performs as well as the trigram-based tagger and better than the HMM-tagger.
68ba338be70fd3c5bdbc1c271243740f2e0a0f0c
We investigate the problem of generating fast approximate answers to queries posed to large sparse binary data sets. We focus in particular on probabilistic model-based approaches to this problem and develop a number of techniques that are significantly more accurate than a baseline independence model. In particular, we introduce two techniques for building probabilistic models from frequent itemsets: the itemset maximum entropy method, and the itemset inclusion-exclusion model. In the maximum entropy method we treat itemsets as constraints on the distribution of the query variables and use the maximum entropy principle to build a joint probability model for the query attributes online. In the inclusion-exclusion model, itemsets and their frequencies are stored in a data structure called an ADtree that supports an efficient implementation of the inclusion-exclusion principle in order to answer the query. We empirically compare these two itemset-based models to direct querying of the original data, querying of samples of the original data, as well as other probabilistic models such as the independence model, the Chow-Liu tree model, and the Bernoulli mixture model. These models are able to handle high dimensionality (hundreds or thousands of attributes), whereas most other work on this topic has focused on relatively low-dimensional OLAP problems. Experimental results on both simulated and real-world transaction data sets illustrate various fundamental tradeoffs between approximation error, model complexity, and the online time required to compute a query answer.
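For completeness, the identity that the inclusion-exclusion model's name refers to is the standard one below; the paper applies it to itemset frequencies stored in the ADtree rather than to arbitrary events:

```latex
P\Bigl(\bigcup_{i=1}^{n} A_i\Bigr)
  \;=\; \sum_{i} P(A_i)
  \;-\; \sum_{i<j} P(A_i \cap A_j)
  \;+\; \sum_{i<j<k} P(A_i \cap A_j \cap A_k)
  \;-\; \cdots
  \;+\; (-1)^{n+1} P\Bigl(\bigcap_{i=1}^{n} A_i\Bigr).
```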
90522a98ccce3aa0ce20b4dfedb76518b886ed96
Special thanks to Robert Skipper and Aaron Hyman for their assistance on an earlier version of this manuscript. Also thanks to Shaun McQuitty, Robin Peterson, Chuck Pickett, Kevin Shanahan, and the Journal of Business Research editors and reviewers, for their helpful comments. An earlier version of this manuscript won the Shaw Award for best paper presented at 2001 Society for Marketing Advances conference. An abridged version of this manuscript has been accepted for publication in Journal of Business Research.
2e0db4d4c8bdc7e11541b362cb9f8972f66563ab
05c025af60aeab10a3069256674325802c844212
We propose the Encoder-Recurrent-Decoder (ERD) model for recognition and prediction of human body pose in videos and motion capture. The ERD model is a recurrent neural network that incorporates nonlinear encoder and decoder networks before and after recurrent layers. We test instantiations of ERD architectures in the tasks of motion capture (mocap) generation, body pose labeling and body pose forecasting in videos. Our model handles mocap training data across multiple subjects and activity domains, and synthesizes novel motions while avoiding drifting for long periods of time. For human pose labeling, ERD outperforms a per frame body part detector by resolving left-right body part confusions. For video pose forecasting, ERD predicts body joint displacements across a temporal horizon of 400ms and outperforms a first order motion model based on optical flow. ERDs extend previous Long Short Term Memory (LSTM) models in the literature to jointly learn representations and their dynamics. Our experiments show such representation learning is crucial for both labeling and prediction in space-time. We find this is a distinguishing feature between the spatio-temporal visual domain in comparison to 1D text, speech or handwriting, where straightforward hard coded representations have shown excellent results when directly combined with recurrent units [31].
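A skeletal PyTorch rendering of the encoder-recurrent-decoder layout described above; the layer sizes, the 54-dimensional pose vector, and the use of nn.LSTM are illustrative assumptions, not the original implementation:

```python
import torch
import torch.nn as nn

class ERD(nn.Module):
    def __init__(self, in_dim=54, hid=512, latent=256):
        super().__init__()
        # Nonlinear encoder applied to each frame before the recurrent core.
        self.encoder = nn.Sequential(nn.Linear(in_dim, latent), nn.ReLU(),
                                     nn.Linear(latent, latent), nn.ReLU())
        # Recurrent layers modelling the temporal dynamics of the latent code.
        self.rnn = nn.LSTM(latent, hid, num_layers=2, batch_first=True)
        # Nonlinear decoder mapping the recurrent state back to a pose vector.
        self.decoder = nn.Sequential(nn.Linear(hid, latent), nn.ReLU(),
                                     nn.Linear(latent, in_dim))

    def forward(self, x):                # x: (batch, time, in_dim) pose sequence
        z = self.encoder(x)
        h, _ = self.rnn(z)
        return self.decoder(h)           # per-step prediction, e.g. the next frame

model = ERD()
seq = torch.randn(8, 100, 54)            # a batch of mocap sequences (placeholder)
pred = model(seq)                        # same shape as the input sequence
```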
092b64ce89a7ec652da935758f5c6d59499cde6e
We introduce a new dataset, Human3.6M, of 3.6 Million accurate 3D Human poses, acquired by recording the performance of 5 female and 6 male subjects, under 4 different viewpoints, for training realistic human sensing systems and for evaluating the next generation of human pose estimation models and algorithms. Besides increasing the size of the datasets in the current state-of-the-art by several orders of magnitude, we also aim to complement such datasets with a diverse set of motions and poses encountered as part of typical human activities (taking photos, talking on the phone, posing, greeting, eating, etc.), with additional synchronized image, human motion capture, and time of flight (depth) data, and with accurate 3D body scans of all the subject actors involved. We also provide controlled mixed reality evaluation scenarios where 3D human models are animated using motion capture and inserted using correct 3D geometry, in complex real environments, viewed with moving cameras, and under occlusion. Finally, we provide a set of large-scale statistical models and detailed evaluation baselines for the dataset illustrating its diversity and the scope for improvement by future work in the research community. Our experiments show that our best large-scale model can leverage our full training set to obtain a 20% improvement in performance compared to a training set of the scale of the largest existing public dataset for this problem. Yet the potential for improvement by leveraging higher capacity, more complex models with our large dataset, is substantially vaster and should stimulate future research. The dataset together with code for the associated large-scale learning models, features, visualization tools, as well as the evaluation server, is available online at http://vision.imar.ro/human3.6m.
ba4a037153bff392b1e56a4109de4b04521f17b2
Crisis informatics investigates how society's pervasive access to technology is transforming how it responds to mass emergency events. To study this transformation, researchers require access to large sets of data that, because of their volume and heterogeneous nature, are difficult to collect and analyze. To address this concern, we have designed and implemented an environment - EPIC Analyze - that supports researchers with the collection and analysis of social media data. Our research has identified the types of components - such as NoSQL, MapReduce, caching, and search - needed to ensure that these services are reliable, scalable, extensible, and efficient. We describe the design challenges encountered - such as data modeling, time vs. space tradeoffs, and the need for a useful and usable system - when building EPIC Analyze and discuss its scalability, performance, and functionality.
4416236e5ee4239e86e3cf3db6a2d1a2ff2ae720
Modern analytics applications combine multiple functions from different libraries and frameworks to build increasingly complex workflows. Even though each function may achieve high performance in isolation, the performance of the combined workflow is often an order of magnitude below hardware limits due to extensive data movement across the functions. To address this problem, we propose Weld, a runtime for data-intensive applications that optimizes across disjoint libraries and functions. Weld uses a common intermediate representation to capture the structure of diverse data-parallel workloads, including SQL, machine learning and graph analytics. It then performs key data movement optimizations and generates efficient parallel code for the whole workflow. Weld can be integrated incrementally into existing frameworks like TensorFlow, Apache Spark, NumPy and Pandas without changing their user-facing APIs. We show that Weld can speed up these frameworks, as well as applications that combine them, by up to 30×.