_id (string, length 40) | text (string, length 0 to 10k) |
---|---|
d31798506874705f900e72203515abfaa9278409 | Article history: Received 26 August 2007 Received in revised form 7 May 2008 Accepted 13 May 2008 |
6d96f946aaabc734af7fe3fc4454cf8547fcd5ed | |
1c26786513a0844c3a547118167452bed17abf5d | After providing a brief introduction to the transliteration problem, and highlighting some issues specific to Arabic-to-English translation, a three-phase algorithm is introduced as a computational solution to the problem. The algorithm is based on a Hidden Markov Model approach, but also leverages information available in on-line databases. The algorithm is then evaluated, and shown to achieve accuracy approaching 80%. |
dbc82e5b8b17faec972e1d09c34ec9f9cd1a33ea | In our research on Commonsense reasoning, we have found that an especially important kind of knowledge is knowledge about human goals. Especially when applying Commonsense reasoning to interface agents, we need to recognize goals from user actions (plan recognition) and generate sequences of actions that implement goals (planning). We also often need to answer more general questions about the situations in which goals occur, such as when and where a particular goal might be likely, or how long it is likely to take to achieve. In past work on Commonsense knowledge acquisition, users have been directly asked for such information. Recently, however, another approach has emerged: enticing users into playing games where supplying the knowledge is the means to scoring well in the game, thus motivating the players. This approach has been pioneered by Luis von Ahn and his colleagues, who refer to it as Human Computation. Common Consensus is a fun, self-sustaining web-based game that both collects and validates Commonsense knowledge about everyday goals. It is based on the structure of the TV game show Family Feud. A small user study showed that users find the game fun, that knowledge quality is very good, and that the rate of knowledge collection is rapid. ACM Classification: H.3.3 [INFORMATION STORAGE AND RETRIEVAL]: Information Search and Retrieval; I.2.6 [ARTIFICIAL INTELLIGENCE]: Learning |
f8b1534b26c1a4a30d32aec408614ecff2412156 | |
4c479f8d18badb29ec6a2a49d6ca8e36d833fbe9 | BACKGROUND: Despite its small size, the coccyx has several important functions. Along with being the insertion site for multiple muscles, ligaments, and tendons, it also serves as one leg of the tripod (along with the ischial tuberosities) that provides weight-bearing support to a person in the seated position. The incidence of coccydynia (pain in the region of the coccyx) has not been reported, but factors associated with increased risk of developing coccydynia include obesity and female gender. METHODS: This article provides an overview of the anatomy, physiology, and treatment of coccydynia. RESULTS: Conservative treatment is successful in 90% of cases, and many cases resolve without medical treatment. Treatments for refractory cases include pelvic floor rehabilitation, manual manipulation and massage, transcutaneous electrical nerve stimulation, psychotherapy, steroid injections, nerve block, spinal cord stimulation, and surgical procedures. CONCLUSION: A multidisciplinary approach employing physical therapy, ergonomic adaptations, medications, injections, and, possibly, psychotherapy leads to the greatest chance of success in patients with refractory coccyx pain. Although new surgical techniques are emerging, more research is needed before their efficacy can be established. |
0989bbd8c15f9aac24e8832327df560dc8ec5324 | In the nearly six decades since researchers began to explore methods of creating them, exoskeletons have progressed from the stuff of science fiction to nearly commercialized products. While many aspects of exoskeleton design have yet to be perfected, the advances in the field have been enormous. In this paper, we review the history and discuss the state-of-the-art of lower limb exoskeletons and active orthoses. We provide a design overview of hardware, actuation, sensory, and control systems for most of the devices that have been described in the literature, and end with a discussion of the major advances that have been made and hurdles yet to be overcome. |
4adffe0ebdda59d39e43d42a41e1b6f80164f07e | Nonnegative matrix factorization (NMF) is an unsupervised learning method useful in various applications including image processing and semantic analysis of documents. This paper focuses on symmetric NMF (SNMF), which is a special case of NMF decomposition. Three parallel multiplicative update algorithms that directly use level 3 basic linear algebra subprograms are developed for this problem. First, by minimizing the Euclidean distance, a multiplicative update algorithm is proposed, and its convergence under mild conditions is proved. Based on it, we further propose two other fast parallel methods: the α-SNMF and β-SNMF algorithms. All of them are easy to implement. These algorithms are applied to probabilistic clustering. We demonstrate their effectiveness for facial image clustering, document categorization, and pattern clustering in gene expression. |
2a4423b10725e54ad72f4f1fcf77db5bc835f0a6 | There is a deep and useful connection between statistical mechanics (the behavior of systems with many degrees of freedom in thermal equilibrium at a finite temperature) and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters). A detailed analogy with annealing in solids provides a framework for optimization of the properties of very large and complex systems. This connection to statistical mechanics exposes new information and provides an unfamiliar perspective on traditional optimization problems and methods. |
dec997b20ebe2b867f68cc5c123d9cb9eafad6bb | Training deep neural networks generally requires massive amounts of data and is very computation intensive. We show here that it may be possible to circumvent the expensive gradient descent procedure and derive the parameters of a neural network directly from properties of the training data. We show that, near convergence, the gradient descent equations for layers close to the input can be linearized and become stochastic equations with noise related to the covariance of the data for each class. We derive the distribution of solutions to these equations and discover that it is related to a “supervised principal component analysis.” We apply these results to the image datasets MNIST, CIFAR10, and CIFAR100 and find that, indeed, layers pretrained using our findings perform comparably to, or better than, neural networks of the same size and architecture trained with gradient descent. Moreover, our pretrained layers can often be calculated using a fraction of the training data, owing to the quick convergence of the covariance matrix. Thus, our findings indicate that we can cut the training time both by requiring only a fraction of the data used for gradient descent and by eliminating layers from the costly backpropagation step of the training. Additionally, these findings partially elucidate the inner workings of deep neural networks and allow us to mathematically calculate optimal solutions for some stages of classification problems, thus significantly boosting our ability to solve such problems efficiently. |
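The transliteration abstract above (row 1c26786…) describes a three-phase, Hidden-Markov-Model-based algorithm but gives no details of the model. The sketch below is only a generic illustration of how a character-level HMM transliterator can be decoded with the Viterbi algorithm; the state set, the Arabic and English symbol tables, and all probability values are toy assumptions, not the paper's actual model or data.

```python
import numpy as np

# Toy character-level HMM for Arabic -> English transliteration (illustrative only).
# Hidden states are candidate English letters; observations are Arabic characters.
states = ["a", "h", "m", "d"]                    # hypothetical English letters
obs_symbols = ["ا", "ح", "م", "د"]               # hypothetical Arabic characters

start_p = np.full(len(states), 1.0 / len(states))                  # uniform start distribution
trans_p = np.full((len(states), len(states)), 1.0 / len(states))   # uniform transitions
emit_p = np.eye(len(states)) * 0.85 + 0.05       # each Arabic char mostly maps to one letter
emit_p /= emit_p.sum(axis=1, keepdims=True)      # normalize rows into valid distributions

def viterbi(observation):
    """Return the most likely English-letter sequence for an Arabic character sequence."""
    obs_idx = [obs_symbols.index(o) for o in observation]
    T, N = len(obs_idx), len(states)
    delta = np.zeros((T, N))              # best log-probability of a path ending in each state
    back = np.zeros((T, N), dtype=int)    # backpointers for path recovery

    delta[0] = np.log(start_p) + np.log(emit_p[:, obs_idx[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(trans_p)   # scores[i, j]: come from i, go to j
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(emit_p[:, obs_idx[t]])

    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return "".join(states[i] for i in reversed(path))

print(viterbi(["ا", "ح", "م", "د"]))   # -> "ahmd" under these toy tables
```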
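The symmetric NMF abstract (row 4adffe0…) proposes multiplicative update rules that minimize the Euclidean distance between A and HHᵀ, but the abstract does not spell the updates out. The sketch below therefore uses a commonly cited baseline multiplicative rule for SNMF rather than the paper's α-SNMF or β-SNMF variants; the function name, iteration count, and toy similarity matrix are assumptions for illustration.

```python
import numpy as np

def snmf_multiplicative(A, k, n_iter=500, eps=1e-9, seed=0):
    """Factor a symmetric nonnegative matrix A ≈ H @ H.T with H >= 0.

    Uses the baseline multiplicative rule H <- H * (0.5 + 0.5 * (A H) / (H H^T H)),
    which decreases ||A - H H^T||_F^2; the paper's α-SNMF / β-SNMF variants
    modify this kind of update, so treat this as a generic sketch only.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    H = rng.random((n, k))                  # random nonnegative initialization
    for _ in range(n_iter):
        AH = A @ H                          # level-3 BLAS call (matrix-matrix product)
        HHtH = H @ (H.T @ H)                # another level-3 BLAS call
        H *= 0.5 + 0.5 * AH / (HHtH + eps)  # element-wise multiplicative update keeps H >= 0
    return H

# Toy usage: cluster a block-diagonal similarity matrix into 2 groups.
A = np.kron(np.eye(2), np.ones((3, 3)))     # two obvious clusters of size 3
H = snmf_multiplicative(A, k=2)
print(H.argmax(axis=1))                     # cluster label per row; two clear groups expected
```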
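The statistical-mechanics abstract (row 2a4423b…) describes the annealing analogy that underlies simulated annealing for combinatorial optimization. The following is a minimal, generic simulated-annealing loop showing the Metropolis acceptance rule and geometric cooling; the cost function, neighbor move, and schedule parameters are arbitrary illustrative choices, not anything prescribed by the paper.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t_start=1.0, t_end=1e-3, alpha=0.95, steps_per_t=100):
    """Generic simulated annealing: accept worse moves with probability exp(-dE/T),
    cooling the temperature T geometrically, by analogy with annealing a solid."""
    x, e = x0, cost(x0)
    best_x, best_e = x, e
    t = t_start
    while t > t_end:
        for _ in range(steps_per_t):
            x_new = neighbor(x)
            e_new = cost(x_new)
            # Metropolis criterion: always accept improvements, sometimes accept worse moves.
            if e_new < e or random.random() < math.exp(-(e_new - e) / t):
                x, e = x_new, e_new
                if e < best_e:
                    best_x, best_e = x, e
        t *= alpha                      # cool the "temperature"
    return best_x, best_e

# Toy usage: minimize a bumpy one-dimensional function.
f = lambda x: (x - 3) ** 2 + math.sin(5 * x)
step = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing(f, step, x0=random.uniform(-10, 10)))
```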
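The last abstract (row dec997b…) reports that the linearized gradient-descent equations for early layers have solutions related to a “supervised principal component analysis,” but it does not give the formula. The sketch below is one plausible reading for illustration only: average the per-class covariance matrices and use their top eigenvectors as a layer's weight matrix. The function name, the averaging choice, and the toy data are assumptions, not the authors' actual derivation.

```python
import numpy as np

def supervised_pca_layer(X, y, n_units):
    """Derive a linear layer's weights from class-conditional data statistics.

    Illustrative interpretation only: center the data within each class, average
    the per-class covariance matrices, and take the top eigenvectors as weights.
    The paper derives its layer parameters from linearized gradient-descent
    equations; this sketch only shows the 'supervised PCA' flavor of that idea.
    """
    classes = np.unique(y)
    d = X.shape[1]
    cov = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        Xc = Xc - Xc.mean(axis=0)            # center within each class
        cov += Xc.T @ Xc / max(len(Xc) - 1, 1)
    cov /= len(classes)
    # Top eigenvectors of the averaged class covariance become the layer weights.
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
    W = eigvecs[:, ::-1][:, :n_units]        # columns sorted by decreasing eigenvalue
    return W.T                               # shape (n_units, d), usable as a dense layer's weights

# Toy usage with random "images": 200 samples, 64 pixels, 3 classes, 16 hidden units.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 3, size=200)
W = supervised_pca_layer(X, y, n_units=16)
print(W.shape)   # (16, 64)
```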