Despite the convexity of structured max-margin objectives (Taskar et al., 2004; Tsochantaridis et al., 2004), the many ways to optimize them are not equally effective in practice. We compare a range of online optimization methods over a variety of structured NLP tasks (coreference, summarization, parsing, etc.) and find several broad trends. First, margin methods do tend to outperform both likelihood and the perceptron. Second, for max-margin objectives, primal optimization methods are often more robust and progress faster than dual methods. This advantage is most pronounced for tasks with dense or continuous-valued features. Overall, we argue for a particularly simple online primal subgradient descent method that, despite being rarely ...
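For concreteness, the kind of online primal subgradient step the abstract refers to can be sketched as follows. This is a generic illustration of subgradient descent on a regularized structured hinge loss, not code from the paper; the helper names feats and loss_aug_decode, and the values of lam and eta, are assumptions for the example.

# Minimal sketch (an assumption, not the paper's code) of one online primal
# subgradient update on the regularized structured hinge loss
#   L_i(w) = max_y [ w.f(x_i, y) + loss(y_i, y) ] - w.f(x_i, y_i) + (lam/2) ||w||^2
# feats(x, y) and loss_aug_decode(w, x, y_gold) are task-specific helpers
# assumed to be supplied by the caller.
import numpy as np

def subgradient_step(w, x, y_gold, feats, loss_aug_decode, lam=1e-4, eta=0.1):
    # Loss-augmented decoding: argmax_y [ w . feats(x, y) + loss(y_gold, y) ].
    y_hat = loss_aug_decode(w, x, y_gold)
    # Subgradient of L_i at w. If decoding returns y_gold (no margin violation),
    # the feature terms cancel and only the regularizer contributes.
    g = lam * w + feats(x, y_hat) - feats(x, y_gold)
    # Plain descent step; in practice eta is typically decayed over time.
    return w - eta * g

Looping this update over the training examples with a decaying step size gives a simple online primal optimizer of the structured max-margin objective.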
Classical optimization techniques have found widespread use in machine learning. Convex optimization...
Logistic models are commonly used for binary classification tasks. The success of such models has of...
120 p. Thesis (Ph.D.), University of Illinois at Urbana-Champaign, 2006. Third, we address an importan...
In max-margin learning, the system aims at establishing a solution as robust as possible....
We propose a structured learning approach, max-margin structure (MMS), which is targeted at natural ...
Recent theoretical results have shown that the generalization performance of thresholded convex comb...
Perceptron-like large margin algorithms are introduced for the experiments with various margin selec...
We describe a new incremental algorithm for training linear threshold functions: the Relaxed Online...
A new incremental learning algorithm is described which approximates the maximal margin hyperplane w...
Motivated by the success of large margin methods in supervised learning, maximum margin clustering (...
The foundational concept of Max-Margin in machine learning is ill-posed for ou...
We propose a new online learning algorithm which provably approximates maximum margin classifiers wi...
Motivated by the success of large margin methods in supervised learning, maximum margin clustering (...
A new incremental learning algorithm is described which approximates the maximal margin hyperplane ...
Many tasks in Natural Language Processing (NLP) can be formulated as the assignment of a label to an...