We show that the mistake bound for predicting the nodes of an arbitrary weighted graph is characterized (up to logarithmic factors) by the weighted cutsize of a random spanning tree of the graph. The cutsize is induced by the unknown adversarial labeling of the graph nodes. In deriving our characterization, we obtain a simple randomized algorithm achieving the optimal mistake bound on any graph. Our algorithm draws a random spanning tree of the original graph and then predicts the nodes of this tree in constant amortized time and linear space. Preliminary experiments on real-world datasets show that our method outperforms both global (Perceptron) and local (majority voting) methods.
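The two stages described in the abstract — drawing a random spanning tree and then predicting node labels on that tree — can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the tree is sampled with Wilson's loop-erased random walk (uniform over spanning trees of an unweighted graph), and the predictor is simplified to return the label of the nearest labeled node on the path toward the root. All function names and the `adj` representation are assumptions for the sketch.

```python
import random

def random_spanning_tree(adj, seed=0):
    """Sample a spanning tree of a connected undirected graph via
    Wilson's loop-erased random walk (uniform for unweighted graphs).
    `adj` maps each node to a list of its neighbors."""
    rng = random.Random(seed)
    nodes = list(adj)
    root = nodes[0]
    in_tree = {root}
    parent = {}                      # child -> parent map of the tree
    for u in nodes[1:]:
        # Random walk from u until the current tree is hit; overwriting
        # the successor pointer on revisits erases loops automatically.
        succ = {}
        v = u
        while v not in in_tree:
            nxt = rng.choice(adj[v])
            succ[v] = nxt
            v = nxt
        # Retrace the loop-erased path and attach it to the tree.
        v = u
        while v not in in_tree:
            in_tree.add(v)
            parent[v] = succ[v]
            v = succ[v]
    return parent

def predict(parent, labels, node):
    """Simplified predictor: label of the nearest labeled node on the
    tree path from `node` toward the root; defaults to 1 otherwise."""
    v = node
    while v is not None:
        if v in labels:
            return labels[v]
        v = parent.get(v)            # root has no parent -> None
    return 1
```

For example, on a 4-cycle `{0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}` the sampler returns three tree edges, and a node is predicted from the closest labeled node on its root path. The real algorithm differs in the details (weighted graphs, nearest labeled node in the full tree metric, constant amortized time per prediction), but the structure is the same.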
Let G = (V,E) be an undirected weighted graph on |V | = n vertices and |E| = m edges. A t-spanner of...
We study the problem of online prediction of a noisy labeling of a graph with the perceptron. We add...
We investigate the problem of sequentially predicting the binary labels on the nodes of an arbitrary...
We characterize, up to constant factors, the number of mistakes necessary and sufficient for sequent...
Given an n-vertex weighted tree with structural diameter S and a subset of m vertices, we present a...