We formulate the metric learning problem as that of minimizing the differential relative entropy between two multivariate Gaussians under constraints on the Mahalanobis distance function. Via a surprising equivalence, we show that this problem can be solved as a low-rank kernel learning problem. Specifically, we minimize the Burg divergence of a low-rank kernel to an input kernel, subject to pairwise distance constraints. Our approach has several advantages over existing methods. First, we present a natural information-theoretic formulation for the problem. Second, the algorithm utilizes the methods developed by Kulis et al. [6], which do not involve any eigenvector computation; in particular, the running time of our method is faster than m...
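The abstract above learns a Mahalanobis distance subject to pairwise upper- and lower-bound constraints. A minimal sketch of those ingredients, assuming a generic PSD matrix `M` and illustrative thresholds `u` and `l` (not values from the paper):

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance d_M(x, y) = (x - y)^T M (x - y),
    where M is a symmetric positive semi-definite matrix."""
    d = x - y
    return float(d @ M @ d)

# With M = I the distance reduces to the squared Euclidean distance.
x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
assert np.isclose(mahalanobis_sq(x, y, np.eye(2)), 2.0)

# Pairwise constraints of the kind described above: similar pairs must
# fall within an upper bound u, dissimilar pairs beyond a lower bound l.
M = np.diag([2.0, 0.5])   # stand-in for a learned metric
u, l = 1.5, 2.0           # illustrative thresholds
d = mahalanobis_sq(x, y, M)          # 2*1 + 0.5*1 = 2.5
is_valid_dissimilar_pair = d >= l    # True for this pair
```

The actual algorithm iteratively projects `M` onto each constraint while minimizing the Burg (LogDet) divergence to the input kernel; the sketch only shows the distance and constraint checks, not that optimization.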
Abstract. We propose a new method for local metric learning based on a conical combination of Mahala...
In this paper we present two related, kernel-based Distance Metric Learning (DML) methods. Their resp...
A new family of kernels for statistical learning is introduced that exploits the geometric structur...
In this paper, we present a novel two-stage metric learning algorithm. We first map each learning in...
A large number of machine learning algorithms are critically dependent on the underlying distanc...
We propose a general information-theoretic approach called SERAPH (SEmi-supervised metRic leArning ...
Over the past decades, distance metric learning has attracted a lot of interest in machine learning ...
Recent studies [1]-[5] have suggested using constraints in the form of relative distance comparisons...
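A relative distance comparison, as referenced above, constrains a triple of points rather than a pair: under the learned metric, `x` should be closer to `a` than to `b`. A minimal sketch of checking such a constraint, with an arbitrary illustrative metric:

```python
import numpy as np

def satisfies_relative_constraint(x, a, b, M):
    """Check a relative comparison constraint under metric M:
    x should be closer to a than to b, i.e. d_M(x, a) < d_M(x, b)."""
    da = float((x - a) @ M @ (x - a))
    db = float((x - b) @ M @ (x - b))
    return da < db

x = np.zeros(2)
a = np.array([1.0, 0.0])
b = np.array([0.0, 3.0])
ok = satisfies_relative_constraint(x, a, b, np.eye(2))  # d=1 vs d=9
```

Such triplet constraints are often easier to collect than absolute similar/dissimilar labels, since annotators only rank pairs relative to each other.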
Many learning algorithms use a metric defined over the input space as a principal tool, and their pe...
Abstract—This paper introduces a supervised metric learning algorithm, called kernel density metric...
Mahalanobis Metric Learning (MML) has been actively studied recently in the machine learning community. ...
Choosing a distance preserving measure or metric is fundamental to many signal processing algorithm...
Abstract. The contributions of this work are threefold. First, various metric learning techniques ar...