Bregman divergences generalize measures such as the squared Euclidean distance and the KL divergence, and arise throughout many areas of machine learning. In this paper, we focus on the problem of approximating an arbitrary Bregman divergence from supervision, and we provide a well-principled approach to analyzing such approximations. We develop a formulation and algorithm for learning arbitrary Bregman divergences based on approximating their underlying convex generating function via a piecewise linear function. We provide theoretical approximation bounds using our parameterization and show that the generalization error $O_p(m^{-1/2})$ for metric learning using our framework matches the known generalization error in the strictly less g...
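As a concrete illustration (not code from the paper itself), a Bregman divergence is defined from any convex generator $\varphi$ as $D_\varphi(x, y) = \varphi(x) - \varphi(y) - \langle \nabla\varphi(y),\, x - y\rangle$, and a max-affine (piecewise linear) generator of the kind the abstract describes can be plugged in via subgradients. All names below are illustrative:

```python
import numpy as np

def bregman(phi, grad_phi, x, y):
    """D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    return phi(x) - phi(y) - grad_phi(y) @ (x - y)

# phi(v) = ||v||^2 recovers the squared Euclidean distance.
sq = lambda v: v @ v
sq_grad = lambda v: 2.0 * v

def maxaffine(A, b):
    # Max-affine generator phi(v) = max_k (A[k] @ v + b[k]): piecewise
    # linear and convex; a subgradient at v is the slope of an active piece.
    phi = lambda v: np.max(A @ v + b)
    subgrad = lambda v: A[np.argmax(A @ v + b)]
    return phi, subgrad

x, y = np.array([1.0, 2.0]), np.array([0.0, 0.5])
d_sq = bregman(sq, sq_grad, x, y)   # ||x - y||^2 = 3.25

A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.zeros(3)
phi_pl, g_pl = maxaffine(A, b)
d_pl = bregman(phi_pl, g_pl, x, y)  # nonnegative, by convexity of phi
```

The squared-Euclidean case serves as a sanity check on the general formula; the max-affine construction is a minimal sketch of the piecewise linear parameterization idea, with the number and values of the pieces chosen arbitrarily here.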
Using a trimming approach, we investigate a k-means type method based on Bregm...
Learning distance functions with side information plays a key role in many data mining applications....
In this paper, we present an information-theoretic approach to learning a Mahalanobis distance funct...
Functional Bregman divergences are an important class of divergences in machine learning that genera...
Deep metric learning techniques have been used for visual representation in various supervised and u...
Many metric learning tasks, such as triplet learning, nearest neighbor retrieval, and visualization,...
Nonparametric convex regression has been extensively studied over the last two decades. It has been ...
Bregman divergences play a central role in the design and analysis of a range of machine learning al...
A wide variety of distortion functions are used for clustering, e.g., squared Euclidean distance, Ma...
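To make the Bregman clustering setting concrete, the following sketch (illustrative, not any paper's implementation) runs a k-means-style loop under the generalized KL divergence, one of the Bregman distortion functions mentioned above; the key property used is that the arithmetic mean remains the optimal centroid for any Bregman divergence:

```python
import numpy as np

def gen_kl(x, y):
    # Generalized KL divergence: the Bregman divergence generated by
    # phi(x) = sum_i x_i log x_i (entries assumed strictly positive).
    return float(np.sum(x * np.log(x / y) - x + y))

def bregman_kmeans(X, k, div=gen_kl, iters=10):
    # Illustrative k-means-style loop; deterministic init from the first k points.
    centers = X[:k].copy()
    for _ in range(iters):
        labels = np.array([np.argmin([div(x, c) for c in centers]) for x in X])
        for j in range(k):
            members = X[labels == j]
            if len(members):
                # The mean minimizes expected Bregman distortion, for any
                # Bregman divergence, so the centroid update is unchanged.
                centers[j] = members.mean(axis=0)
    return labels, centers

X = np.array([[1.0, 1.0], [10.0, 10.0], [1.2, 0.9], [9.5, 10.5]])
labels, centers = bregman_kmeans(X, k=2)
```

On these two well-separated positive clusters the loop converges immediately; swapping `div` for any other Bregman divergence leaves the algorithm's structure intact.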
Bregman divergence is an important class of divergence functions in Machine Learning. Many well-know...
The scope of the well-known $k$-means algorithm has been broadly extended with...
We introduce a class of discrete divergences on sets (equivalently binary vectors) that we call the ...
We review Bregman divergences and use them in clustering algorithms which we have previously develop...
We develop an algorithm for efficient range search when the notion of dissimilarity is given by a Br...