This paper tackles the supervised induction of a distance from examples described as Horn clauses or constrained clauses. In contrast to syntax-driven approaches, this approach is discrimination-driven: it proceeds by defining a small set of complex discriminant hypotheses. These hypotheses serve as new concepts, used to redescribe the initial examples. This redescription can then be embedded into the space of natural integers, and a distance between examples naturally follows. The distance can be used for classification via a k-nearest-neighbor process. Experiments on the mutagenesis dataset validate the approach in terms of predictive accuracy, computational cost, and robustness with respect to the parameters of the algorithm.
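The pipeline described in the abstract (redescribe examples via discriminant hypotheses, embed the redescriptions as integer vectors, then classify by k-NN on the induced distance) can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the hypotheses here are hypothetical 0/1 predicates, and the Manhattan distance on the integer embedding is an assumed choice.

```python
from collections import Counter

def redescribe(example, hypotheses):
    """Redescribe an example as an integer vector: one component per
    discriminant hypothesis (here, hypothetical 0/1 predicates)."""
    return [h(example) for h in hypotheses]

def manhattan(u, v):
    # Distance between redescriptions embedded in the natural integers.
    return sum(abs(a - b) for a, b in zip(u, v))

def knn_classify(query, train, hypotheses, k=3):
    """Classify via majority vote among the k nearest neighbours in the
    redescribed space. `train` is a list of (example, label) pairs."""
    q = redescribe(query, hypotheses)
    nearest = sorted(
        train, key=lambda pair: manhattan(q, redescribe(pair[0], hypotheses))
    )[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

A toy usage, with examples as attribute dictionaries and two hand-made hypotheses:

```python
hypotheses = [lambda e: int(e["x"] > 0), lambda e: int(e["y"] > 0)]
train = [({"x": 1, "y": 1}, "pos"),
         ({"x": -1, "y": -1}, "neg"),
         ({"x": 2, "y": 0}, "pos")]
knn_classify({"x": 1, "y": 2}, train, hypotheses, k=1)  # → "pos"
```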
The k-nearest neighbour (k-NN) classifier is one of the oldest and most important supervised learnin...
Normally the distance function used for classification in the k-Nearest Neighbors algorithm is the eu...
We show how to learn a Mahalanobis distance metric for k-nearest-neighbor (kNN) classification by se...
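The snippets above contrast the default Euclidean distance with a learned Mahalanobis metric for kNN. As a reminder of the relationship between the two, a Mahalanobis distance is defined by a symmetric positive semi-definite matrix M, with M = identity recovering the Euclidean case. A minimal plain-Python sketch (the learning of M itself is not shown; M is assumed given):

```python
import math

def mahalanobis(x, y, M):
    """Mahalanobis distance d(x, y) = sqrt((x - y)^T M (x - y)).
    M must be symmetric positive semi-definite; with M = identity this
    reduces to the ordinary Euclidean distance."""
    d = [a - b for a, b in zip(x, y)]
    # Quadratic form (x - y)^T M (x - y), computed with plain lists.
    q = sum(d[i] * M[i][j] * d[j]
            for i in range(len(d)) for j in range(len(d)))
    return math.sqrt(q)
```

With the identity matrix this agrees with Euclidean distance, e.g. `mahalanobis([0, 0], [3, 4], [[1, 0], [0, 1]])` gives `5.0`; a non-identity M reweights directions of the input space, which is what metric-learning methods for kNN exploit.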
A distance on the problem domain allows one to tackle some typical goals of machine learning, e.g. c...
Supervised classification involves many heuristics, including the ideas of decision tree, k-nearest ...
© Springer-Verlag Berlin Heidelberg 1998. Several learning systems, such as systems based on clustering and instance based learning, use a mea...
Learning in first-order logic (FOL) languages suffers from a specific difficulty: both induction and...
The optimal distance measure for a given discrimination task under the nearest neighbor framework ha...
The basic concepts of distance based classification are introduced in terms of clear-cut example sys...
A definition of distance measure between structural descriptions, which is based on a probabilistic ...
The selection of the distance measure to separate the objects of the knowledge space is critical in ...