Refinement operators for theories avoid the problems caused by the myopia of many relational learning algorithms that rely on operators refining single clauses. However, the non-existence of ideal refinement operators has been proven for the standard clausal search spaces ordered by theta-subsumption or logical implication, and this negative result carries over to spaces of theories. By adopting different generalization models constrained by the object identity assumption, we extend the theoretical results on the existence of ideal refinement operators from spaces of clauses to spaces of theories.
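The object identity assumption behind these generalization models can be recalled with a minimal sketch, following the standard formulation in the OI literature (the notation here is illustrative and may differ from the paper's). A clause C theta-subsumes a clause D iff there is a substitution sigma with C\sigma \subseteq D. Under object identity, terms of a clause denoted by different symbols are assumed to denote distinct objects, so the substitution is additionally required to keep distinct terms distinct:

\[
C \succeq_{OI} D \;\iff\; \exists\,\sigma:\; C\sigma \subseteq D \;\text{ and }\; t\sigma \neq t'\sigma \text{ for all distinct terms } t, t' \text{ occurring in } C.
\]

For instance, the clause p(X,Y) :- q(X,Y) theta-subsumes p(X,X) :- q(X,X) via sigma = {Y/X}, but it does not theta_OI-subsume it, because sigma unifies the distinct variables X and Y; weakening implication in this way is what changes the structure of the search space and makes ideal refinement operators attainable.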