In expert systems, we elicit the probabilities of different statements from the experts. However, to adequately use the expert system, we also need to know the probabilities of different propositional combinations of the experts' statements -- i.e., we need to know the corresponding joint distribution. The problem is that there are exponentially many such combinations, and it is not practically possible to elicit all their probabilities from the experts. So, we need to estimate this joint distribution based on the available information. For this purpose, many practitioners use heuristic approaches -- e.g., the t-norm approach of fuzzy logic. However, this is a particular case of a situation for which the maximum entropy approach has been...
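To make the estimation step concrete, here is a minimal sketch (not code from the paper) of the maximum entropy approach for two expert statements A and B: given only the elicited marginal probabilities, the joint distribution over the four propositional combinations is found by maximizing entropy subject to those marginal constraints. The values P(A) = 0.8 and P(B) = 0.6, the variable names, and the use of scipy's SLSQP solver are illustrative assumptions, not anything prescribed by the paper.

import numpy as np
from scipy.optimize import minimize

# Elicited probabilities of the individual statements (illustrative values).
p_A, p_B = 0.8, 0.6

# Joint distribution over the four atomic combinations, in the order:
# A & B, A & not-B, not-A & B, not-A & not-B.
def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)  # avoid log(0)
    return float(np.sum(p * np.log(p)))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},    # probabilities sum to 1
    {"type": "eq", "fun": lambda p: p[0] + p[1] - p_A},  # marginal constraint P(A)
    {"type": "eq", "fun": lambda p: p[0] + p[2] - p_B},  # marginal constraint P(B)
]
bounds = [(0.0, 1.0)] * 4
p0 = np.full(4, 0.25)  # start from the uniform distribution

res = minimize(neg_entropy, p0, method="SLSQP", bounds=bounds, constraints=constraints)
print("P(A & B) =", round(res.x[0], 3))  # ~0.48 = P(A) * P(B), the product t-norm value

With only the marginals as constraints, the maxent joint factorizes, so P(A & B) coincides with the product t-norm value P(A)*P(B); with additional elicited constraints (e.g., a conditional probability), the same optimization still applies, but the answer generally no longer factorizes.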