Probabilistic networks, which provide compact descriptions of complex stochastic relationships among several random variables, are rapidly becoming the tool of choice for uncertain reasoning in artificial intelligence. We show that networks with fixed structure containing hidden variables can be learned automatically from data using a gradient-descent mechanism similar to that used in neural networks. We also extend the method to networks with intensionally represented distributions, including networks with continuous variables and dynamic probabilistic networks. Because probabilistic networks provide explicit representations of causal structure, human experts can easily contribute prior knowledge to the training process, thereby significan...
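The abstract above describes learning the parameters of a fixed-structure network with a hidden variable by gradient ascent on the likelihood of the observed data. A minimal sketch of that idea, on a toy two-node network H → X with H hidden and X observed (the logit parameterization, learning rate, and numeric gradient are illustrative choices, not details from the paper):

```python
import math

# Toy network H -> X, both binary; H is hidden, X is observed.
# Parameters are logits (a sigmoid keeps each probability in (0, 1)),
# trained by plain gradient ascent on the log-likelihood of X.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def log_likelihood(data, th, tx0, tx1):
    # P(H=1) = sigmoid(th); P(X=1 | H=h) = sigmoid(tx_h).
    # The hidden variable is summed out: P(x) = sum_h P(h) P(x | h).
    ph1 = sigmoid(th)
    ll = 0.0
    for x in data:
        px_h0 = sigmoid(tx0) if x else 1.0 - sigmoid(tx0)
        px_h1 = sigmoid(tx1) if x else 1.0 - sigmoid(tx1)
        ll += math.log((1.0 - ph1) * px_h0 + ph1 * px_h1)
    return ll

def numeric_grad(f, params, eps=1e-6):
    # Central finite differences stand in for the analytic gradient
    # derived in the paper; the ascent direction is the same.
    g = []
    for i in range(len(params)):
        p = list(params)
        p[i] += eps
        hi = f(*p)
        p[i] -= 2 * eps
        lo = f(*p)
        g.append((hi - lo) / (2 * eps))
    return g

data = [1] * 70 + [0] * 30        # observed X samples: P(X=1) = 0.7
params = [0.0, -0.5, 0.5]         # initial logits th, tx0, tx1
f = lambda th, tx0, tx1: log_likelihood(data, th, tx0, tx1)

before = f(*params)
for _ in range(200):              # gradient ascent steps
    g = numeric_grad(f, params)
    params = [p + 0.01 * gi for p, gi in zip(params, g)]
after = f(*params)                # log-likelihood has improved
```

After training, the marginal (1 − P(H=1))·P(X=1|H=0) + P(H=1)·P(X=1|H=1) matches the empirical frequency of X, even though H itself was never observed.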
Introduction The work reported here began with the desire to find a network architecture that shared...
Probabilistic networks are now fairly well established as practical representations of knowledge fo...
We propose a technique for increasing the efficiency of gradient-based inference and learning in Bay...
Probabilistic networks (also known as Bayesian belief networks) allow a compact description of com...
The paper studies a stochastic extension of continuous recurrent neural networks and analyzes gradie...
Recent advances in statistical inference have significantly expanded the toolbox of probabilistic mo...
Causal Probabilistic Networks (CPN), a method of reasoning using probabilities, has become popular o...
We introduce a method for inducing the structure of (causal) possibilistic networks from database...
Dynamic probabilistic networks are a compact representation of complex stochastic processes. In this...
In the context of sensory or higher-level cognitive processing, we present a r...
Deep Learning-based models are becoming more and more relevant for an increasing number of applicati...
We examine a graphical representation of uncertain knowledge called a Bayesian network. The represen...
Stochastic binary hidden units in a multi-layer perceptron (MLP) network give at least three potenti...
Understanding the relationship between connectionist and probabilistic models is important for evalu...
Bayesian statistics is a powerful framework for modeling the world and reasoning over uncertainty. I...