We describe an algorithm for adaptive inference in probabilistic programs. During sampling, the algorithm accumulates information about the local probability distributions that compose the program’s overall distribution. We use this information to construct targeted samples: given a value for an intermediate expression, we stochastically invert each of the steps giving rise to this value, sampling backwards in order to assign values to random choices such that we get a likely parse of the intermediate value. We propose this algorithm for importance sampling and as a means of constructing blocked proposals for a Metropolis-Hastings sampler.
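To make the backward-sampling idea above concrete, here is a minimal, hypothetical sketch (not the paper's implementation): the program computes an intermediate value z = x + y from two standard-normal choices, and the inversion step, given z, samples x from an assumed Gaussian proposal q and forces y = z - x, returning a log importance weight. The names (invert_sum, q_mu, q_sigma) and the choice of proposal are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of stochastically inverting one step of a program.
# Assumed model (not from the paper): z = x + y with x, y ~ Normal(0, 1).
import math
import random

def normal_logpdf(v, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (v - mu) ** 2 / (2 * sigma ** 2)

def forward_sample():
    """Run the program forward: sample the latent choices and the intermediate value."""
    x = random.gauss(0.0, 1.0)
    y = random.gauss(0.0, 1.0)
    return {"x": x, "y": y, "z": x + y}

def invert_sum(z, q_mu=0.0, q_sigma=1.0):
    """Stochastically invert the step z = x + y: sample x from a proposal q,
    then y is forced to z - x.  Returns the latents and the log weight
    log p(x) + log p(y) - log q(x); the forced choice y adds no proposal term."""
    x = random.gauss(q_mu, q_sigma)
    y = z - x
    log_p = normal_logpdf(x, 0.0, 1.0) + normal_logpdf(y, 0.0, 1.0)
    log_q = normal_logpdf(x, q_mu, q_sigma)
    return {"x": x, "y": y}, log_p - log_q

if __name__ == "__main__":
    z_obs = 1.7  # an observed intermediate value to parse backwards
    for latents, log_w in (invert_sum(z_obs) for _ in range(5)):
        print(latents, log_w)
```

In the setting described above, the proposal for each inverted choice would instead be adapted from the information accumulated during sampling, and the resulting weighted backward draws could serve either as importance samples or as blocked Metropolis-Hastings proposals.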
The aim of Probabilistic Programming (PP) is to automate inference in probabilistic models. One effi...
In this paper we present a method for automatically deriving a Reversible Jump Markov chain Monte Ca...
Motivated by the problem of amortized inference in large-scale simulators, we introduce a probabilis...
Improving efficiency of the importance sampler is at the centre of research on Monte Carlo methods. ...
Algorithms for exact and approximate inference in stochastic logic programs (SLPs) are presented, bas...
Probabilistic modeling lets us infer, predict and make decisions based on incomplete or noisy data. ...
We describe a class of algorithms for amortized inference in Bayesian networks. In this setting, we ...
Probabilistic models used in quantitative sciences have historically co-evolved with methods for per...
When dealing with datasets containing a billion instances or with simulations that requir...
We present a new semantics sensitive sampling algorithm for probabilistic programs, which are “usua...
Stochastic variational inference finds good posterior approximations of probabilistic models with v...
Automatic decision making and pattern recognition under uncertainty are difficult tasks that are ubi...
Monte Carlo (MC) methods are widely used in signal processing, machine learning and communications ...