A standard method for approximating averages in probabilistic models is to construct a Markov chain in the product space of the random variables with the desired equilibrium distribution. Since the number of configurations in this space grows exponentially with the number of random variables, we often need to represent the distribution with samples. In this paper we show that if one is interested in averages over single variables only, an alternative Markov chain defined on the much smaller "union space", which can be evolved exactly, becomes feasible. The transition kernel of this Markov chain is based on conditional distributions for pairs of variables, and we present ways to approximate them using approximate inference algorithms such as m...
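As a rough illustration of the contrast drawn in this abstract between sample-based representation and exact evolution, the sketch below evolves the marginal distribution of a small Markov chain exactly by repeated multiplication with its transition matrix and compares the result with a Monte Carlo estimate. The three-state matrix T is an illustrative assumption, not the union-space kernel described in the abstract.

```python
import numpy as np

# On a small state space the distribution over states can be evolved exactly,
# instead of being represented by samples.  T is an arbitrary illustrative
# transition matrix, not the union-space construction of the paper.
rng = np.random.default_rng(0)
T = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])   # rows sum to 1: T[i, j] = P(next = j | current = i)

# Exact evolution of the distribution p_t over states.
p = np.array([1.0, 0.0, 0.0])    # start in state 0 with certainty
for _ in range(200):
    p = p @ T                    # p_{t+1}(j) = sum_i p_t(i) * T[i, j]
print("exact equilibrium estimate:", p)

# Sample-based estimate of the same equilibrium distribution, for comparison.
state, counts = 0, np.zeros(3)
for _ in range(20000):
    state = rng.choice(3, p=T[state])
    counts[state] += 1
print("Monte Carlo estimate:      ", counts / counts.sum())
```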
In this work we discuss approximation techniques for the analysis of Markov chains, namely, state sp...
In this thesis, we give a new class of outer bounds on the marginal polytope, and propose a cutting-...
We propose a cutting-plane style algorithm for finding the maximum a posteriori (MAP) state and appr...
The goal of this work is to formally abstract a Markov process evolving over a general st...
In this thesis, we use a mean squared error energy approximation for edge deletion in order to make ...
The goal of this work is to formally abstract a Markov process evolving in discrete time over a gene...
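A minimal sketch of the kind of finite-state abstraction such work considers: a discrete-time Markov process on the continuous state space [0, 1] is replaced by a Markov chain on a uniform grid of cells. The Gaussian-step kernel and the grid partition are illustrative assumptions, not details taken from the cited work.

```python
import numpy as np

# Finite-state abstraction of a continuous-state Markov process (illustrative).
def kernel(x, y, sigma=0.1):
    """Unnormalised transition density from x to y; each row is normalised below."""
    return np.exp(-0.5 * ((y - x) / sigma) ** 2)

n = 20
edges = np.linspace(0.0, 1.0, n + 1)
centres = 0.5 * (edges[:-1] + edges[1:])

# P[i, j] ~ probability of jumping from cell i to cell j, obtained by evaluating
# the kernel at cell centres and normalising each row to sum to one.
P = kernel(centres[:, None], centres[None, :])
P /= P.sum(axis=1, keepdims=True)

# The abstract chain can now be analysed with finite-state tools, e.g. its
# stationary distribution via the leading left eigenvector of P.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
print("stationary mass per cell:", np.round(pi, 3))
```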
We explore formal approximation techniques for Markov chains based on state-space reduction t...
We present two techniques for constructing sample spaces that approximate probability distributions....
Monte Carlo algorithms often aim to draw from a distribution π by simulating a Markov chain with tra...
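The excerpt above describes the generic setting of simulating a Markov chain whose stationary distribution is the target π. As a hedged illustration, the sketch below uses random-walk Metropolis-Hastings, a standard construction of such a chain; the standard-normal target and Gaussian proposal are assumed for the example and are not taken from the cited work.

```python
import numpy as np

# Random-walk Metropolis-Hastings: a Markov chain whose transitions leave a
# target pi invariant, with pi known only up to a normalising constant.
rng = np.random.default_rng(1)

def log_target(x):
    return -0.5 * x * x          # log pi(x) up to an additive constant (standard normal)

x, samples = 0.0, []
for _ in range(50000):
    proposal = x + rng.normal(scale=1.0)                    # symmetric proposal
    if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
        x = proposal                                        # accept w.p. min(1, pi(y)/pi(x))
    samples.append(x)

print("mean ~ 0:", np.mean(samples), " variance ~ 1:", np.var(samples))
```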
We develop a new notion of approximation of labelled Markov processes based on the use of condition...
A new approach to inference in state space models is proposed, based on approximate Bayesian computa...