The minimum χ²-divergence principle states: when a prior probability density function g(x) of X, which estimates the underlying probability density function f(x), is given in addition to some constraints, then among all density functions f(x) that satisfy the given constraints we should select the one that minimizes the χ²-divergence. Based on this principle, we study minimum χ²-divergence probability distributions given a prior beta distribution and partial information in terms of moments.
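To make the objective concrete, here is a minimal numerical sketch of the quantity being minimized, the χ²-divergence D(f‖g) = ∫ (f(x) − g(x))²/g(x) dx on (0, 1), evaluated for a hypothetical beta prior g and a candidate beta density f. The specific parameter choices (Beta(2, 2) prior, Beta(2, 3) candidate) and the midpoint-rule integrator are illustrative assumptions, not part of the paper:

```python
from math import gamma

def beta_pdf(x, a, b):
    """Density of the Beta(a, b) distribution on (0, 1)."""
    return gamma(a + b) / (gamma(a) * gamma(b)) * x**(a - 1) * (1 - x)**(b - 1)

def chi2_divergence(f, g, n=20000):
    """Midpoint-rule approximation of the chi-square divergence
    D(f || g) = integral over (0, 1) of (f(x) - g(x))**2 / g(x) dx."""
    dx = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * dx          # midpoints avoid the endpoints 0 and 1
        fx, gx = f(x), g(x)
        total += (fx - gx) ** 2 / gx * dx
    return total

# Hypothetical example: prior g = Beta(2, 2), candidate f = Beta(2, 3).
g = lambda x: beta_pdf(x, 2, 2)
f = lambda x: beta_pdf(x, 2, 3)
d = chi2_divergence(f, g)  # analytically, D(f || g) = 24*B(2, 4) - 1 = 0.2
```

The divergence is zero exactly when f = g, so the principle selects the constrained density "closest" to the prior in this sense; here the closed-form value 0.2 follows from D(f‖g) = ∫ f²/g dx − 1 with beta integrals.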