Same max entropy for different priors
For a continuous distribution $f$, we define the entropy with respect to a reference prior $f_{0}$ to be
$$\epsilon(f)=\int \log\left(\frac{f(\theta)}{f_{0}(\theta)}\right)f_{0}(\theta)\, d\theta.$$
With Lebesgue measure as the reference prior, if we know $E[\theta]=\mu$ and $Var[\theta]=\sigma^{2}$, then the maximum entropy prior is (after normalising)
$$\pi^{*}(\theta)\propto \exp(\lambda_{1}\theta+\lambda_{2}\theta^{2}),$$
where the $\lambda_{i}$ are the Lagrange multipliers we solve for using the constraints. Solving with the constraints shows that this is the Normal$(\mu,\sigma^{2})$ distribution.
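(As a sketch of that last step, which I believe is just completing the square: the two moment constraints force
$$\lambda_{2}=-\frac{1}{2\sigma^{2}},\qquad \lambda_{1}=\frac{\mu}{\sigma^{2}},$$
so that
$$\pi^{*}(\theta)\propto \exp\left(\frac{\mu}{\sigma^{2}}\theta-\frac{\theta^{2}}{2\sigma^{2}}\right)\propto \exp\left(-\frac{(\theta-\mu)^{2}}{2\sigma^{2}}\right),$$
i.e. the Normal$(\mu,\sigma^{2})$ density.)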
But what if the reference prior were not Lebesgue measure but something else, such as the standard normal? Would we then have
$$\pi^{*}(\theta)\propto \exp(\lambda_{1}\theta+\lambda_{2}\theta^{2})\exp\left(-\frac{\theta^{2}}{2}\right),$$
or would something be missing?
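To make the second case concrete, what I would be maximising (as I understand the definition above) is
$$\epsilon(\pi)=\int \log\left(\frac{\pi(\theta)}{\phi(\theta)}\right)\phi(\theta)\, d\theta,\qquad \phi(\theta)=\frac{1}{\sqrt{2\pi}}\,e^{-\theta^{2}/2},$$
subject to the same two moment constraints.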
statistics bayesian entropy
asked Nov 14 at 22:54 by Learning