
Advanced Mathematics and Statistics

Module 2 - Advanced Statistical Methods


An exercise on Bayesian statistics

Exercise. Let $X_1, \dots, X_n \mid \sigma \overset{\text{iid}}{\sim} f(\,\cdot \mid \sigma)$, where
$$
f(x \mid \sigma) = \frac{1}{x} \sqrt{\frac{\sigma}{2\pi}} \, \exp\left\{ -\frac{\sigma}{2} (\log x)^2 \right\}, \qquad x > 0,
$$
and $\sigma$ is a positive quantity whose prior distribution over $\mathbb{R}^+$ is a gamma with shape–rate parameters $(1, 2)$, i.e. $p(\sigma) = 2e^{-2\sigma}$ for $\sigma > 0$.

(a) Determine the posterior distribution of $\sigma$, given $X_1 = x_1, \dots, X_n = x_n$.

(b) Determine the Bayes estimator $\hat{\sigma}_p$ of $\sigma$ under a squared loss function.

(c) Assuming $\sigma$ fixed, i.e. $X_1, \dots, X_n \overset{\text{iid}}{\sim} f(\,\cdot \mid \sigma)$, determine the MLE $\hat{\sigma}$ of $\sigma$ and show that $\hat{\sigma}_p / \hat{\sigma} \to 1$ in probability as $n \to +\infty$.
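Note that $f(\,\cdot \mid \sigma)$ is the density of $X = e^Z$ with $Z \sim N(0, 1/\sigma)$: the change of variables $z = \log x$ gives $f_X(x) = f_Z(\log x)/x$, i.e.
$$
f_X(x) = \frac{1}{x} \sqrt{\frac{\sigma}{2\pi}} \, \exp\left\{ -\frac{\sigma}{2} (\log x)^2 \right\}, \qquad x > 0,
$$
so $\sigma$ plays the role of the precision (inverse variance) of $\log X$.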

Solution
(a) Recall that the likelihood function of the data is
$$
f(x_1, \dots, x_n \mid \sigma) = \frac{1}{\prod_{i=1}^n x_i} \left( \frac{\sigma}{2\pi} \right)^{n/2} \exp\left\{ -\frac{\sigma}{2} \sum_{i=1}^n \log(x_i)^2 \right\}.
$$
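Indeed, by conditional independence the joint density factorizes into the product of the marginal densities, and the exponentials combine into a single sum:
$$
f(x_1, \dots, x_n \mid \sigma) = \prod_{i=1}^n \frac{1}{x_i} \sqrt{\frac{\sigma}{2\pi}} \, \exp\left\{ -\frac{\sigma}{2} \log(x_i)^2 \right\}.
$$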

We may apply Bayes' theorem to determine the posterior:
$$
p(\sigma \mid x_1, \dots, x_n) \propto f(x_1, \dots, x_n \mid \sigma) \, p(\sigma) \propto \sigma^{n/2} \exp\left\{ -\sigma \left( \frac{1}{2} \sum_{i=1}^n \log(x_i)^2 + 2 \right) \right\},
$$
where every factor not depending on $\sigma$ has been dropped. This is the kernel of a gamma density, therefore
$$
\sigma \mid X_1 = x_1, \dots, X_n = x_n \sim \mathrm{Gamma}\!\left( \frac{n}{2} + 1, \; \frac{1}{2} \sum_{i=1}^n \log(x_i)^2 + 2 \right).
$$
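The parameters can be read off by matching the expression above against the gamma kernel $\sigma^{a-1} e^{-b\sigma}$:
$$
\sigma^{n/2} = \sigma^{(n/2 + 1) - 1} \quad \Longrightarrow \quad a = \frac{n}{2} + 1, \qquad b = \frac{1}{2} \sum_{i=1}^n \log(x_i)^2 + 2.
$$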

(b) The Bayes estimator of $\sigma$ under a squared loss function is the posterior mean:
$$
\hat{\sigma}_p = \int_0^\infty \sigma \, p(\sigma \mid x_1, \dots, x_n) \, d\sigma = \frac{n/2 + 1}{\frac{1}{2} \sum_{i=1}^n \log(x_i)^2 + 2} = \frac{n + 2}{4 + \sum_{i=1}^n \log(x_i)^2},
$$
where we used the fact that the mean of a gamma with shape–rate parameters $(a, b)$ equals $a/b$.
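This fact follows from the recursion $\Gamma(a+1) = a \, \Gamma(a)$:
$$
\int_0^\infty \sigma \, \frac{b^a}{\Gamma(a)} \, \sigma^{a-1} e^{-b\sigma} \, d\sigma = \frac{b^a}{\Gamma(a)} \cdot \frac{\Gamma(a+1)}{b^{a+1}} = \frac{a}{b}.
$$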
(c) Now we have to maximize the likelihood function:
$$
L(\sigma) = f(x_1, \dots, x_n \mid \sigma) = \frac{1}{\prod_{i=1}^n x_i} \left( \frac{\sigma}{2\pi} \right)^{n/2} \exp\left\{ -\frac{\sigma}{2} \sum_{i=1}^n \log(x_i)^2 \right\}.
$$

For simplicity, we consider the log-likelihood function
$$
\ell(\sigma) = \log(L(\sigma)) = -\sum_{i=1}^n \log(x_i) + \frac{n}{2} \log(\sigma/(2\pi)) - \frac{\sigma}{2} \sum_{i=1}^n \log(x_i)^2.
$$

Differentiating with respect to $\sigma$, it is easy to see that
$$
\frac{\partial}{\partial \sigma} \ell(\sigma) = \frac{n}{2\sigma} - \frac{1}{2} \sum_{i=1}^n \log(x_i)^2 \;\geq\; 0 \quad \text{iff} \quad \sigma \leq \frac{n}{\sum_{i=1}^n \log(x_i)^2},
$$
therefore
$$
\hat{\sigma} = \frac{n}{\sum_{i=1}^n \log(X_i)^2}
$$
is the MLE of $\sigma$: the log-likelihood increases up to this point and decreases afterwards. In order to prove the convergence in probability, we use the consistency of the MLE; indeed one has $\hat{\sigma} \xrightarrow{\;p\;} \sigma$ as $n \to +\infty$.
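In this model the consistency can also be verified directly: since $\log X_i \sim N(0, 1/\sigma)$, the weak law of large numbers gives
$$
\frac{1}{n} \sum_{i=1}^n \log(X_i)^2 \;\xrightarrow{\;p\;}\; \mathbb{E}\big[ \log(X_1)^2 \big] = \frac{1}{\sigma},
$$
and the continuous mapping theorem then yields $\hat{\sigma} = \big( n^{-1} \sum_{i=1}^n \log(X_i)^2 \big)^{-1} \xrightarrow{\;p\;} \sigma$.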
As a consequence we obtain
$$
\frac{\hat{\sigma}_p}{\hat{\sigma}} = \frac{n+2}{4 + \sum_{i=1}^n \log(X_i)^2} \cdot \frac{1}{\hat{\sigma}} = \frac{n+2}{4 + n/\hat{\sigma}} \cdot \frac{1}{\hat{\sigma}} \;\xrightarrow{\;p\;}\; \frac{\sigma}{\sigma} = 1
$$
as $n \to +\infty$, since $\frac{n+2}{4 + n/\hat{\sigma}} = \frac{1 + 2/n}{4/n + 1/\hat{\sigma}} \xrightarrow{\;p\;} \sigma$ and $1/\hat{\sigma} \xrightarrow{\;p\;} 1/\sigma$.
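This limit is also easy to see in simulation. Below is a minimal sketch, assuming NumPy is available; the true value $\sigma = 2.5$ and the sample sizes are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
sigma = 2.5  # hypothetical true precision of log X

for n in (10, 100, 10_000):
    # X_i = exp(Z_i) with Z_i ~ N(0, 1/sigma) has density f(. | sigma);
    # only log(X_i) = Z_i enters the estimators, so we keep Z directly.
    z = rng.normal(0.0, 1.0 / np.sqrt(sigma), size=n)
    s = np.sum(z**2)                  # sum_i log(X_i)^2

    sigma_mle = n / s                 # MLE from part (c)
    sigma_bayes = (n + 2) / (4 + s)   # posterior mean from part (b)
    print(f"n={n:6d}  ratio={sigma_bayes / sigma_mle:.4f}")

The printed ratio tends to 1 as $n$ grows, in agreement with the result above.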
