Solution To Exercise 4.4: Econometric Theory and Methods
Thus $x_1$ and $x_2$ are bivariate normal with mean zero and covariance matrix

$$\Sigma = \begin{bmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{bmatrix}.$$
At this point, the fastest way to proceed is to note that $\Sigma = AA^{\top}$, with

$$A = \begin{bmatrix} \sigma_1(1-\rho^2)^{1/2} & \rho\sigma_1 \\ 0 & \sigma_2 \end{bmatrix}.$$
It follows that $x_1$ and $x_2$ can be expressed in terms of two independent standard normal variables, $z_1$ and $z_2$, as follows:

$$x_1 = \sigma_1(1-\rho^2)^{1/2}z_1 + \rho\sigma_1 z_2, \quad\text{and}\quad x_2 = \sigma_2 z_2,$$
from which we find that $x_1 = \sigma_1(1-\rho^2)^{1/2}z_1 + (\rho\sigma_1/\sigma_2)x_2$, where $z_1$ and $x_2$ are independent, since $x_2$ depends only on $z_2$, which is independent of $z_1$ by construction. Thus $\mathrm{E}(z_1 \mid x_2) = 0$, and it is then immediate that $\mathrm{E}(x_1 \mid x_2) = (\rho\sigma_1/\sigma_2)x_2$. For the conditional variance, note that
$$x_1 - \mathrm{E}(x_1 \mid x_2) = \sigma_1(1-\rho^2)^{1/2}z_1,$$

of which the variance is $\sigma_1^2(1-\rho^2)$, as required.
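As a quick numerical sanity check, both the factorization $\Sigma = AA^{\top}$ and the conditional moments can be verified by simulation. The sketch below is illustrative only; the parameter values `rho`, `s1`, `s2` are hypothetical choices, not taken from the exercise.

```python
import numpy as np

# Hypothetical parameter values, for illustration only.
rho, s1, s2 = 0.6, 2.0, 1.5

# Covariance matrix Sigma and the factor A from the text.
Sigma = np.array([[s1**2,         rho * s1 * s2],
                  [rho * s1 * s2, s2**2        ]])
A = np.array([[s1 * np.sqrt(1.0 - rho**2), rho * s1],
              [0.0,                        s2      ]])

# The factorization Sigma = A A^T holds exactly.
assert np.allclose(A @ A.T, Sigma)

# Simulate x = A z with z bivariate standard normal, then estimate
# the slope of E(x1 | x2) and the conditional variance.
rng = np.random.default_rng(42)
z = rng.standard_normal((2, 500_000))
x1, x2 = A @ z

slope = np.cov(x1, x2)[0, 1] / np.var(x2)   # close to rho*s1/s2
cond_var = np.var(x1 - slope * x2)          # close to s1^2*(1 - rho^2)
print(slope, cond_var)
```

With these values the slope estimate settles near $\rho\sigma_1/\sigma_2 = 0.8$ and the residual variance near $\sigma_1^2(1-\rho^2) = 2.56$, matching the derivation above up to simulation noise.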
If the means of $x_1$ and $x_2$ are $\mu_1$ and $\mu_2$, respectively, then we have

$$x_1 = \mu_1 + \sigma_1(1-\rho^2)^{1/2}z_1 + \rho\sigma_1 z_2, \quad\text{and}\quad x_2 = \mu_2 + \sigma_2 z_2, \tag{S4.01}$$
where $z_1$ and $z_2$ are still independent standard normal variables. Since $z_2 = (x_2 - \mu_2)/\sigma_2$, we find that

$$\mathrm{E}(x_1 \mid x_2) = \mu_1 + (\rho\sigma_1/\sigma_2)(x_2 - \mu_2). \tag{S4.02}$$
Copyright © 2003, Russell Davidson and James G. MacKinnon
It remains true that $x_1 - \mathrm{E}(x_1 \mid x_2) = \sigma_1(1-\rho^2)^{1/2}z_1$, and so the conditional variance is unchanged.
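A similar simulation sketch, again with hypothetical parameter values, confirms (S4.02): a least-squares fit of $x_1$ on $x_2$ recovers slope $\rho\sigma_1/\sigma_2$ and intercept $\mu_1 - (\rho\sigma_1/\sigma_2)\mu_2$.

```python
import numpy as np

# Hypothetical parameter values, for illustration only.
rho, s1, s2, mu1, mu2 = 0.6, 2.0, 1.5, 3.0, -1.0

rng = np.random.default_rng(7)
z1 = rng.standard_normal(500_000)
z2 = rng.standard_normal(500_000)

# Equations (S4.01), with nonzero means.
x1 = mu1 + s1 * np.sqrt(1.0 - rho**2) * z1 + rho * s1 * z2
x2 = mu2 + s2 * z2

# Fit E(x1 | x2) = a + b*x2 by least squares.
b, a = np.polyfit(x2, x1, 1)
print(a, b)  # a close to mu1 - (rho*s1/s2)*mu2, b close to rho*s1/s2
```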
The factorization of $\Sigma$ as $AA^{\top}$ can also be used to show that $x^{\top}\Sigma^{-1}x$ is distributed as $\chi^2(2)$, and can thus be expressed as the sum of the squares of two independent standard normal variables. By $x$ we mean the vector with $x_1$ and $x_2$ as the only two components. By brute force, one can compute that
$$\Sigma^{-1} = \frac{1}{\sigma_1^2\sigma_2^2(1-\rho^2)} \begin{bmatrix} \sigma_2^2 & -\rho\sigma_1\sigma_2 \\ -\rho\sigma_1\sigma_2 & \sigma_1^2 \end{bmatrix},$$
so that

$$x^{\top}\Sigma^{-1}x = \frac{1}{\sigma_1^2\sigma_2^2(1-\rho^2)} \left( \sigma_2^2 x_1^2 - 2\rho\sigma_1\sigma_2\, x_1 x_2 + \sigma_1^2 x_2^2 \right).$$
By the operation of completing the square, the right-hand side of this equation becomes

$$\frac{1}{\sigma_1^2(1-\rho^2)} \left( \bigl(x_1 - (\rho\sigma_1/\sigma_2)x_2\bigr)^2 + x_2^2\bigl(\sigma_1^2/\sigma_2^2 - \rho^2\sigma_1^2/\sigma_2^2\bigr) \right) = \left( \frac{x_1 - (\rho\sigma_1/\sigma_2)x_2}{\sigma_1(1-\rho^2)^{1/2}} \right)^{\!2} + \left( \frac{x_2}{\sigma_2} \right)^{\!2}.$$
The two variables of which the squares are summed here can now be identified with $z_1$ and $z_2$ of (S4.01), and the rest of the argument is as above.
Since the above calculations are all either tricky or heavy, it is convenient to have a simpler way to rederive them when needed. This can be done by making use of the fact that, from (S4.02), the conditional expectation of $x_1$ is linear (more strictly, affine) with respect to $x_2$. Therefore, consider the linear regression

$$x_1 = \alpha + \beta x_2 + u. \tag{S4.03}$$
If both $x_1$ and $x_2$ have mean 0, it is obvious that $\alpha = 0$. The population analog of the standard formula for the OLS estimator of $\beta$ tells us that

$$\beta = \frac{\mathrm{Cov}(x_1, x_2)}{\mathrm{Var}(x_2)} = \frac{\rho\sigma_1\sigma_2}{\sigma_2^2} = \frac{\rho\sigma_1}{\sigma_2}.$$
Thus we see that

$$x_1 = \frac{\rho\sigma_1}{\sigma_2}\, x_2 + u. \tag{S4.04}$$
It is then immediate that $\mathrm{E}(x_1 \mid x_2) = (\rho\sigma_1/\sigma_2)x_2$. The conditional variance follows by the same argument as above, and it can also be seen to be simply the variance of $u$ in (S4.04).
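The regression shortcut is easy to mirror in code. The sketch below uses hypothetical parameter values and works purely with population moments: the slope from the covariance formula matches $\rho\sigma_1/\sigma_2$, and $\mathrm{Var}(u)$ matches the conditional variance $\sigma_1^2(1-\rho^2)$.

```python
import numpy as np

# Hypothetical parameter values, for illustration only.
rho, s1, s2 = 0.6, 2.0, 1.5

# Population moments of the bivariate normal pair (x1, x2).
cov_x1_x2 = rho * s1 * s2   # off-diagonal element of Sigma
var_x2 = s2**2

# Population analog of the OLS slope, as in (S4.03).
beta = cov_x1_x2 / var_x2
assert np.isclose(beta, rho * s1 / s2)

# Var(u) in (S4.04): Var(x1) - beta^2 Var(x2) = s1^2 (1 - rho^2).
var_u = s1**2 - beta**2 * var_x2
assert np.isclose(var_u, s1**2 * (1.0 - rho**2))
print(beta, var_u)
```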