Review-2024-04
Uploaded by Sahana Rahim

Note: Focus on conditioning; identify where to condition, and where not to.

MH2500 PROBABILITY AND INTRODUCTION TO STATISTICS

NANYANG TECHNOLOGICAL UNIVERSITY

Review Questions Part 4


..............................................................................................

These are review questions for Chapters 7 and 8 in Ross' book.

(1) Let the joint probability mass function of X and Y be given by

    p(x, y) = (x + y)/15   if x = 0, 1, 2 and y = 1, 2;
    p(x, y) = 0            otherwise.

Find p_{X|Y}(x|y) and Pr{X = 0 | Y = 2}.

Solution: To use p_{X|Y}(x|y) = p(x, y)/p_Y(y), we first need to calculate

    p_Y(y) = Σ_{x=0}^{2} p(x, y) = (1 + y)/5,

and hence

    p_{X|Y}(x|y) = p(x, y)/p_Y(y) = ((x + y)/15) / ((1 + y)/5) = (x + y)/(3 + 3y),

for x = 0, 1, 2, when y = 1 or 2. In particular,

    Pr{X = 0 | Y = 2} = (0 + 2)/(3 + 6) = 2/9.
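A quick Python sanity check (not part of the original solution): the conditional probability can be verified by exact enumeration of the joint pmf with rational arithmetic.

```python
from fractions import Fraction

# Exact enumeration of the joint pmf p(x, y) = (x + y)/15
# for x in {0, 1, 2}, y in {1, 2}.
p = {(x, y): Fraction(x + y, 15) for x in range(3) for y in (1, 2)}

pY2 = sum(p[(x, 2)] for x in range(3))   # marginal p_Y(2)
cond = p[(0, 2)] / pY2                   # Pr{X = 0 | Y = 2}

assert sum(p.values()) == 1              # the pmf is normalized
assert pY2 == Fraction(3, 5)             # (1 + y)/5 with y = 2
assert cond == Fraction(2, 9)
```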
(2) First, a point Y is selected at random from the interval (0, 1). Then another point X is
chosen from the interval (0, Y). Find the probability density function of X.

Solution: Let f(x, y) be the joint probability density function of X and Y. Then

    f_X(x) = ∫_{-∞}^{∞} f(x, y) dy.

By f_{X|Y}(x|y) = f(x, y)/f_Y(y), we have f(x, y) = f_{X|Y}(x|y) f_Y(y). This gives

    f_X(x) = ∫_{-∞}^{∞} f_{X|Y}(x|y) f_Y(y) dy.

Since Y is uniformly distributed over (0, 1),

    f_Y(y) = 1   if 0 < y < 1;
    f_Y(y) = 0   otherwise.

Also, since given Y = y, X is uniformly distributed over (0, y), we have

    f_{X|Y}(x|y) = 1/y   if 0 < x < y < 1;
    f_{X|Y}(x|y) = 0     otherwise.

Thus,

    f_X(x) = ∫_{-∞}^{∞} f_{X|Y}(x|y) f_Y(y) dy = ∫_{x}^{1} (1/y) dy = [ln y]_{x}^{1} = -ln x,

so

    f_X(x) = -ln x   if 0 < x < 1;
    f_X(x) = 0       otherwise.
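A quick Monte Carlo check (not part of the original solution): sampling Y ~ U(0, 1) and then X ~ U(0, Y), the empirical CDF at 0.5 should match the one implied by f_X(x) = -ln x, namely F(0.5) = 0.5 - 0.5 ln(0.5).

```python
import math
import random

# Sample Y ~ U(0, 1), then X ~ U(0, Y), and compare the empirical
# Pr{X <= 0.5} with the CDF implied by f_X(x) = -ln x.
random.seed(0)
n = 200_000
hits = sum(random.uniform(0, random.random()) <= 0.5 for _ in range(n))
est = hits / n
exact = 0.5 - 0.5 * math.log(0.5)        # F(0.5), about 0.8466
assert abs(est - exact) < 0.01
```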

(3) Let the conditional probability density function of X, given that Y = y, be

    f_{X|Y}(x|y) = ((x + y)/(1 + y)) e^{-x},   0 < x, y < ∞.

Find Pr{X < 1 | Y = 2}.

Solution: The probability density function of X given Y = 2 is

    f_{X|Y}(x|2) = ((x + 2)/3) e^{-x},   0 < x < ∞.

Thus,

    Pr{X < 1 | Y = 2} = ∫_0^1 ((x + 2)/3) e^{-x} dx
                      = (1/3) [ ∫_0^1 x e^{-x} dx + 2 ∫_0^1 e^{-x} dx ]
                      = (1/3) [-x e^{-x} - e^{-x}]_0^1 - (2/3) [e^{-x}]_0^1
                      = 1 - (4/3) e^{-1} ≈ 0.509.
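A quick numerical check of the integral (not part of the original solution), using a simple midpoint rule against the closed form.

```python
import math

# Midpoint-rule approximation of the integral of ((x + 2)/3) e^{-x}
# over (0, 1), compared against the closed form 1 - (4/3)e^{-1}.
n = 10_000
h = 1.0 / n
approx = h * sum(((i + 0.5) * h + 2) / 3 * math.exp(-(i + 0.5) * h)
                 for i in range(n))
exact = 1 - (4 / 3) * math.exp(-1)       # about 0.509
assert abs(approx - exact) < 1e-6
```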
(4) Let X and Y be independent exponential random variables, each with parameter λ.
Find the distribution of X + Y.

Solution: Let f_X and f_Y be the pdfs of X and Y respectively. Then

    f_X(x) = λ e^{-λx}   if x ≥ 0;
    f_X(x) = 0           otherwise,

and

    f_Y(t - x) = λ e^{-λ(t-x)}   if x ≤ t;
    f_Y(t - x) = 0               otherwise.

The pdf of X + Y is given by the convolution

    f_{X+Y}(t) = ∫_{-∞}^{∞} f_X(x) f_Y(t - x) dx = ∫_0^t [λ e^{-λx}] [λ e^{-λ(t-x)}] dx = λ² t e^{-λt},   t ≥ 0,

which is the pdf of the Gamma distribution Γ(2, λ). Hence X + Y ∼ Γ(2, λ).

Remark: We also know this from the fact that if X ∼ Γ(α, λ) and Y ∼ Γ(β, λ) are independent, then
X + Y ∼ Γ(α + β, λ). The statement in this question is the special case α = β = 1.
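A simulation sketch of this result (not part of the original solution): the sum of two independent Exp(λ) samples should match the Γ(2, λ) mean 2/λ and variance 2/λ².

```python
import random

# Simulate X + Y for independent Exp(lam) variables and compare the
# sample mean and variance with the Gamma(2, lam) values.
random.seed(1)
lam = 2.0
n = 200_000
s = [random.expovariate(lam) + random.expovariate(lam) for _ in range(n)]
mean = sum(s) / n
var = sum((v - mean) ** 2 for v in s) / n
assert abs(mean - 2 / lam) < 0.02        # 2/lam = 1.0
assert abs(var - 2 / lam ** 2) < 0.02    # 2/lam^2 = 0.5
```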

(5) What is the expected number of random digits that should be generated to obtain three
consecutive zeros?

Solution: Let X be the number of random digits to be generated until three consecutive
zeros are obtained. Let Y be the number of random digits generated until the first
nonzero digit is obtained. Then, by the law of total expectation,

    E(X) = E[E(X|Y)] = Σ_{i=1}^{∞} E[X | Y = i] Pr{Y = i}.

For i = 1, 2, 3 the first nonzero digit arrives before three zeros can accumulate, so the
problem restarts from scratch after i digits (similar to the miner's problem):
E[X | Y = i] = i + E(X). For i ≥ 4 the first three digits are already zeros, so exactly
three digits were needed: E[X | Y = i] = 3. Since Pr{Y = i} = (1/10)^{i-1} (9/10),

    E(X) = Σ_{i=1}^{3} [i + E(X)] (1/10)^{i-1} (9/10) + Σ_{i=4}^{∞} 3 (1/10)^{i-1} (9/10),

which gives

    E(X) = 1.107 + 0.999 E(X) + 0.003.

Solving this for E(X), we have E(X) = 1110.
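A simulation sketch of this answer (not part of the original solution): repeatedly generate digits 0-9 until three consecutive zeros appear, and average the count; the sample mean is noisy, so the tolerance is deliberately wide.

```python
import random

# Count the digits generated until three consecutive zeros appear.
random.seed(2)

def digits_until_three_zeros():
    run = count = 0
    while run < 3:
        count += 1
        run = run + 1 if random.randrange(10) == 0 else 0
    return count

n = 4_000
est = sum(digits_until_three_zeros() for _ in range(n)) / n
assert abs(est - 1110) < 120             # E(X) = 1110
```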
(6) A fisherman catches fish in a large lake with lots of fish, at a Poisson rate of two per
hour. If, on a given day, the fisherman spends a uniformly random amount of time,
anywhere between 3 and 8 hours, fishing, find the expected value and the variance of the
number of fish he catches.

(Note: similar to the arrival problem in the lecture notes, using conditional expectations.
When dealing with expectations, stick to expectations; there should be a way to compute
them directly instead of overthinking.)

Solution: Let X be the number of hours the fisherman spends fishing. Then X is a
uniform random variable over the interval (3, 8). Label the time the fisherman begins
fishing on the given day as t = 0. Let N(t) denote the total number of fish caught at or
prior to time t. Then {N(t) : t ≥ 0} is a Poisson process with parameter λ = 2. Assuming
that X is independent of {N(t) : t ≥ 0}, we have

    E[N(X) | X = t] = E[N(t)] = 2t.

This implies that

    E[N(X) | X] = 2X.

Therefore,

    E[N(X)] = E[E[N(X) | X]] = E[2X] = 2 E(X) = 2 · (3 + 8)/2 = 11.

Similarly,

    Var[N(X) | X = t] = Var(N(t)) = 2t,  so  Var[N(X) | X] = 2X.

Thus

    Var(N(X)) = E[Var(N(X)|X)] + Var(E[N(X)|X]) = E(2X) + Var(2X)
              = 2 E(X) + 4 Var(X) = 2 · (3 + 8)/2 + 4 · (8 - 3)²/12 ≈ 19.33.
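A simulation sketch of this answer (not part of the original solution): draw X ~ U(3, 8) and count Poisson arrivals of rate 2 in [0, X] by summing exponential inter-arrival times.

```python
import random

# Simulate N(X): X ~ U(3, 8) hours, arrivals at Poisson rate 2 per hour.
random.seed(3)
n = 50_000
counts = []
for _ in range(n):
    hours = random.uniform(3, 8)
    t, c = 0.0, 0
    while True:
        t += random.expovariate(2.0)     # next inter-arrival time
        if t > hours:
            break
        c += 1
    counts.append(c)

mean = sum(counts) / n
var = sum((c - mean) ** 2 for c in counts) / n
assert abs(mean - 11) < 0.15             # E[N(X)] = 11
assert abs(var - 58 / 3) < 0.8           # Var(N(X)) = 19.33...
```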
(7) Let X and Y be two random variables and f be a real-valued function on R. Prove
that

    E[f(Y) X | Y] = f(Y) E(X|Y).

Proof: If Y = y, then f(Y) E(X|Y) is f(y) E(X | Y = y). We show that E[f(Y) X | Y]
is equal to this. Let f_{X|Y}(x|y) be the conditional pdf of X given that Y = y. Then

    E[f(Y) X | Y = y] = E[f(y) X | Y = y] = ∫_{-∞}^{∞} f(y) x · f_{X|Y}(x|y) dx
                      = f(y) ∫_{-∞}^{∞} x · f_{X|Y}(x|y) dx = f(y) E(X | Y = y).

This gives the wanted identity E[f(Y) X | Y] = f(Y) E(X|Y).
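A discrete illustration of this identity (not part of the original proof), reusing the joint pmf from question (1) with an arbitrary test function f(y) = y², chosen only for illustration.

```python
from fractions import Fraction

# Check E[f(Y)X | Y = y] = f(y) E[X | Y = y] on the pmf of question (1).
p = {(x, y): Fraction(x + y, 15) for x in range(3) for y in (1, 2)}
f = lambda y: y * y                      # arbitrary test function

for y in (1, 2):
    pY = sum(p[(x, y)] for x in range(3))
    lhs = sum(f(y) * x * p[(x, y)] / pY for x in range(3))
    rhs = f(y) * sum(x * p[(x, y)] / pY for x in range(3))
    assert lhs == rhs
```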
(8) Let X be a random variable and k be a positive constant. Prove that

    Pr{X > t} ≤ E[e^{kX}] / e^{kt}.

Proof: As k > 0,

    Pr{X > t} = Pr{kX > kt} = Pr{e^{kX} > e^{kt}} ≤ E[e^{kX}] / e^{kt},

where the last step is by Markov's inequality.

(Note: when there is no obvious sum or sample size, it is clearly not a CLT problem;
equal distances from the mean suggest Chebyshev's inequality.)
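An empirical illustration of this bound (not part of the original proof), with X ~ N(0, 1), k = 1 and t = 1; the choice of distribution is an assumption made only for illustration.

```python
import math
import random

# Check Pr{X > t} <= E[e^{kX}] / e^{kt} empirically for X ~ N(0, 1).
random.seed(4)
n = 100_000
xs = [random.gauss(0, 1) for _ in range(n)]
prob = sum(x > 1 for x in xs) / n        # Pr{X > 1}, about 0.159
mgf = sum(math.exp(x) for x in xs) / n   # E[e^X] = e^{1/2}, about 1.649
bound = mgf * math.exp(-1)               # E[e^X]/e^t, about 0.607
assert prob <= bound
```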
(9) Suppose that, on average, a post office handles 10,000 letters a day with a variance of
2000. What can be said about the probability that this post office will handle between
8000 and 12,000 letters tomorrow?

Solution: Let X denote the number of letters that this post office will handle tomorrow.
Then

    µ = E(X) = 10,000,   σ² = Var(X) = 2000.

We want to calculate Pr{8000 < X < 12,000}. Here it is:

    Pr{8000 < X < 12,000} = Pr{-2000 < X - 10,000 < 2000}
                          = Pr{|X - 10,000| < 2000}
                          = 1 - Pr{|X - 10,000| ≥ 2000}.

By Chebyshev's inequality,

    Pr{|X - 10,000| ≥ 2000} ≤ 2000/(2000)² = 1/2000.

Hence,

    Pr{8000 < X < 12,000} = 1 - Pr{|X - 10,000| ≥ 2000} ≥ 1 - 1/2000 = 0.9995.

Note that this answer is consistent with our intuitive understanding of the concepts of
expectation and variance.
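The Chebyshev arithmetic can be reproduced exactly with rational numbers (a sanity check, not part of the original solution):

```python
from fractions import Fraction

# Chebyshev bound Pr{|X - mu| >= eps} <= var/eps^2 with var = 2000, eps = 2000.
var, eps = 2000, 2000
upper = Fraction(var, eps ** 2)
assert upper == Fraction(1, 2000)
assert float(1 - upper) == 0.9995        # the lower bound on the probability
```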

(10) A blind will fit Laura's bedroom window if its width is between 41.5 and 42.5 inches.
Laura buys a blind from a store that has 30 such blinds. What can be said about the
probability that it fits her window if the average of the widths of the blinds is 42 inches
with standard deviation 0.25?

Solution: Let X be the width of the blind that Laura purchased. By Chebyshev's inequality,

    Pr{|X - µ| < kσ} ≥ 1 - 1/k².

Thus, with µ = 42, σ = 0.25 and k = 2,

    Pr{|X - 42| < 2 · (0.25)} ≥ 1 - (1/2)² = 0.75.

(Note: not a very good lower bound, since in practice the blind will almost certainly fit.)
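An illustration of why the bound is conservative (not part of the original solution): Chebyshev holds for any distribution with µ = 42 and σ = 0.25, but if we assume, for illustration only, that widths are normal, the true probability is about 0.9545.

```python
import random

# Compare the Chebyshev lower bound 0.75 with the probability under a
# normal model (an assumption made only for this illustration).
random.seed(5)
n = 200_000
hits = sum(abs(random.gauss(42, 0.25) - 42) < 0.5 for _ in range(n)) / n
assert hits >= 0.75                      # the Chebyshev bound holds
assert abs(hits - 0.9545) < 0.01         # two-sigma probability under normality
```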

(11) The lifetime of a TV tube (in years) is an exponential random variable with mean 10.
What is the probability that the average lifetime of a random sample of 36 TV tubes is
at least 10.5 years?

Solution: The parameter of the exponential density function of the lifetime of a tube
is λ = 1/10. For 1 ≤ i ≤ 36, let X_i be the lifetime of the i-th TV tube in the sample.
Clearly, for each i, E(X_i) = 1/λ = 10 and σ_{X_i} = 1/λ = 10. By the Central Limit
Theorem,

    Pr{X̄ > 10.5} = Pr{ (X̄ - 10)/(10/√36) > (10.5 - 10)/(10/√36) } = Pr{Z > 0.30}
                  ≈ 1 - Φ(0.30) = 1 - 0.6179 = 0.3821.

Here Z is a standard normal random variable.
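A check on the quality of the CLT approximation (not part of the original solution): the sum of 36 iid Exp(1/10) lifetimes is exactly Gamma(36, rate 0.1), so the exact tail can be computed via the Gamma-Poisson identity and compared with 0.3821.

```python
import math

# Exact tail: Pr{sum > 36 * 10.5 = 378} for Gamma(36, rate 0.1) equals
# Pr{Poisson(0.1 * 378 = 37.8) <= 35} by the Gamma-Poisson identity.
lam = 0.1 * 36 * 10.5                    # = 37.8
term = math.exp(-lam)
exact = 0.0
for i in range(36):                      # sum the Poisson pmf for i = 0..35
    exact += term
    term *= lam / (i + 1)
# The CLT value 0.3821 is only an approximation of this exact tail.
assert abs(exact - 0.3821) < 0.03
```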

(12) A biologist wants to estimate ℓ, the life expectancy of a certain type of insect. To do so,
he takes a sample of size n and measures the lifetime from birth to death of each insect.
Then he finds the average of these numbers. If he believes that the lifetimes of these
insects are independent random variables with variance 1.5 days², how large a sample
should he choose to be 98% sure that his average is accurate within ±0.2 days (±4.8 hours)?

Solution: For i ≤ n, let X_i be the lifetime of the i-th insect of the sample. Saying the
average is accurate within ±0.2 means controlling the difference between the estimated
mean and the actual mean ℓ, so we want to determine n so that

    Pr{ -0.2 < (X_1 + ··· + X_n)/n - ℓ < 0.2 } ≈ 0.98.

As E(X_i) = ℓ and Var(X_i) = 1.5, by the Central Limit Theorem,

    Pr{ -0.2 < (X_1 + ··· + X_n)/n - ℓ < 0.2 } = Pr{ -(0.2)n < Σ_{i=1}^{n} X_i - nℓ < (0.2)n },

which is equal to

    Pr{ -(0.2)n/√(1.5n) < (Σ_{i=1}^{n} X_i - nℓ)/√(1.5n) < (0.2)n/√(1.5n) }
        ≈ Φ(0.2n/√(1.5n)) - Φ(-0.2n/√(1.5n)) = 2Φ(0.2√n/√1.5) - 1.

Thus, the quantity 2Φ(0.2√n/√1.5) - 1 should be approximately equal to 0.98. That is,
Φ(0.2√n/√1.5) ≈ 0.99. This shows that 0.2√n/√1.5 ≈ 2.33 (from the table for the
standard normal variable), which gives n ≈ 203.58. This shows that the biologist should
choose a sample of size 204.
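The final arithmetic can be reproduced directly (a sanity check, not part of the original solution), using the table value z = 2.33 for Φ(z) = 0.99.

```python
import math

# Solve 0.2 * sqrt(n) / sqrt(1.5) = 2.33 for n and round up.
z = 2.33
n = (z * math.sqrt(1.5) / 0.2) ** 2
assert abs(n - 203.58) < 0.1
assert math.ceil(n) == 204               # the required sample size
```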
