Solutions of Selected Problems from Probability
Essentials, Second Edition
Solutions to selected problems of Chapter 2
2.1 Let’s first prove by induction that #(2^{Ω_n}) = 2^n if Ω_n = {x_1, . . . , x_n}. For n = 1 it is clear that #(2^{Ω_1}) = #({∅, {x_1}}) = 2. Suppose #(2^{Ω_{n−1}}) = 2^{n−1}. Observe that 2^{Ω_n} = {{x_n} ∪ A : A ∈ 2^{Ω_{n−1}}} ∪ 2^{Ω_{n−1}}, hence #(2^{Ω_n}) = 2 · #(2^{Ω_{n−1}}) = 2^n. This proves finiteness. To show that 2^Ω is a σ-algebra we check:
1. ∅ ⊂ Ω, hence ∅ ∈ 2^Ω.
2. If A ∈ 2^Ω then A ⊂ Ω and A^c ⊂ Ω, hence A^c ∈ 2^Ω.
3. Let (A_n)_{n≥1} be a sequence of subsets of Ω. Then ∪_{n=1}^∞ A_n is also a subset of Ω, hence in 2^Ω.
Therefore 2^Ω is a σ-algebra.
2.2 We check that H = ∩_{α∈A} G_α has the three properties of a σ-algebra:
1. ∅ ∈ G_α for all α ∈ A, hence ∅ ∈ ∩_{α∈A} G_α.
2. If B ∈ ∩_{α∈A} G_α then B ∈ G_α for all α ∈ A. This implies that B^c ∈ G_α for all α ∈ A since each G_α is a σ-algebra. So B^c ∈ ∩_{α∈A} G_α.
3. Let (A_n)_{n≥1} be a sequence in H. Since each A_n ∈ G_α and each G_α is a σ-algebra, ∪_{n=1}^∞ A_n is in G_α for every α ∈ A. Hence ∪_{n=1}^∞ A_n ∈ ∩_{α∈A} G_α.
Therefore H = ∩_{α∈A} G_α is a σ-algebra.
2.3 a. Let x ∈ (∪_{n=1}^∞ A_n)^c. Then x ∈ A_n^c for all n, hence x ∈ ∩_{n=1}^∞ A_n^c. So (∪_{n=1}^∞ A_n)^c ⊂ ∩_{n=1}^∞ A_n^c. Similarly, if x ∈ ∩_{n=1}^∞ A_n^c then x ∈ A_n^c for every n, hence x ∈ (∪_{n=1}^∞ A_n)^c. So (∪_{n=1}^∞ A_n)^c = ∩_{n=1}^∞ A_n^c.
b. By part a, ∩_{n=1}^∞ A_n = (∪_{n=1}^∞ A_n^c)^c, hence (∩_{n=1}^∞ A_n)^c = ∪_{n=1}^∞ A_n^c.
2.4 lim inf_{n→∞} A_n = ∪_{n=1}^∞ B_n where B_n = ∩_{m≥n} A_m ∈ A for every n, since A is closed under taking countable intersections. Therefore lim inf_{n→∞} A_n ∈ A since A is closed under taking countable unions.
By De Morgan’s law it is easy to see that lim sup_{n→∞} A_n = (lim inf_{n→∞} A_n^c)^c, hence lim sup_{n→∞} A_n ∈ A since lim inf_{n→∞} A_n^c ∈ A and A is closed under taking complements.
Note that x ∈ lim inf_{n→∞} A_n ⇒ ∃ n* s.t. x ∈ A_m for all m ≥ n* ⇒ x ∈ ∪_{m≥n} A_m for every n ⇒ x ∈ lim sup_{n→∞} A_n. Therefore lim inf_{n→∞} A_n ⊂ lim sup_{n→∞} A_n.
2.8 Let L = {B ⊂ R : f^{−1}(B) ∈ B}. It is easy to check that L is a σ-algebra. Since f is continuous, f^{−1}(B) is open (hence Borel) whenever B is open. Therefore L contains the open sets, which implies L ⊃ B since B is generated by the open sets of R. This proves that f^{−1}(B) ∈ B whenever B ∈ B, i.e. A = {A ⊂ R : A = f^{−1}(B) for some B ∈ B} ⊂ B.
Solutions to selected problems of Chapter 3
3.7 a. Since P(B) > 0, P(·|B) defines a probability measure on A; therefore by Theorem 2.4, lim_{n→∞} P(A_n|B) = P(A|B).
b. We have that A ∩ B_n → A ∩ B since 1_{A∩B_n}(ω) = 1_A(ω)1_{B_n}(ω) → 1_A(ω)1_B(ω). Hence P(A ∩ B_n) → P(A ∩ B). Also P(B_n) → P(B). Hence
P(A|B_n) = P(A ∩ B_n)/P(B_n) → P(A ∩ B)/P(B) = P(A|B).
c. P(A_n|B_n) = P(A_n ∩ B_n)/P(B_n) → P(A ∩ B)/P(B) = P(A|B),
since A_n ∩ B_n → A ∩ B and B_n → B.
3.11 Let B = {x_1, x_2, . . . , x_b} and R = {y_1, y_2, . . . , y_r} be the sets of b blue balls and r red balls respectively. Let B′ = {x_{b+1}, x_{b+2}, . . . , x_{b+d}} and R′ = {y_{r+1}, y_{r+2}, . . . , y_{r+d}} be the sets of d new blue balls and d new red balls respectively. Then we can write down the sample space Ω as
Ω = {(a, b) : (a ∈ B and b ∈ B ∪ B′ ∪ R) or (a ∈ R and b ∈ R ∪ R′ ∪ B)}.
Clearly card(Ω) = b(b + d + r) + r(b + d + r) = (b + r)(b + d + r). Now we can define a probability measure P on 2^Ω by
P(A) = card(A)/card(Ω).
a. Let
A = {second ball drawn is blue} = {(a, b) : a ∈ B, b ∈ B ∪ B′} ∪ {(a, b) : a ∈ R, b ∈ B}.
Then card(A) = b(b + d) + rb = b(b + d + r), hence P(A) = b/(b + r).
b. Let
B = {first ball drawn is blue} = {(a, b) ∈ Ω : a ∈ B}.
Observe A ∩ B = {(a, b) : a ∈ B, b ∈ B ∪ B′} and card(A ∩ B) = b(b + d). Hence
P(B|A) = P(A ∩ B)/P(A) = card(A ∩ B)/card(A) = (b + d)/(b + d + r).
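As a numerical sanity check, the following Python sketch simulates the scheme directly; the values b = 3, r = 5, d = 4 are arbitrary illustrative choices.

```python
import random

def draw_twice(b, r, d):
    """One trial: draw a ball, return it together with d balls of the
    same colour, then draw again. Returns (first_blue, second_blue)."""
    blue, red = b, r
    first_blue = random.random() < blue / (blue + red)
    if first_blue:
        blue += d
    else:
        red += d
    second_blue = random.random() < blue / (blue + red)
    return first_blue, second_blue

random.seed(0)
b, r, d, trials = 3, 5, 4, 200_000
both = second = 0
for _ in range(trials):
    first, snd = draw_twice(b, r, d)
    second += snd
    both += first and snd
print("P(2nd blue)            ~", second / trials, "  exact:", b / (b + r))
print("P(1st blue | 2nd blue) ~", both / second,
      "  exact:", (b + d) / (b + d + r))
```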
3.17 We will use the inequality 1 − x ≤ e^{−x} for x ≥ 0, which is obtained from the Taylor expansion of e^{−x} around 0. By independence of the A_i (hence of the A_i^c),
P((A_1 ∪ . . . ∪ A_n)^c) = P(A_1^c ∩ . . . ∩ A_n^c)
= (1 − P(A_1)) · · · (1 − P(A_n))
≤ exp(−P(A_1)) · · · exp(−P(A_n)) = exp(−∑_{i=1}^n P(A_i)).
Solutions to selected problems of Chapter 4
4.1 Observe that
P(k successes) = (n choose k) (λ/n)^k (1 − λ/n)^{n−k} = C a_n b_{1,n} · · · b_{k,n} d_n
where
C = λ^k/k!,  a_n = (1 − λ/n)^n,  b_{j,n} = (n − j + 1)/n,  d_n = (1 − λ/n)^{−k}.
It is clear that b_{j,n} → 1 for every j and d_n → 1 as n → ∞. Observe that
log((1 − λ/n)^n) = n(−λ/n − (λ²/n²)(1/(2ξ²))) for some ξ ∈ (1 − λ/n, 1)
by the Taylor expansion of log(x) around 1. It follows that a_n → e^{−λ} as n → ∞ and that
|Error| = |e^{n log(1−λ/n)} − e^{−λ}|, where the difference of the exponents is
|n log(1 − λ/n) + λ| = n (λ²/n²)(1/(2ξ²)) ≥ λp/2.
Hence in order to have a good approximation we need n large and p small, as well as λ = np of moderate size.
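A numerical illustration of this: for fixed λ, the worst-case pmf error of the Poisson approximation shrinks as n grows and p = λ/n shrinks. The choice λ = 3 is arbitrary.

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    return exp(-lam) * lam**k / factorial(k)

lam = 3.0
for n in (10, 100, 1000):
    p = lam / n
    # comb(n, k) = 0 for k > n, so ranging k past n is harmless.
    err = max(abs(binom_pmf(k, n, p) - poisson_pmf(k, lam))
              for k in range(41))
    print(f"n={n:5d}  p={p:.4f}  max |binomial - poisson| = {err:.2e}")
```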
Solutions to selected problems of Chapter 5
5.7 We put x_n = P(X is even) for X ∼ B(p, n). Let us prove by induction that x_n = (1/2)(1 + (1 − 2p)^n). For n = 1, x_1 = 1 − p = (1/2)(1 + (1 − 2p)¹). Assume the formula is true for n − 1. Conditioning on the outcome of the first trial we can write
x_n = p(1 − x_{n−1}) + (1 − p)x_{n−1}
= p(1 − (1/2)(1 + (1 − 2p)^{n−1})) + (1 − p)(1/2)(1 + (1 − 2p)^{n−1})
= (1/2)(1 + (1 − 2p)^n),
hence we have the result.
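A quick numerical check of the closed form against the binomial pmf summed directly over the even values:

```python
from math import comb

def p_even(n, p):
    """P(X even) for X ~ Binomial(n, p), summed directly from the pmf."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(0, n + 1, 2))

# Compare the direct sum with the closed form (1 + (1 - 2p)^n) / 2.
for n, p in [(1, 0.3), (5, 0.3), (10, 0.7), (20, 0.05)]:
    closed = (1 + (1 - 2 * p)**n) / 2
    print(n, p, p_even(n, p), closed)
```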
5.11 Observe that E[|X − λ|] = ∑_{i<λ} (λ − i)p_i + ∑_{i≥λ} (i − λ)p_i. Since ∑_{i≥λ} (i − λ)p_i = ∑_{i=0}^∞ (i − λ)p_i − ∑_{i<λ} (i − λ)p_i and ∑_{i=0}^∞ (i − λ)p_i = E[X] − λ = 0, we have that E[|X − λ|] = 2 ∑_{i<λ} (λ − i)p_i. So
E[|X − λ|] = 2 ∑_{i=0}^{λ−1} (λ − i) e^{−λ} λ^i/i!
= 2e^{−λ} ∑_{i=0}^{λ−1} (λ^{i+1}/i! − λ^i/(i − 1)!)
= 2e^{−λ} λ^λ/(λ − 1)!,
since the sum telescopes (with the convention 1/(−1)! = 0).
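A numerical check of the closed form for a few integer values of λ, computing the Poisson pmf recursively to avoid overflow:

```python
from math import exp, factorial

def mean_abs_dev(lam, cutoff=200):
    """E|X - lam| for X ~ Poisson(lam), by direct (truncated) summation."""
    pmf = exp(-lam)           # P(X = 0)
    total = 0.0
    for i in range(cutoff):
        total += abs(i - lam) * pmf
        pmf *= lam / (i + 1)  # P(X = i+1) from P(X = i)
    return total

# Compare with 2 e^{-lam} lam^lam / (lam - 1)! for integer lam.
for lam in (1, 2, 5, 10):
    closed = 2 * exp(-lam) * lam**lam / factorial(lam - 1)
    print(lam, mean_abs_dev(lam), closed)
```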
Solutions to selected problems of Chapter 7
7.1 Suppose lim_{n→∞} P(A_n) ≠ 0. Then there exists ε > 0 such that there are distinct A_{n_1}, A_{n_2}, . . . with P(A_{n_k}) > ε for every k ≥ 1. This gives ∑_{k=1}^∞ P(A_{n_k}) = ∞, which is a contradiction since, the A_n being disjoint, we have ∑_{k=1}^∞ P(A_{n_k}) = P(∪_{k=1}^∞ A_{n_k}) ≤ 1.
7.2 Let A_n = {A_β : P(A_β) > 1/n}. Each A_n is a finite set, for otherwise we could pick disjoint A_{β_1}, A_{β_2}, . . . in A_n, which would give P(∪_{m=1}^∞ A_{β_m}) = ∑_{m=1}^∞ P(A_{β_m}) = ∞, a contradiction. Now {A_β : β ∈ B} = ∪_{n=1}^∞ A_n, hence (A_β)_{β∈B} is countable since it is a countable union of finite sets.
7.11 Note that {x_0} = ∩_{n=1}^∞ [x_0 − 1/n, x_0], therefore {x_0} is a Borel set and P({x_0}) = lim_{n→∞} P([x_0 − 1/n, x_0]). Assuming that f is continuous, f is bounded by some M on the interval [x_0 − 1, x_0], hence P({x_0}) = lim_{n→∞} P([x_0 − 1/n, x_0]) ≤ lim_{n→∞} M(1/n) = 0.
Remark: For this result to be true we don’t need f to be continuous. When we define the Lebesgue integral (or, more generally, the integral with respect to a measure) and study its properties, we will see that this result is true for all Borel measurable non-negative f.
7.16 First observe that F(x) − F(x−) > 0 iff P({x}) > 0. The family of events {{x} : P({x}) > 0} is at most countable, as we proved in Problem 7.2, since these events are disjoint and have positive probability. Hence F can have at most countably many discontinuities. For an example with infinitely many jump discontinuities consider the Poisson distribution.
7.18 Let F be as given. It is clear that F is a nondecreasing function. For x < 0 and x ≥ 1 right continuity of F is clear. For any 0 < x < 1 let i* be such that 1/(i*+1) ≤ x < 1/i*. If x_n ↓ x then there exists N such that 1/(i*+1) ≤ x_n < 1/i* for every n ≥ N. Hence F(x_n) = F(x) for every n ≥ N, which implies that F is right continuous at x. For x = 0 we have that F(0) = 0. Note that for any ε there exists N such that ∑_{i=N}^∞ 2^{−i} < ε. So for all x with |x| ≤ 1/N we have that F(x) ≤ ε. Hence F(0+) = 0. This proves the right continuity of F for all x. We also have that F(∞) = ∑_{i=1}^∞ 2^{−i} = 1 and F(−∞) = 0, so F is the distribution function of a probability on R.
a. P([1,∞)) = F(∞) − F(1−) = 1 − ∑_{i=2}^∞ 2^{−i} = 1 − 1/2 = 1/2.
b. P([1/10,∞)) = F(∞) − F(1/10−) = 1 − ∑_{i=11}^∞ 2^{−i} = 1 − 2^{−10}.
c. P({0}) = F(0) − F(0−) = 0.
d. P([0, 1/2)) = F(1/2−) − F(0−) = ∑_{i=3}^∞ 2^{−i} − 0 = 1/4.
e. P((−∞, 0)) = F(0−) = 0.
f. P((0,∞)) = 1 − F(0) = 1.
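A short numerical sketch of these values, assuming as above that F has a jump of size 2^{−i} at each point 1/i (the series is truncated at 60 terms, and the small eps is a crude stand-in for evaluating left limits F(x−)):

```python
def F(x, terms=60):
    """F(x) = sum of 2^{-i} over all i >= 1 with 1/i <= x (truncated)."""
    if x <= 0:
        return 0.0
    return sum(2.0**-i for i in range(1, terms + 1) if 1.0 / i <= x)

eps = 1e-12
print("a.", F(float("inf")) - F(1 - eps))      # 1/2
print("b.", F(float("inf")) - F(0.1 - eps))    # 1 - 2^-10 = 0.9990234375
print("c.", F(0) - F(-eps))                    # 0
print("d.", F(0.5 - eps) - F(-eps))            # 1/4
```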
Solutions to selected problems of Chapter 9
9.1 It is clear by the definition of F that X^{−1}(B) ∈ F for every B ∈ B. So X is measurable from (Ω, F) to (R, B).
9.2 Since X is both F and G measurable and F and G are independent, for any B ∈ B we have P(X ∈ B) = P(X ∈ B)P(X ∈ B), so P(X ∈ B) = 0 or 1. Without loss of generality we can assume that there exists a closed interval I such that P(X ∈ I) = 1. Let Λ_n = {t^n_0, . . . , t^n_{l_n}} be a partition of I such that Λ_n ⊂ Λ_{n+1} and sup_k (t^n_k − t^n_{k−1}) → 0. For each n there exists k*(n) such that P(X ∈ [t^n_{k*(n)}, t^n_{k*(n)+1}]) = 1 and [t^{n+1}_{k*(n+1)}, t^{n+1}_{k*(n+1)+1}] ⊂ [t^n_{k*(n)}, t^n_{k*(n)+1}]. Now a_n = t^n_{k*(n)} and b_n = t^n_{k*(n)+1} are both Cauchy sequences with a common limit c. So 1 = lim_{n→∞} P(X ∈ [t^n_{k*(n)}, t^n_{k*(n)+1}]) = P(X = c).
9.3 X^{−1}(A) = (Y^{−1}(A) ∩ (Y^{−1}(A) ∩ X^{−1}(A)^c)^c) ∪ (X^{−1}(A) ∩ Y^{−1}(A)^c). Observe that both Y^{−1}(A) ∩ (X^{−1}(A))^c and X^{−1}(A) ∩ (Y^{−1}(A))^c are null sets and therefore measurable. Hence if Y^{−1}(A) ∈ A′ then X^{−1}(A) ∈ A′. In other words, if Y is A′ measurable then so is X.
9.4 Since X is integrable, for any ε > 0 there exists M such that E[|X| 1_{{X>M}}] < ε, by the dominated convergence theorem. Note that
E[X 1_{A_n}] = E[X 1_{A_n} 1_{{X>M}}] + E[X 1_{A_n} 1_{{X≤M}}] ≤ E[|X| 1_{{X>M}}] + M P(A_n).
Since P(A_n) → 0, there exists N such that P(A_n) ≤ ε/M for every n ≥ N. Therefore E[X 1_{A_n}] ≤ ε + ε for all n ≥ N, i.e. lim_{n→∞} E[X 1_{A_n}] = 0.
9.5 It is clear that 0 ≤ Q(A) ≤ 1 and Q(Ω) = 1 since X is nonnegative and E[X] = 1. Let A_1, A_2, . . . be disjoint. Then
Q(∪_{n=1}^∞ A_n) = E[X 1_{∪_{n=1}^∞ A_n}] = E[∑_{n=1}^∞ X 1_{A_n}] = ∑_{n=1}^∞ E[X 1_{A_n}],
where the last equality follows from the monotone convergence theorem. Hence Q(∪_{n=1}^∞ A_n) = ∑_{n=1}^∞ Q(A_n). Therefore Q is a probability measure.
9.6 If P(A) = 0 then X 1_A = 0 a.s., hence Q(A) = E[X 1_A] = 0. The converse need not hold: assume P is the uniform distribution on [0, 1] and let X(x) = 2 · 1_{[0,1/2]}(x). The corresponding measure Q assigns measure zero to (1/2, 1], however P((1/2, 1]) = 1/2 ≠ 0.
9.7 Let’s prove this first for simple functions, i.e. let Y be of the form
Y = ∑_{i=1}^n c_i 1_{A_i}
for disjoint A_1, . . . , A_n. Then
E_Q[Y] = ∑_{i=1}^n c_i Q(A_i) = ∑_{i=1}^n c_i E[X 1_{A_i}] = E_P[XY].
For non-negative Y we take a sequence of simple functions Y_n ↑ Y. Then
E_Q[Y] = lim_{n→∞} E_Q[Y_n] = lim_{n→∞} E_P[XY_n] = E_P[XY],
where the last equality follows from the monotone convergence theorem. For general Y ∈ L¹(Q) we have that E_Q[Y] = E_Q[Y⁺] − E_Q[Y⁻] = E_P[(XY)⁺] − E_P[(XY)⁻] = E_P[XY].
9.8 a. Note that (1/X)X = 1 a.s. since P(X > 0) = 1. By Problem 9.7, E_Q[1/X] = E_P[(1/X)X] = 1. So 1/X is Q-integrable.
b. R : A → R defined by R(A) = E_Q[(1/X) 1_A] is a probability measure since 1/X is non-negative and E_Q[1/X] = 1. Also R(A) = E_Q[(1/X) 1_A] = E_P[(1/X)X 1_A] = P(A). So R = P.
9.9 Since P(A) = E_Q[(1/X) 1_A], we have that Q(A) = 0 ⇒ P(A) = 0. Now, combining the results of the previous problems, we can easily observe that Q(A) = 0 ⇔ P(A) = 0 iff P(X > 0) = 1.
9.17. Let
g(x) = ((x − µ)b + σ)² / (σ²(1 + b²)²).
Observe that {X ≥ µ + bσ} ⊂ {g(X) ≥ 1}. So
P({X ≥ µ + bσ}) ≤ P({g(X) ≥ 1}) ≤ E[g(X)]/1,
where the last inequality follows from Markov’s inequality. Since E[g(X)] = σ²(1 + b²)/(σ²(1 + b²)²) we get that
P({X ≥ µ + bσ}) ≤ 1/(1 + b²).
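A quick Monte Carlo sketch of this one-sided bound; the exponential(1) distribution (for which µ = σ = 1) is an arbitrary test case:

```python
import random

random.seed(1)
n = 200_000
mu = sigma = 1.0
for b in (0.5, 1.0, 2.0):
    hits = sum(1 for _ in range(n)
               if random.expovariate(1.0) >= mu + b * sigma)
    print(f"b={b}:  P_hat={hits/n:.4f}  bound 1/(1+b^2)={1/(1+b*b):.4f}")
```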
9.19
x P({X > x}) ≤ E[X 1_{{X>x}}] = ∫_x^∞ (z/√(2π)) e^{−z²/2} dz = e^{−x²/2}/√(2π).
Hence
P({X > x}) ≤ e^{−x²/2}/(x√(2π)).
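A numerical comparison of this bound with the exact tail, which can be evaluated through the complementary error function:

```python
from math import erfc, exp, pi, sqrt

def normal_tail(x):
    """Exact P(X > x) for X ~ N(0,1), via the complementary error function."""
    return 0.5 * erfc(x / sqrt(2))

def tail_bound(x):
    """The bound e^{-x^2/2} / (x sqrt(2 pi)) derived above."""
    return exp(-x * x / 2) / (x * sqrt(2 * pi))

for x in (0.5, 1.0, 2.0, 3.0, 4.0):
    print(f"x={x}:  P(X>x)={normal_tail(x):.6f}  bound={tail_bound(x):.6f}")
```

The bound is loose for small x but becomes sharp as x grows.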
9.21 h(t + s) = P({X > t + s}) = P({X > t + s, X > s}) = P({X > t + s} | {X > s}) P({X > s}) = h(t)h(s) for all t, s > 0. Note that this gives h(1/n) = h(1)^{1/n} and h(m/n) = h(1)^{m/n}. So for all rational r we have that h(r) = exp(log(h(1)) r). Since h is right continuous this gives h(x) = exp(log(h(1)) x) for all x > 0. Hence X has the exponential distribution with parameter −log h(1).
Solutions to selected problems of Chapter 10
10.5 Let P be the uniform distribution on [−1/2, 1/2]. Let X(x) = x 1_{[−1/4,1/4]}(x) and Y(x) = x 1_{[−1/4,1/4]^c}(x). It is clear that XY = 0, hence E[XY] = 0. By symmetry it is also true that E[X] = 0. So E[XY] = E[X]E[Y]; however it is clear that X and Y are not independent.
10.6 a. P(min(X, Y) > i) = P(X > i)P(Y > i) = (1/2^i)(1/2^i) = 1/4^i. So P(min(X, Y) ≤ i) = 1 − P(min(X, Y) > i) = 1 − 1/4^i.
b. P(X = Y) = ∑_{i=1}^∞ P(X = i)P(Y = i) = ∑_{i=1}^∞ (1/2^i)(1/2^i) = 1/(1 − 1/4) − 1 = 1/3.
c. P(Y > X) = ∑_{i=1}^∞ P(Y > i)P(X = i) = ∑_{i=1}^∞ (1/2^i)(1/2^i) = 1/3.
d. P(X divides Y) = ∑_{i=1}^∞ ∑_{k=1}^∞ (1/2^i)(1/2^{ki}) = ∑_{i=1}^∞ (1/2^i) · 1/(2^i − 1).
e. P(X ≥ kY) = ∑_{i=1}^∞ P(X ≥ ki)P(Y = i) = ∑_{i=1}^∞ (1/2^i)(1/2^{ki−1}) = 2/(2^{k+1} − 1).
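A Monte Carlo sketch of parts b–d, sampling X by counting fair-coin flips so that P(X = i) = 2^{−i}:

```python
import random

def geom():
    """P(X = i) = 2^{-i}, i = 1, 2, ...: count fair-coin flips until tails."""
    i = 1
    while random.random() < 0.5:
        i += 1
    return i

random.seed(2)
n = 500_000
eq = gt = div = 0
for _ in range(n):
    x, y = geom(), geom()
    eq += x == y
    gt += y > x
    div += y % x == 0
print("P(X = Y)        ~", eq / n, "  exact 1/3")
print("P(Y > X)        ~", gt / n, "  exact 1/3")
print("P(X divides Y)  ~", div / n, "  exact:",
      sum(2.0**-i / (2**i - 1) for i in range(1, 60)))
```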
Solutions to selected problems of Chapter 11
11.11. Since P{X > 0} = 1 we have that P{Y < 1} = 1. So F_Y(y) = 1 for y ≥ 1. Also P{Y ≤ 0} = 0, hence F_Y(y) = 0 for y ≤ 0. For 0 < y < 1, P{Y > y} = P{X < (1 − y)/y} = F_X((1 − y)/y). So
F_Y(y) = 1 − ∫_0^{(1−y)/y} f_X(x) dx = 1 − ∫_y^1 (1/z²) f_X((1 − z)/z) dz
by the change of variables x = (1 − z)/z. Hence
f_Y(y) =
  0                        −∞ < y ≤ 0
  (1/y²) f_X((1 − y)/y)    0 < y ≤ 1
  0                        1 < y < ∞
11.15 Let G(u) = inf{x : F(x) ≥ u}. We would like to show {u : G(u) > y} = {u : F(y) < u}. Let u be such that G(u) > y. Then F(y) < u by the definition of G. Hence {u : G(u) > y} ⊂ {u : F(y) < u}. Now let u be such that F(y) < u. Then y < x for any x such that F(x) ≥ u, by the monotonicity of F. By right continuity and monotonicity of F we have that F(G(u)) = inf_{F(x)≥u} F(x) ≥ u, hence by the previous statement y < G(u). So {u : G(u) > y} = {u : F(y) < u}. Now P{G(U) > y} = P{U > F(y)} = 1 − F(y), so G(U) has the desired distribution. Remark: We only assumed the right continuity of F.
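The construction is easy to sketch numerically: compute G by bisection and feed it uniform variates. The bisection helper, its bracketing interval, and the exponential(1) test case below are illustrative choices, not part of the problem.

```python
import math
import random

def G(F, u, lo=0.0, hi=50.0, iters=60):
    """G(u) = inf{x : F(x) >= u}, located by bisection. Assumes F is
    nondecreasing and right continuous and the answer lies in [lo, hi]."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if F(mid) >= u:
            hi = mid
        else:
            lo = mid
    return hi

# With F the exponential(1) distribution function, G(U) for U uniform
# on (0,1) should again be exponential(1).
F = lambda x: 1.0 - math.exp(-x) if x >= 0 else 0.0
random.seed(3)
sample = [G(F, random.random()) for _ in range(100_000)]
print("sample mean:", sum(sample) / len(sample), "(exponential(1) mean is 1)")
```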
Solutions to selected problems of Chapter 12
12.6 Let Z = (1/σ_Y)Y − (ρ_{XY}/σ_X)X. Then
σ_Z² = (1/σ_Y²)σ_Y² + (ρ_{XY}²/σ_X²)σ_X² − 2(ρ_{XY}/(σ_X σ_Y)) Cov(X, Y) = 1 − ρ_{XY}².
Note that ρ_{XY} = ±1 implies σ_Z² = 0, which implies Z = c a.s. for some constant c. In this case X = (σ_X/(σ_Y ρ_{XY}))(Y − σ_Y c), hence X is an affine function of Y.
12.11 Consider the mapping g(x, y) = (√(x² + y²), arctan(x/y)). Let S_0 = {(x, y) : y = 0}, S_1 = {(x, y) : y > 0}, S_2 = {(x, y) : y < 0}. Note that ∪_{i=0}^2 S_i = R² and m₂(S_0) = 0. Also, for i = 1, 2, g : S_i → R² is injective and continuously differentiable, with inverses given by g_1^{−1}(z, w) = (z sin w, z cos w) and g_2^{−1}(z, w) = (−z sin w, −z cos w). In both cases we have that |J_{g_i^{−1}}(z, w)| = z, hence by Corollary 12.1 the density of (Z, W) is given by
f_{Z,W}(z, w) = ((1/(2πσ²)) e^{−z²/(2σ²)} z + (1/(2πσ²)) e^{−z²/(2σ²)} z) 1_{(−π/2, π/2)}(w) 1_{(0,∞)}(z)
= (1/π) 1_{(−π/2, π/2)}(w) · (z/σ²) e^{−z²/(2σ²)} 1_{(0,∞)}(z)
as desired.
12.12 Let P be the set of all permutations of {1, . . . , n}. For any π ∈ P let X^π be the corresponding permutation of X, i.e. (X^π)_k = X_{π_k}. Observe that
P(X^π_1 ≤ x_1, . . . , X^π_n ≤ x_n) = F(x_1) . . . F(x_n),
hence the laws of X^π and X coincide on a π-system generating B^n, therefore they are equal. Now let Ω_0 = {(x_1, . . . , x_n) ∈ R^n : x_1 < x_2 < . . . < x_n}. Since the X_i are i.i.d. and have continuous distribution, P_X(Ω_0) = 1. Observe that
P{Y_1 ≤ y_1, . . . , Y_n ≤ y_n} = P(∪_{π∈P} {X^π_1 ≤ y_1, . . . , X^π_n ≤ y_n} ∩ Ω_0).
Note that the events {X^π_1 ≤ y_1, . . . , X^π_n ≤ y_n} ∩ Ω_0, π ∈ P, are disjoint and P(Ω_0) = 1, hence
P{Y_1 ≤ y_1, . . . , Y_n ≤ y_n} = ∑_{π∈P} P{X^π_1 ≤ y_1, . . . , X^π_n ≤ y_n} = n! F(y_1) . . . F(y_n)
for y_1 ≤ . . . ≤ y_n. Hence
f_Y(y_1, . . . , y_n) = n! f(y_1) . . . f(y_n) if y_1 ≤ . . . ≤ y_n, and 0 otherwise.
Solutions to selected problems of Chapter 14
14.7 ϕ_X(u) is real valued iff ϕ_X(u) equals its complex conjugate, and the conjugate of ϕ_X(u) is E[e^{−iuX}] = ϕ_{−X}(u). By the uniqueness theorem, ϕ_X = ϕ_{−X} iff F_X = F_{−X}. Hence ϕ_X(u) is real valued iff F_X = F_{−X}.
14.9 We use induction. It is clear that the statement is true for n = 1. Put Y_n = ∑_{i=1}^n X_i and assume that E[(Y_n)³] = ∑_{i=1}^n E[(X_i)³]. Note that this implies (d³/dx³)ϕ_{Y_n}(0) = −i ∑_{i=1}^n E[(X_i)³]. Now E[(Y_{n+1})³] = E[(X_{n+1} + Y_n)³] = i (d³/dx³)(ϕ_{X_{n+1}} ϕ_{Y_n})(0) by the independence of X_{n+1} and Y_n. Note that
(d³/dx³)(ϕ_{X_{n+1}} ϕ_{Y_n})(0) = (d³/dx³)ϕ_{X_{n+1}}(0) · ϕ_{Y_n}(0) + 3 (d²/dx²)ϕ_{X_{n+1}}(0) · (d/dx)ϕ_{Y_n}(0) + 3 (d/dx)ϕ_{X_{n+1}}(0) · (d²/dx²)ϕ_{Y_n}(0) + ϕ_{X_{n+1}}(0) · (d³/dx³)ϕ_{Y_n}(0)
= (d³/dx³)ϕ_{X_{n+1}}(0) + (d³/dx³)ϕ_{Y_n}(0)
= −i (E[(X_{n+1})³] + ∑_{i=1}^n E[(X_i)³]),
where we used the facts that (d/dx)ϕ_{X_{n+1}}(0) = iE(X_{n+1}) = 0 and (d/dx)ϕ_{Y_n}(0) = iE(Y_n) = 0. So E[(Y_{n+1})³] = ∑_{i=1}^{n+1} E[(X_i)³], hence the induction is complete.
14.10 It is clear that 0 ≤ ν(A) ≤ 1 since
0 ≤ ∑_{j=1}^n λ_j µ_j(A) ≤ ∑_{j=1}^n λ_j = 1.
Also, for A_i disjoint,
ν(∪_{i=1}^∞ A_i) = ∑_{j=1}^n λ_j µ_j(∪_{i=1}^∞ A_i) = ∑_{j=1}^n λ_j ∑_{i=1}^∞ µ_j(A_i) = ∑_{i=1}^∞ ∑_{j=1}^n λ_j µ_j(A_i) = ∑_{i=1}^∞ ν(A_i).
Hence ν is countably additive and therefore a probability measure. Note that ∫ 1_A(x) ν(dx) = ∑_{j=1}^n λ_j ∫ 1_A(x) µ_j(dx) by the definition of ν. Now by linearity and the monotone convergence theorem, for a non-negative Borel function f we have that ∫ f(x) ν(dx) = ∑_{j=1}^n λ_j ∫ f(x) µ_j(dx). Extending this to integrable f, we get that ν̂(u) = ∫ e^{iux} ν(dx) = ∑_{j=1}^n λ_j ∫ e^{iux} µ_j(dx) = ∑_{j=1}^n λ_j µ̂_j(u).
14.11 Let ν be the double exponential distribution, µ_1 be the distribution of Y and µ_2 be the distribution of −Y, where Y is an exponential r.v. with parameter λ = 1. Then we have that
ν(A) = (1/2) ∫_{A∩(0,∞)} e^{−x} dx + (1/2) ∫_{A∩(−∞,0)} e^{x} dx = (1/2) µ_1(A) + (1/2) µ_2(A).
By the previous exercise we have that
ν̂(u) = (1/2) µ̂_1(u) + (1/2) µ̂_2(u) = (1/2)(1/(1 − iu) + 1/(1 + iu)) = 1/(1 + u²).
14.15. Note that E{X^n} = (−i)^n (d^n/dx^n)ϕ_X(0). Since X ∼ N(0, 1), ϕ_X(s) = e^{−s²/2}. Note that we can get the derivatives of any order of e^{−s²/2} at 0 simply by taking the Taylor expansion of e^x:
e^{−s²/2} = ∑_{n=0}^∞ (−s²/2)^n/n! = ∑_{n=0}^∞ (1/(2n)!) · ((−i)^{2n}(2n)!/(2^n n!)) · s^{2n},
hence E{X^n} = (−i)^n (d^n/dx^n)ϕ_X(0) = 0 for n odd. For n = 2k,
E{X^{2k}} = (−i)^{2k} (d^{2k}/dx^{2k})ϕ_X(0) = (−i)^{2k} · (−i)^{2k}(2k)!/(2^k k!) = (2k)!/(2^k k!)
as desired.
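A Monte Carlo check of the even moments (the odd moments vanish by symmetry); the sample size is an arbitrary compromise between speed and sampling noise:

```python
import random
from math import factorial

random.seed(4)
n = 500_000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
for k in (1, 2, 3):
    moment = sum(x ** (2 * k) for x in xs) / n
    exact = factorial(2 * k) / (2**k * factorial(k))
    print(f"2k={2*k}:  estimate={moment:.3f}  exact (2k)!/(2^k k!)={exact}")
```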
Solutions to selected problems of Chapter 15
15.1 a. E{x̄} = (1/n) ∑_{i=1}^n E{X_i} = µ.
b. Since X_1, . . . , X_n are independent, Var(x̄) = (1/n²) ∑_{i=1}^n Var(X_i) = σ²/n.
c. Note that S² = (1/n) ∑_{i=1}^n (X_i)² − x̄². Hence E(S²) = (1/n) ∑_{i=1}^n (σ² + µ²) − (σ²/n + µ²) = ((n − 1)/n) σ².
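A quick simulation of the bias in part c, computing S² as (1/n)∑X_i² − x̄² exactly as above; the parameters µ = 2, σ = 3, n = 5 are arbitrary illustrative choices:

```python
import random

random.seed(5)
mu, sigma, n, reps = 2.0, 3.0, 5, 100_000
total = 0.0
for _ in range(reps):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    total += sum(x * x for x in xs) / n - xbar * xbar
print("average S^2:", total / reps, "  theory ((n-1)/n) sigma^2:",
      (n - 1) / n * sigma**2)
```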
15.17 Note that ϕ_Y(u) = ∏_{i=1}^α ϕ_{X_i}(u) = (β/(β − iu))^α, which is the characteristic function of a Gamma(α, β) random variable. Hence by the uniqueness of characteristic functions, Y is Gamma(α, β).
Solutions to selected problems of Chapter 16
16.3 P({Y ≤ y}) = P({X ≤ y} ∩ {Z = 1}) + P({−X ≤ y} ∩ {Z = −1}) = (1/2)Φ(y) + (1/2)(1 − Φ(−y)) = Φ(y), since Z and X are independent and the standard normal distribution is symmetric. So Y is normal. Note that P(X + Y = 0) = P(Z = −1) = 1/2, hence X + Y cannot be normal. So (X, Y) is not Gaussian even though both X and Y are normal.
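A simulation sketch of this example: the margin Y behaves like a standard normal, while X + Y has an atom at 0 of mass 1/2.

```python
import random

random.seed(6)
n = 200_000
zeros = below1 = 0
for _ in range(n):
    x = random.gauss(0.0, 1.0)
    z = 1 if random.random() < 0.5 else -1
    y = z * x
    zeros += (x + y == 0)   # exact cancellation whenever z = -1
    below1 += (y <= 1.0)
print("P(X+Y = 0) ~", zeros / n, "(a normal r.v. has no atom)")
print("P(Y <= 1)  ~", below1 / n, "(Phi(1) ~ 0.8413)")
```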
16.4 Observe that
Q = σ_X σ_Y [ σ_X/σ_Y    ρ
              ρ          σ_Y/σ_X ].
So det(Q) = σ_X² σ_Y² (1 − ρ²), and det(Q) = 0 iff ρ = ±1. By Corollary 16.2 the joint density of (X, Y) exists iff −1 < ρ < 1. (By Cauchy-Schwarz we know that −1 ≤ ρ ≤ 1.) Note that
Q^{−1} = (1/(σ_X σ_Y (1 − ρ²))) [ σ_Y/σ_X    −ρ
                                  −ρ         σ_X/σ_Y ].
Substituting this in formula 16.5 we get that
f_{(X,Y)}(x, y) = (1/(2π σ_X σ_Y √(1 − ρ²))) exp{ −(1/(2(1 − ρ²))) [ ((x − µ_X)/σ_X)² − 2ρ (x − µ_X)(y − µ_Y)/(σ_X σ_Y) + ((y − µ_Y)/σ_Y)² ] }.
16.6 By Theorem 16.2 there exists a multivariate normal r.v. Y with E(Y) = 0 and a diagonal covariance matrix Λ such that X − µ = AY, where A is an orthogonal matrix. Since Q = AΛA* and det(Q) > 0, the diagonal entries of Λ are strictly positive, hence we can define B = Λ^{−1/2}A*. Now the covariance matrix Q̃ of B(X − µ) is given by
Q̃ = Λ^{−1/2} A* A Λ A* A Λ^{−1/2} = I.
So B(X − µ) is standard normal.
16.17 We know from Exercise 16.6 that if B = Λ^{−1/2}A*, where A is the orthogonal matrix such that Q = AΛA*, then B(X − µ) is standard normal. Note that B*B = AΛ^{−1}A* = Q^{−1}, which gives (X − µ)* Q^{−1} (X − µ) = (X − µ)* B*B (X − µ) = |B(X − µ)|², the sum of squares of n independent standard normals, which has the chi-square distribution with n degrees of freedom.
Solutions to selected problems of Chapter 17
17.1 Let n(m) and j(m) be such that Y_m = n(m)^{1/p} Z_{n(m),j(m)}. This gives P(|Y_m| > 0) = 1/n(m) → 0 as m → ∞. So Y_m converges to 0 in probability. However E[|Y_m|^p] = E[n(m) Z_{n(m),j(m)}] = 1 for all m. So Y_m does not converge to 0 in L^p.
17.2 Let X_n = 1/n. It is clear that X_n converges to 0 in probability. If f(x) = 1_{{0}}(x), then P(|f(X_n) − f(0)| > ε) = 1 for every 0 < ε < 1, so f(X_n) does not converge to f(0) in probability.
17.3 First observe that E(S_n) = ∑_{i=1}^n E(X_i) = 0 and that Var(S_n) = ∑_{i=1}^n Var(X_i) = n, since E(X_i) = 0 and Var(X_i) = E(X_i²) = 1. By Chebyshev’s inequality, P(|S_n/n| ≥ ε) = P(|S_n| ≥ nε) ≤ Var(S_n)/(n²ε²) = n/(n²ε²) → 0 as n → ∞. Hence S_n/n converges to 0 in probability.
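A small simulation comparing the empirical value of P(|S_n/n| ≥ ε) with the Chebyshev bound 1/(nε²), taking X_i = ±1 with probability 1/2 each (so E X_i = 0 and Var X_i = 1); note the bound exceeds 1, hence is vacuous, for small n:

```python
import random

random.seed(7)
eps, reps = 0.1, 5_000
for n in (10, 100, 1000):
    exceed = 0
    for _ in range(reps):
        s = sum(1 if random.random() < 0.5 else -1 for _ in range(n))
        exceed += abs(s / n) >= eps
    print(f"n={n:5d}  P(|S_n/n| >= {eps}) ~ {exceed/reps:.4f}"
          f"  bound {1/(n*eps*eps):.3f}")
```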
17.4 Note that Chebyshev’s inequality gives P(|S_{n²}/n²| ≥ ε) ≤ 1/(n²ε²). Since ∑_{n=1}^∞ 1/(n²ε²) < ∞, by the Borel–Cantelli theorem P(lim sup_n {|S_{n²}/n²| ≥ ε}) = 0. Let Ω_0 = (∪_{m=1}^∞ lim sup_n {|S_{n²}/n²| ≥ 1/m})^c. Then P(Ω_0) = 1. Now let’s pick ω ∈ Ω_0. For any ε there exists m s.t. 1/m ≤ ε and ω ∈ (lim sup_n {|S_{n²}/n²| ≥ 1/m})^c. Hence there are only finitely many n s.t. |S_{n²}(ω)/n²| ≥ 1/m, which implies |S_{n²}(ω)/n²| → 0. Therefore S_{n²}/n² → 0 almost surely.