Created by Max Schnidman
Question | Answer |
Convergence in Probability |
$\lim_{n\to\infty} P(|X_n - X| > \epsilon) = 0$ for every $\epsilon > 0$ |
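A minimal simulation sketch of this definition, assuming NumPy is available (the Uniform(0,1) draws, $\epsilon = 0.05$, and the replication count are illustrative choices): with $X_n$ the mean of $n$ Uniform(0,1) draws, which converges in probability to $0.5$, the estimated $P(|X_n - 0.5| > \epsilon)$ shrinks as $n$ grows.

```python
# Sketch: estimate P(|X_n - 0.5| > eps) for X_n = mean of n Uniform(0,1) draws.
# X_n converges in probability to 0.5, so the estimate should fall toward 0.
import numpy as np

rng = np.random.default_rng(0)
eps = 0.05
for n in [10, 100, 1_000, 10_000]:
    # 1,000 Monte Carlo replications of X_n
    xbar = rng.uniform(size=(1_000, n)).mean(axis=1)
    prob = np.mean(np.abs(xbar - 0.5) > eps)
    print(f"n={n:6d}  P(|X_n - 0.5| > {eps}) ~ {prob:.3f}")
```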
Almost Sure Convergence |
$P(\{\omega : X_n(\omega) \not\to X(\omega)\}) = 0$
Equivalently, $Y_n = \sup_{k \ge n} |X_k - X| \xrightarrow{p} 0$,
i.e. $\lim_{n\to\infty} P(\sup_{k \ge n} |X_k - X| > \epsilon) = 0$ for every $\epsilon > 0$ |
Direction of Convergences |
a.s. $\Rightarrow$ P $\Rightarrow$ D
$L^r$ (mean of order $r$) $\Rightarrow$ P $\Rightarrow$ D |
Additional A.S. Convergences |
$\sum_{n=1}^{\infty} P(|X_n - X| > \epsilon) < \infty \implies X_n \xrightarrow{a.s.} X$
$X_n \xrightarrow{p} X \implies \exists$ a subsequence $X_{n_k}$ s.t. $X_{n_k} \xrightarrow{a.s.} X$ |
Convergence in mean of order r |
$\lim_{n\to\infty} E[|X_n - X|^r] = 0$ |
Cauchy Sequence in Probability |
$\lim_{n,m\to\infty} P(|X_n - X_m| > \epsilon) = 0$ |
Borel-Cantelli Lemma |
$A = \bigcap_{n=1}^{\infty} \bigcup_{k \ge n} A_k$
$\sum_{n=1}^{\infty} P(A_n) < \infty \implies P(A) = 0$ |
Continuous Mapping Theorem |
For continuous $g$: $X_n \to X \implies g(X_n) \to g(X)$, in probability or a.s. |
Convergence in Distribution |
$\int f(x)\,dF_n(x) \to \int f(x)\,dF(x)$ for all bounded continuous $f$ |
Class of Generalized Distributions |
$\lim_{x_n \to +\infty} G_{X_1 \ldots X_n}(x_1, \ldots, x_n) = G_{X_1 \ldots X_{n-1}}(x_1, \ldots, x_{n-1})$
$\lim_{x_n \to -\infty} G_{X_1 \ldots X_n}(x_1, \ldots, x_n) = 0$
$G(-\infty) \ge 0$
$G(+\infty) \le 1$ |
Helly-Bray Theorem | The class of generalized distributions is compact w.r.t. weak (distributional) convergence. |
Asymptotic Tightness |
$\forall \epsilon > 0\ \exists N$ s.t. $\inf_n (F_n(N) - F_n(-N)) > 1 - \epsilon$ |
Khinchin's Law of Large Numbers |
Suppose $\{X_n\}_{n=1}^{\infty}$ is an i.i.d. sequence of random variables with $E(X_n) = a$, and let $S_n = \sum_{k=1}^{n} X_k$.
Then $\frac{S_n}{n} \xrightarrow{p} a$ |
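A one-path sketch of the statement, assuming NumPy (the Exponential distribution and mean $a = 2$ are illustrative choices): the running mean $S_n/n$ settles near $a$.

```python
# Sketch: one sample path of S_n/n for i.i.d. Exponential draws with mean a = 2.
import numpy as np

rng = np.random.default_rng(1)
a = 2.0
x = rng.exponential(scale=a, size=100_000)
running_mean = np.cumsum(x) / np.arange(1, x.size + 1)  # S_n / n for each n
for n in [10, 100, 1_000, 10_000, 100_000]:
    print(f"n={n:6d}  S_n/n = {running_mean[n - 1]:.4f}")
```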
Central Limit Theorem |
If $0 < \sigma^2 < \infty$, then
$\lim_{n\to\infty} \sup_x |P(Z_n < x) - \Phi(x)| = 0$, where
$Z_n = \frac{\sqrt{n}\,(S_n/n - a)}{\sigma}$ |
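A rough numeric check of the sup-distance, assuming NumPy and SciPy (the Uniform(0,1) population and replication counts are arbitrary). The empirical CDF of $Z_n$ is compared with $\Phi$ at the sample points only, so the reported gap also contains Monte Carlo noise.

```python
# Sketch: approximate sup_x |P(Z_n < x) - Phi(x)| via the empirical CDF of
# Z_n = sqrt(n) * (S_n/n - a) / sigma for Uniform(0,1) summands.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
a, sigma = 0.5, np.sqrt(1 / 12)  # mean and sd of Uniform(0,1)
for n in [5, 50, 500]:
    z = np.sqrt(n) * (rng.uniform(size=(5_000, n)).mean(axis=1) - a) / sigma
    z.sort()
    ecdf = np.arange(1, z.size + 1) / z.size
    gap = np.max(np.abs(ecdf - norm.cdf(z)))  # approximate sup-distance
    print(f"n={n:4d}  sup-gap ~ {gap:.3f}")
```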
Convergence Properties |
$X_n \xrightarrow{p} c \iff X_n \xrightarrow{d} c$ (for a constant $c$)
$X_n \xrightarrow{d} X$ and $|X_n - Y_n| \xrightarrow{p} 0 \implies Y_n \xrightarrow{d} X$
$X_n \xrightarrow{d} X$ and $Y_n \xrightarrow{p} c \implies (X_n, Y_n) \xrightarrow{d} (X, c)$ |
Slutsky Theorem |
$X_n \xrightarrow{d} X$, $Y_n \xrightarrow{d} c \implies$
1. $X_n + Y_n \xrightarrow{d} X + c$
2. $X_n Y_n \xrightarrow{d} cX$
3. $X_n / Y_n \xrightarrow{d} X/c$ (provided $c \ne 0$) |
Lindeberg-Feller CLT Condition |
$\sum_{i=1}^{k_n} E\left[\|Y_{n,i}\|^2 \mathbf{1}\{\|Y_{n,i}\| > \epsilon\}\right] \to 0$
Equivalently, $\lim_{n\to\infty} \frac{1}{s_n^2} \sum_{k=1}^{n} E\left[(X_k - \mu_k)^2 \cdot \mathbf{1}\{|X_k - \mu_k| > \epsilon s_n\}\right] = 0$ |
Delta Method |
If $X_n \xrightarrow{d} X$, $b_n \to 0$, and $g$ is differentiable at $a$,
$\implies \frac{g(a + b_n X_n) - g(a)}{b_n} \xrightarrow{d} g'(a)\, X$ |
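A sketch of the delta method in its common statistical form, $\sqrt{n}\,(g(\bar{X}_n) - g(a)) \xrightarrow{d} N(0, g'(a)^2 \sigma^2)$, assuming NumPy; the choice $g(x) = e^x$ and the Uniform(0,1) draws are illustrative.

```python
# Sketch: compare the simulated sd of sqrt(n) * (g(Xbar_n) - g(a)) with the
# delta-method prediction |g'(a)| * sigma, for g(x) = exp(x).
import numpy as np

rng = np.random.default_rng(3)
a, sigma2 = 0.5, 1 / 12          # mean and variance of Uniform(0,1)
n = 1_000
xbar = rng.uniform(size=(10_000, n)).mean(axis=1)
lhs = np.sqrt(n) * (np.exp(xbar) - np.exp(a))
print("simulated sd   :", lhs.std())
print("delta-method sd:", np.exp(a) * np.sqrt(sigma2))  # |g'(a)| * sigma
```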
Extremum Estimator |
$\theta_0 = \arg\max_{\theta \in \Theta} Q(\theta)$
$Q(\theta) = E_{\theta_0}[g(Y, \theta)] = \int g(y, \theta)\, F(dy, \theta_0)$ |
Uniform Convergence |
$\Pr\left(\lim_{T\to\infty} \sup_{\theta \in \Theta} Q_T(\theta) = 0\right) = 1 \implies Q_T(\theta) \xrightarrow{a.s.} 0$ uniformly
$\lim_{T\to\infty} \Pr\left(\sup_{\theta \in \Theta} Q_T(\theta) < \epsilon\right) = 1 \implies Q_T(\theta) \xrightarrow{p} 0$ uniformly |
Assumptions for Extremum Estimation (Convergence in Probability) |
1. $\Theta$ is compact
2. $\hat{Q}_T(\theta)$ is continuous in $\theta$
3. $\hat{Q}_T(\theta) \xrightarrow{p} Q(\theta)$ uniformly
4. Identification (unique global maximum) |
Asymptotic Normality |
1. $\frac{\partial^2 \hat{Q}}{\partial\theta\,\partial\theta'}$ exists
2. $\frac{\partial^2 \hat{Q}(\theta_T)}{\partial\theta\,\partial\theta'} \xrightarrow{p} A(\theta_0)$
3. $\sqrt{T}\, \frac{\partial \hat{Q}(\theta_0)}{\partial\theta} \xrightarrow{d} N(0, B(\theta_0))$
$\implies \sqrt{T}(\hat{\theta} - \theta_0) \xrightarrow{d} N\left(0,\, A(\theta_0)^{-1} B(\theta_0) A(\theta_0)^{-1\prime}\right)$ |
Assumptions for MLE |
1. $Y \sim F(\cdot, \theta_0)$
2. $y_t$ i.i.d.
3. $\theta \in \Theta \subset \mathbb{R}^p$
4. Distribution is dictated by the model |
MLE Objective Function |
$L(\theta) = E_{\theta_0}[\log f(Y, \theta)]$ |
Identification |
$\Pr(\ln f(Y, \theta_0) \ne \ln f(Y, \theta^*)) > 0$ for all $\theta^* \ne \theta_0$ |
Score |
$s(\theta, Y) = \frac{\partial \ln f(Y, \theta)}{\partial\theta}$
Gradient of the log-likelihood.
Under regularity conditions, it has expectation 0 at $\theta_0$. |
Information |
$I(\theta) = \mathrm{Var}(s(\theta, Y))$
Unidentified models have a singular information matrix.
$I(\theta) = -E\left[\frac{\partial^2 \ln f(Y, \theta)}{\partial\theta\,\partial\theta'}\right]$ if regularity conditions are satisfied. |
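A numerical sanity check of these identities for a Bernoulli($p$) model, assuming NumPy ($p = 0.3$ is an arbitrary choice): the score averages to roughly 0, and its variance matches $-E[\text{Hessian}] = 1/(p(1-p))$.

```python
# Sketch: score and information for Bernoulli(p), checked by simulation.
# log f(x, p) = x*log(p) + (1 - x)*log(1 - p)
import numpy as np

rng = np.random.default_rng(4)
p = 0.3
x = rng.binomial(1, p, size=1_000_000).astype(float)
score = x / p - (1 - x) / (1 - p)               # d/dp log f(x, p)
hessian = -x / p**2 - (1 - x) / (1 - p) ** 2    # d^2/dp^2 log f(x, p)
print("E[score]    ~", score.mean())            # ~ 0
print("Var(score)  ~", score.var())             # ~ 4.76 for p = 0.3
print("-E[Hessian] ~", -hessian.mean())         # same value
print("1/(p(1-p))  =", 1 / (p * (1 - p)))
```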
Cramér-Rao Lower Bound |
$\mathrm{Var}(\sqrt{T}(\hat{\theta} - \theta_0)) \ge I_\theta^{-1}$ |
Asymptotic Efficiency |
$\lim_{T\to\infty} \mathrm{Var}(\hat{\theta}_T) = I_\theta^{-1}$ |
Type I Error | Rejecting the null when it is true |
Type II Error | Not rejecting the null when it is false |
Significance Level |
$P_\theta(\delta(X) = d_1) = P_\theta(X \in S_1) \le \alpha \quad \forall \theta \in \Theta_H$
$0 < \alpha < 1$ |
Size of the Test |
$\sup_{\theta \in \Theta_H} P_\theta(X \in S_1)$
with fixed $\alpha$ |
Power Function |
$\beta(\theta) = P_\theta(\delta(X) = d_1)$ |
Test Optimization |
$\max_{\phi(\cdot)} \beta_\phi(\theta) = E_\theta[\phi(X)]$
s.t. $E_\theta[\phi(X)] \le \alpha$ for $\theta \in \Theta_H$ |
Simple Distributions | Class of distributions containing a single distribution |
Composite Distributions | Class of distributions containing multiple distributions |
Likelihood Ratio Test |
$\frac{P_1(x)}{P_0(x)}$; reject $H_0$ for large values of the ratio |
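A sketch for two simple hypotheses, assuming SciPy; the hypotheses $H_0\!: N(0,1)$ vs $H_1\!: N(1,1)$ and the data points are illustrative. Rejecting for large ratios gives the Neyman-Pearson most powerful test.

```python
# Sketch: likelihood ratio P1(x)/P0(x) for H0: N(0,1) vs H1: N(1,1).
import numpy as np
from scipy.stats import norm

x = np.array([0.2, 1.1, -0.4, 1.8])
ratio = norm.pdf(x, loc=1) / norm.pdf(x, loc=0)
print(ratio)  # larger values favor H1
```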
P-Value |
Smallest significance level at which the hypothesis would be rejected, given the observation:
$\hat{p} = \hat{p}(x) = \inf\{\alpha : x \in S_\alpha\}$ |
Normal PDF |
$\frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x - \mu)^2}{2\sigma^2}}$ |
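The density transcribed directly and cross-checked against SciPy; assumes NumPy and SciPy, and `normal_pdf` is an illustrative helper name.

```python
# Sketch: the normal density written out from the formula above.
import numpy as np
from scipy.stats import norm

def normal_pdf(x, mu=0.0, sigma=1.0):
    return np.exp(-((x - mu) ** 2) / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

print(normal_pdf(1.5, mu=1.0, sigma=2.0))
print(norm.pdf(1.5, loc=1.0, scale=2.0))  # should match
```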
Bernoulli PMF |
$p$ if $x = 1$
$q = 1 - p$ if $x = 0$ |
Binomial PMF |
$\binom{n}{k} p^k q^{n-k}$
$\binom{n}{k} = \frac{n!}{k!(n-k)!}$ |
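A direct transcription of the formula using the standard library's `math.comb` for $\binom{n}{k}$; `binom_pmf` is an illustrative helper name.

```python
# Sketch: binomial PMF from the formula above.
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

print(binom_pmf(3, 10, 0.5))  # 0.1171875
```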
Uniform PDF |
$\frac{1}{b-a}$ on the support $[a, b]$
$0$ otherwise |
Poisson PMF |
$\frac{\lambda^k e^{-\lambda}}{k!}$ |
Cauchy PDF |
$\frac{1}{\pi\gamma\left[1 + \left(\frac{x - x_0}{\gamma}\right)^2\right]}$ |
Chebyshev's/Markov Inequality |
$P(g(X) \ge r) \le \frac{E[g(X)]}{r}$ for nonnegative $g$ and $r > 0$ |
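A numeric check of the bound with $g(x) = x^2$ and $X$ standard normal (so $E[g(X)] = 1$), assuming NumPy.

```python
# Sketch: Markov's inequality P(g(X) >= r) <= E[g(X)] / r, with g(x) = x^2.
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_normal(1_000_000)
for r in [1, 4, 9]:
    lhs = np.mean(x**2 >= r)
    print(f"r={r}:  P(X^2 >= r) ~ {lhs:.4f}  <=  E[X^2]/r = {1 / r:.4f}")
```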
Hölder's Inequality |
$|E[XY]| \le E|XY| \le (E|X|^p)^{1/p} (E|Y|^q)^{1/q}$, where $\frac{1}{p} + \frac{1}{q} = 1$ |
Jensen's Inequality |
$E[g(X)] \le g(E[X])$ for concave $g$ (the inequality reverses for convex $g$) |
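A quick numeric illustration with the concave $g(x) = \log x$ on Exponential(1) draws (so $E[X] = 1$ and $E[\log X] = -\gamma \approx -0.577$), assuming NumPy.

```python
# Sketch: Jensen's inequality E[log X] <= log(E[X]) for the concave log.
import numpy as np

rng = np.random.default_rng(6)
x = rng.exponential(size=1_000_000)
print("E[log X] ~", np.mean(np.log(x)))  # ~ -0.5772 (minus Euler's gamma)
print("log E[X] ~", np.log(np.mean(x)))  # ~ 0
```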