Created by Max Schnidman
about 5 years ago
Question | Answer |
Loss functions |
Measure of distance between observed values and estimates
L(y−θ(x)) |
Squared loss function |
Minimizes the squared distance between observed and estimated values.
The optimal predictor is the conditional expectation function E[Y|X]:
E[(y−θ)^2|X] = E[((y−μ_x)−(θ−μ_x))^2|X]
= V(y|X) + (θ−μ_x)^2 |
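The decomposition above can be checked numerically; a minimal sketch, where the simulated distribution and the candidate value θ are arbitrary choices:

```python
import numpy as np

# Check the identity E[(y - theta)^2] = V(y) + (theta - mu)^2
# on simulated draws of y; theta is an arbitrary candidate estimate.
rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.5, size=100_000)

theta = 3.0
lhs = np.mean((y - theta) ** 2)              # expected squared loss
rhs = np.var(y) + (theta - np.mean(y)) ** 2  # variance + squared bias
print(lhs, rhs)  # the two sides agree up to floating-point error
```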
Properties of the CEF |
θ(x) = argmin_c E[(y−c)^2|X]
ϵ = Y − E[Y|X] ⟹ E[ϵ|X] = 0
⟹ E[X′ϵ] = 0
⟹ E[h(X)ϵ] = 0 for any function h(X)
V(ϵ) = E[V(Y|X)]
C(X, ϵ) = 0 |
Best Linear Predictor (BLP) |
Xβ
β = argmin E[(Y−Xβ)^2]
First-order condition: E[X′(Y−Xβ)] = 0
β = E[X′X]^{-1} E[X′Y]
V(U) = E[V(Y|X)] + E[ω^2]
ω is the difference between the CEF and the BLP
|
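The BLP coefficient can be computed directly from the sample analogue of E[X′X]^{-1}E[X′Y]; a sketch on made-up data (all names and the true coefficients are illustrative):

```python
import numpy as np

# Estimate beta = E[X'X]^{-1} E[X'Y] by sample moments and compare
# with np.linalg.lstsq on simulated data with true beta = (1, 2).
rng = np.random.default_rng(1)
n = 10_000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])     # include an intercept
Y = 1.0 + 2.0 * x + rng.normal(size=n)

beta_moments = np.linalg.solve(X.T @ X, X.T @ Y)  # moment analogue
beta_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.allclose(beta_moments, beta_lstsq))      # the two agree
```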
Properties of i.i.d. sampling |
E[Y_i] = μ
V(Y_i) = σ^2
C(Y_i, Y_j) = 0 for i ≠ j
The sample average converges to the population average.
V(Ȳ) = σ^2/n |
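The variance formula V(Ȳ) = σ^2/n can be verified by simulation; a sketch where the sample size, replication count, and σ are arbitrary:

```python
import numpy as np

# Simulate many i.i.d. samples of size n and check that the variance
# of the sample mean is close to sigma^2 / n.
rng = np.random.default_rng(2)
n, reps, sigma = 50, 100_000, 3.0
samples = rng.normal(loc=0.0, scale=sigma, size=(reps, n))

ybar = samples.mean(axis=1)       # one sample mean per replication
var_ybar = ybar.var()
print(var_ybar)                   # close to sigma**2 / n = 0.18
```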
Mean Squared Error | Sum of Squared Bias and Variance |
Asymptotic properties of samples |
plim Ȳ = μ
plim V(Ȳ) = 0
Central Limit Theorem: √n(Ȳ − μ) →d N(0, σ^2)
|
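The CLT can also be illustrated by simulation; a sketch using an exponential population (so the underlying distribution is clearly non-normal; μ = σ^2 = 1 here by construction):

```python
import numpy as np

# Draw many samples from Exp(1) (mean 1, variance 1) and check that
# z = sqrt(n) * (Ybar - mu) has mean ~0 and variance ~sigma^2 = 1.
rng = np.random.default_rng(6)
n, reps = 100, 100_000
draws = rng.exponential(scale=1.0, size=(reps, n))

z = np.sqrt(n) * (draws.mean(axis=1) - 1.0)
print(z.mean(), z.var())   # approximately 0 and 1
```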
Uniform Kernel Estimate |
θ̂(x_0) = [(1/n) ∑ y_i 1(|x_i − x_0| ≤ δ_n)] / [(1/n) ∑ 1(|x_i − x_0| ≤ δ_n)]
Limiting Distribution:
N(α, β) |
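The uniform-kernel estimate is just a local average of the y_i whose x_i fall within the bandwidth δ_n of x_0; a sketch (function name, data, and bandwidth are all illustrative):

```python
import numpy as np

def uniform_kernel(x, y, x0, delta):
    """Local average of y_i over the window |x_i - x0| <= delta."""
    inside = np.abs(x - x0) <= delta   # indicator 1(|x_i - x0| <= delta_n)
    return y[inside].mean()            # ratio of the two sample averages

# Simulated data whose CEF is E[Y|X=x] = x^2.
rng = np.random.default_rng(3)
x = rng.uniform(-2, 2, size=100_000)
y = x ** 2 + rng.normal(scale=0.1, size=x.size)

est = uniform_kernel(x, y, x0=1.0, delta=0.05)
print(est)   # close to the CEF value 1.0 at x0 = 1
```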
Matrix Algebra of Regressions |
b_n = (X′X)^{-1}(X′Y) = Q^{-1}X′Y = AY
Ŷ = X(X′X)^{-1}X′Y = NY
e = Y − Ŷ = Y − NY = (I − N)Y = MY |
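The matrices N and M above are idempotent projections, and the residuals e = MY are orthogonal to X; a sketch checking these properties on random data (dimensions are arbitrary):

```python
import numpy as np

# Build the projection matrix N = X (X'X)^{-1} X' and the
# residual-maker M = I - N, then verify their key properties.
rng = np.random.default_rng(4)
X = rng.normal(size=(30, 3))
Y = rng.normal(size=30)

N = X @ np.linalg.solve(X.T @ X, X.T)   # projection onto col(X)
M = np.eye(30) - N                      # residual-maker matrix
e = M @ Y                               # regression residuals

# N and M are idempotent; residuals are orthogonal to X.
print(np.allclose(N @ N, N), np.allclose(M @ M, M), np.allclose(X.T @ e, 0))
```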
Limiting distribution of beta |
√n(b_n − β) →d N(0, E[X′X]^{-1} E[X′X U^2] E[X′X]^{-1})
Sandwich form, robust against heteroskedasticity.
If the model is homoskedastic, this reduces to σ^2 E[X′X]^{-1} |
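The sandwich form has a direct sample analogue, the heteroskedasticity-robust (HC0) variance estimator; a sketch on simulated heteroskedastic data (the error scale and coefficients are made up for illustration):

```python
import numpy as np

# OLS with an HC0 sandwich variance estimate:
# (X'X)^{-1} [X' diag(e^2) X] (X'X)^{-1}.
rng = np.random.default_rng(5)
n = 5_000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
u = rng.normal(scale=np.abs(x) + 0.5)   # heteroskedastic errors
Y = X @ np.array([1.0, 2.0]) + u

b = np.linalg.solve(X.T @ X, X.T @ Y)   # OLS coefficients
resid = Y - X @ b
bread = np.linalg.inv(X.T @ X)
meat = (X * resid[:, None] ** 2).T @ X  # X' diag(e^2) X
V_robust = bread @ meat @ bread         # sandwich form

print(np.sqrt(np.diag(V_robust)))       # robust standard errors
```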
CRM assumptions |
1. E[Y|X] = Xβ
2. V(Y|X) = σ^2 I
3. rank(X) = k
4. X is non-stochastic.
|