Hypothesis Testing

Dr. D’Agostino McGowan

1 / 8

Assumptions

  • If you just want to estimate the parameters β, you don't have to make any distributional assumptions for ϵ.
  • This may sound different from what you learned in previous classes! The reason we have distributional assumptions is so that we can create confidence intervals and do hypothesis testing.
2 / 8

Assumptions

  • Since we are doing least squares we assume the errors are independent and identically distributed with a mean of 0 and a variance of σ².
  • For hypothesis testing + confidence intervals, let's assume the distribution is normal:

ϵ ∼ N(0, σ²I)

  • Since y = Xβ + ϵ, we know y ∼ N(Xβ, σ²I)
  • From this it follows that

β̂ = (XᵀX)⁻¹Xᵀy ∼ N(β, (XᵀX)⁻¹σ²)

3 / 8
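As a quick sanity check on the sampling distribution above, here is a minimal simulation (all parameter values made up for illustration) of simple linear regression, where the slice of (XᵀX)⁻¹σ² for the slope reduces to σ² / Σ(xᵢ − x̄)²:

```python
import random

# Hedged sketch: simulate y = beta0 + beta1*x + eps with eps ~ N(0, sigma^2)
# and check that the OLS slope is unbiased with variance sigma^2 / Sxx,
# as implied by beta-hat ~ N(beta, (X^T X)^{-1} sigma^2).
random.seed(42)

beta0, beta1, sigma = 1.0, 2.0, 0.5        # true parameters (illustrative)
x = [i / 10 for i in range(30)]            # fixed design
xbar = sum(x) / len(x)
sxx = sum((xi - xbar) ** 2 for xi in x)

def ols_slope(y):
    """Least-squares slope for simple regression of y on the fixed x."""
    ybar = sum(y) / len(y)
    return sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx

slopes = []
for _ in range(5000):
    y = [beta0 + beta1 * xi + random.gauss(0, sigma) for xi in x]
    slopes.append(ols_slope(y))

mean_slope = sum(slopes) / len(slopes)
var_slope = sum((s - mean_slope) ** 2 for s in slopes) / (len(slopes) - 1)

print(mean_slope)          # close to beta1
print(var_slope)           # close to sigma**2 / sxx, the theoretical variance
```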

Hypothesis testing

  • Often we want to compare a small model to a larger one
  • Parsimony is great! We want to fit the smallest model that can best explain our outcome. But how do we determine if the small model is good enough?
  • We can compare the RSS for the two models!
  • If RSS_small − RSS_larger is small, then the fit of the small model is almost as good as the larger one!
4 / 8

Hypothesis testing

The test statistic of interest is:

(RSS_small − RSS_larger) / RSS_larger

How do we decide what value is "good enough"?

(RSS_small − RSS_larger) / RSS_larger > some constant

What constant should we use?

  • Let's create an F-statistic so we can compare it to the F-distribution!
5 / 8

The F-Statistic

F = [(RSS_small − RSS_larger) / (df_small − df_larger)] / (RSS_larger / df_larger)

  • If the small model has 3 variables and the larger model has 5, what are the degrees of freedom?
  • small: n - 4
  • larger: n - 6
6 / 8

The F-Statistic

F = [(RSS_small − RSS_larger) / (df_small − df_larger)] / (RSS_larger / df_larger) ∼ F_(df_small − df_larger, df_larger)

We will reject the null hypothesis (that small = larger) if F > F^(α)_(df_small − df_larger, df_larger)

7 / 8
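A minimal sketch of this nested-model F-test, using simulated data and a hand-rolled least-squares fit (all variable names and parameter values are made up for illustration):

```python
import random

# Hedged sketch: F-test comparing y ~ x1 (small) vs. y ~ x1 + x2 (larger).
random.seed(1)
n = 50
x1 = [random.uniform(0, 1) for _ in range(n)]
x2 = [random.uniform(0, 1) for _ in range(n)]
# The true model uses both predictors, so we expect to reject the small model.
y = [2 + 1.5 * a + 2.0 * b + random.gauss(0, 0.5) for a, b in zip(x1, x2)]

def rss(X, y):
    """Residual sum of squares from least squares via the normal equations."""
    p = len(X[0])
    A = [[sum(row[j] * row[k] for row in X) for k in range(p)] for j in range(p)]
    b = [sum(row[j] * yi for row, yi in zip(X, y)) for j in range(p)]
    # Solve A beta = b by Gaussian elimination with partial pivoting.
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for k in range(col, p):
                A[r][k] -= f * A[col][k]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][k] * beta[k] for k in range(r + 1, p))) / A[r][r]
    return sum((yi - sum(bj * xj for bj, xj in zip(beta, row))) ** 2
               for row, yi in zip(X, y))

X_small = [[1.0, a] for a in x1]                  # intercept + x1:  df_small  = n - 2
X_large = [[1.0, a, b] for a, b in zip(x1, x2)]   # intercept + x1 + x2: df_larger = n - 3

rss_small, rss_large = rss(X_small, y), rss(X_large, y)
df_small, df_large = n - 2, n - 3

F = ((rss_small - rss_large) / (df_small - df_large)) / (rss_large / df_large)
# Under H0, F ~ F_{1, n-3}; the 0.95 quantile is roughly 4 here, so a large F
# (x2 really does matter in the simulated data) rejects the small model.
print(F)
```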

The F-Statistic

What if you want to compare your full model to an intercept only model?

H0: β1 = β2 = ⋯ = βp = 0

What is RSS for the small model?

  • TSS

F = [(TSS − RSS) / p] / [RSS / (n − (p + 1))]

8 / 8
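Putting it together, here is a small sketch of the overall F-test for simple regression (p = 1), where the intercept-only model's RSS is exactly TSS; the data are simulated for illustration:

```python
import random

# Hedged sketch: overall F-test of y ~ x against the intercept-only model.
random.seed(7)
n, p = 40, 1                      # one predictor, so the full model has n - (p + 1) = n - 2 df
x = [random.uniform(0, 10) for _ in range(n)]
y = [3 + 0.8 * xi + random.gauss(0, 1) for xi in x]

xbar, ybar = sum(x) / n, sum(y) / n
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
     sum((xi - xbar) ** 2 for xi in x)
b0 = ybar - b1 * xbar

tss = sum((yi - ybar) ** 2 for yi in y)    # RSS of the intercept-only model
rss = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))

F = ((tss - rss) / p) / (rss / (n - (p + 1)))
print(F)   # large here, since the simulated slope is truly nonzero
```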
