
General Overview

Likelihood ratio tests compare the likelihood of the data evaluated at constrained maximum likelihood estimates to the likelihood evaluated at unconstrained maximum likelihood estimates. That is, the likelihood ratio test statistic, $ \lambda$, is

$\displaystyle \lambda = \frac{\textrm{constrained maximum likelihood}}{\textrm{unconstrained maximum likelihood}},$ (3.10.1)

where the constraint placed on the MLEs in the numerator is the hypothesis that you want to test. In Section 3.9 we saw how to solve for unconstrained MLEs. In the following examples we will see how to solve for and work with constrained MLEs.

The closer the ratio in Equation 3.10.1 is to 1, the more consistent the data are with the hypothesis being tested; the closer the ratio is to 0, the less support the data give that hypothesis. Many common statistical tests can be derived as likelihood ratio tests. As usual, the best way to get a grasp of this concept is to see a few examples.
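
To make Equation 3.10.1 concrete before working through the examples, here is a minimal numerical sketch in Python. It is only an illustration under assumed conditions: the data are simulated normal observations, the constraint $ \mu = 3$ is chosen to match the example below, and scipy's general-purpose optimizer stands in for the closed-form MLEs derived in Section 3.9.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(loc=3.4, scale=2.0, size=50)  # illustrative simulated data

def neg_log_lik(params, x):
    """Negative normal log-likelihood; params = (mu, log of sigma^2)."""
    mu, log_var = params
    return -np.sum(norm.logpdf(x, loc=mu, scale=np.sqrt(np.exp(log_var))))

# Unconstrained maximum likelihood: both mu and sigma^2 are free.
unconstrained = minimize(neg_log_lik, x0=[0.0, 0.0], args=(x,))

# Constrained maximum likelihood: fix mu = 3 (the hypothesis being tested)
# and maximize over sigma^2 alone.
constrained = minimize(lambda lv: neg_log_lik([3.0, lv[0]], x), x0=[0.0])

# lambda = constrained maximum likelihood / unconstrained maximum likelihood,
# computed on the log scale and exponentiated at the end.
lam = np.exp(unconstrained.fun - constrained.fun)
print(lam)  # a value in (0, 1]; values near 1 favor the hypothesis
```

The log-variance parameterization is just a convenience that keeps the optimizer away from negative variances; it does not change the maximized likelihoods.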


Example 3.10.1.1

Imagine that we have a set of data, $ {\bf X}$, as described in Example 3.9.2.1, and we want to test whether $ \mu = 3$. That is, let the null hypothesis be $ \mathrm{H}_0: \mu = 3$. In Example 3.9.2.1 we derived the unconstrained maximum likelihood estimates for $ \mu $ and $ \sigma^2$ (see Equations 3.9.2 and 3.9.3). In this case, to derive the constrained MLEs we simply substitute the value 3 wherever $ \hat{\mu}$ appears, including in the derivation of $ \hat{\sigma}^2$. Thus,

$\displaystyle \hat{\mu}_c = 3 \quad \textrm{ and } \quad \hat{\sigma}^2_c = \frac{1}{n}\sum^n_{i=1}(x_i - 3)^2,
$

and our likelihood ratio test is:

$\displaystyle \begin{aligned}
\lambda &= \frac{\max_{\mathrm{H}_0} \mathcal{L}(\mu, \sigma^2 \,\vert\, {\bf X})}{\max \mathcal{L}(\mu, \sigma^2 \,\vert\, {\bf X})} = \frac{\mathrm{Pr}({\bf X} \,\vert\, 3, \hat{\sigma}^2_c)}{\mathrm{Pr}({\bf X} \,\vert\, \hat{\mu}, \hat{\sigma}^2)} \\
&= \frac{\prod^{n}_{i=1} \frac{1}{\sqrt{2\pi \hat{\sigma}^2_c}}\, e^{-\frac{1}{2 \hat{\sigma}^2_c}(x_i - 3)^2}}{\prod^{n}_{i=1} \frac{1}{\sqrt{2\pi \hat{\sigma}^2}}\, e^{-\frac{1}{2 \hat{\sigma}^2}(x_i - \hat{\mu})^2}} \\
&= \frac{(2\pi \hat{\sigma}^2_c)^{-n/2} \exp\left\{ -\frac{n}{2\sum(x_i - 3)^2}\sum(x_i - 3)^2\right\}}{(2\pi \hat{\sigma}^2)^{-n/2} \exp\left\{ -\frac{n}{2\sum(x_i - \hat{\mu})^2}\sum(x_i - \hat{\mu})^2\right\}} \\
&= \frac{(\hat{\sigma}^2_c)^{-n/2}\, e^{-n/2}}{(\hat{\sigma}^2)^{-n/2}\, e^{-n/2}} \\
&= \left(\frac{\hat{\sigma}^2}{\hat{\sigma}^2_c}\right)^{n/2}.
\end{aligned}$

$ \vert\boldsymbol{\vert}$
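
As a quick numerical check on this closed form, the following sketch (with illustrative simulated data) computes $ \lambda$ both from the derived expression $ \left(\hat{\sigma}^2/\hat{\sigma}^2_c\right)^{n/2}$ and directly as the ratio of the two maximized likelihoods; the two values agree.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = rng.normal(loc=3.5, scale=1.5, size=40)  # illustrative simulated data
n = len(x)

# Unconstrained MLEs (Example 3.9.2.1): sample mean and 1/n variance.
mu_hat = x.mean()
var_hat = np.mean((x - mu_hat) ** 2)

# Constrained MLEs under the hypothesis mu = 3.
mu_c = 3.0
var_c = np.mean((x - mu_c) ** 2)

# Closed-form likelihood ratio from the derivation above.
lam_closed = (var_hat / var_c) ** (n / 2)

# Direct ratio of the maximized likelihoods, for comparison.
def log_lik(mu, var):
    return np.sum(norm.logpdf(x, loc=mu, scale=np.sqrt(var)))

lam_direct = np.exp(log_lik(mu_c, var_c) - log_lik(mu_hat, var_hat))

print(lam_closed, lam_direct)  # the two values match
```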


Example 3.10.1.2

Imagine that we have the same setup as in Example 3.10.1.1, only this time the hypothesis that we want to test is $ \mathrm{H}_0: \mu \le 3$. In this case, when $ \hat{\mu} \le 3$, the unconstrained MLE already satisfies the constraint, so we let $ \hat{\mu}_c = \hat{\mu}$. However, when $ \hat{\mu} > 3$, the constrained maximum occurs on the boundary, so $ \hat{\mu}_c = 3$. Thus,

$\displaystyle \lambda =
\left\{ \begin{array}{ll}
\frac{\mathrm{Pr}({\bf X} \,\vert\, \hat{\mu}, \hat{\sigma}^2)}{\mathrm{Pr}({\bf X} \,\vert\, \hat{\mu}, \hat{\sigma}^2)} = 1, & \hat{\mu} \le 3 \\[1.5ex]
\left(\frac{\hat{\sigma}^2}{\hat{\sigma}^2_c}\right)^{n/2}, & \hat{\mu} > 3
\end{array} \right.
$

Notice that the LRT for $ \mathrm{H}_0: \mu = 3$ is the same as the LRT for $ \mathrm{H}_0: \mu \le 3$ when $ \hat{\mu} > 3$. $ \vert\boldsymbol{\vert}$
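
In code, the piecewise form above amounts to clipping the constrained estimate of $ \mu$ at 3. A short sketch, again with illustrative simulated data and a hypothetical function name:

```python
import numpy as np

def lrt_one_sided(x, mu0=3.0):
    """Likelihood ratio for the hypothesis mu <= mu0 with normal data.

    When the sample mean is already <= mu0 the constrained and
    unconstrained MLEs coincide and lambda = 1; otherwise the
    constrained mean sits on the boundary at mu0.
    """
    n = len(x)
    mu_hat = np.mean(x)
    mu_c = min(mu_hat, mu0)               # constrained MLE of mu
    var_hat = np.mean((x - mu_hat) ** 2)  # unconstrained MLE of sigma^2
    var_c = np.mean((x - mu_c) ** 2)      # constrained MLE of sigma^2
    return (var_hat / var_c) ** (n / 2)

rng = np.random.default_rng(2)
print(lrt_one_sided(rng.normal(loc=2.5, scale=1.0, size=30)))  # 1 whenever the sample mean falls below 3
print(lrt_one_sided(rng.normal(loc=4.0, scale=1.0, size=30)))  # well below 1
```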



Frank Starmer 2004-05-19