ANOVA model for completely randomized designs

Following Kuehl, we can write the model down as follows:

\[ y_{ij} = \mu_i + \varepsilon_{ij}, \qquad 1 \le i \le t, \; 1 \le j \le r. \]

The model can be rewritten as a General Linear Model:

\[
\begin{pmatrix}
y_{11}\\ \vdots\\ y_{1r}\\ y_{21}\\ \vdots\\ y_{2r}\\ \vdots\\ y_{t1}\\ \vdots\\ y_{tr}
\end{pmatrix}
=
\begin{pmatrix}
1 & 0 & \cdots & 0\\
\vdots & \vdots & & \vdots\\
1 & 0 & \cdots & 0\\
0 & 1 & \cdots & 0\\
\vdots & \vdots & & \vdots\\
0 & 1 & \cdots & 0\\
\vdots & \vdots & & \vdots\\
0 & 0 & \cdots & 1\\
\vdots & \vdots & & \vdots\\
0 & 0 & \cdots & 1
\end{pmatrix}
\begin{pmatrix} \mu_1\\ \mu_2\\ \vdots\\ \mu_t \end{pmatrix}
+
\begin{pmatrix}
\varepsilon_{11}\\ \vdots\\ \varepsilon_{1r}\\ \varepsilon_{21}\\ \vdots\\ \varepsilon_{2r}\\ \vdots\\ \varepsilon_{t1}\\ \vdots\\ \varepsilon_{tr}
\end{pmatrix}.
\]

- This has the generic form $y = A\mu + \varepsilon$, where $A$ is the design matrix.
- The $i$-th group of rows of $A$ consists of the same indicator row repeated exactly $r$ times.

A transformation to the intercept format

R does not use the above model literally but first rewrites it in the form:

\begin{align*}
y_{1j} &= \mu + \varepsilon_{1j} && \text{for } 1 \le j \le r,\\
y_{ij} &= \mu + \tau_i + \varepsilon_{ij} && \text{for } i \ge 2 \text{ and } 1 \le j \le r.
\end{align*}

This model is valid when the default contrasts are used. Thus, the correspondence with the first model is established by defining:

\[ \tau_i = \mu_i - \mu_1 \qquad \text{for } i = 2, 3, \ldots, t. \]

Hence, $\mu = \mu_1$ plays the role of the intercept and is recorded as such by various R functions.

The final form of the linear model

The linear model of a completely randomized design is as follows:

\[
\begin{pmatrix}
y_{11}\\ \vdots\\ y_{1r}\\ y_{21}\\ \vdots\\ y_{2r}\\ \vdots\\ y_{t1}\\ \vdots\\ y_{tr}
\end{pmatrix}
=
\begin{pmatrix}
1 & 0 & \cdots & 0\\
\vdots & \vdots & & \vdots\\
1 & 0 & \cdots & 0\\
1 & 1 & \cdots & 0\\
\vdots & \vdots & & \vdots\\
1 & 1 & \cdots & 0\\
\vdots & \vdots & & \vdots\\
1 & 0 & \cdots & 1\\
\vdots & \vdots & & \vdots\\
1 & 0 & \cdots & 1
\end{pmatrix}
\begin{pmatrix} \mu\\ \tau_2\\ \vdots\\ \tau_t \end{pmatrix}
+
\begin{pmatrix}
\varepsilon_{11}\\ \vdots\\ \varepsilon_{1r}\\ \varepsilon_{21}\\ \vdots\\ \varepsilon_{tr}
\end{pmatrix}.
\]

That is, the first column of the model matrix consists of ones, and the remaining columns (the indicator columns for groups $2, \ldots, t$) are unchanged. Alternatively, we may interpret this form as R using the contrasts $\tau_i = \mu_i - \mu_1$ and thus supporting the pairwise comparison of each successive mean to the first mean.

A review of the QR method for least squares

We start with the linear system:

\[ y = A\beta + \varepsilon. \]

The problem of minimizing the norm $\lVert y - A\beta \rVert$ is solved by formally multiplying the above by $A^\top$:

\[ A^\top y = A^\top A \beta. \]

We solve for $\beta$, so the solution is:

\[ \hat\beta = (A^\top A)^{-1} A^\top y. \]

This is indeed valid when the matrix $A^\top A$ is invertible, which is quite common.

The QR-algorithm, which is a practical implementation of the Gram-Schmidt orthogonalization process, is applied to the matrix $A$ to facilitate the solution of the system.
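Before turning to the QR method, we can check numerically that the normal equations reproduce what R's own fitting machinery computes. The following is a minimal sketch with made-up data (the variable names are illustrative, not from the original text):

```r
## Hypothetical check: solving A^T A beta = A^T y directly gives the same
## coefficients as lm() for a small completely randomized design.
set.seed(1)
t <- 4; r <- 3
Treatment <- factor(rep(paste0("T", 1:t), each = r))
y <- rnorm(t * r, mean = rep(1:t, each = r))

A <- model.matrix(~ Treatment)                 # intercept + treatment columns
beta.normal <- solve(t(A) %*% A, t(A) %*% y)   # solve the normal equations
beta.lm <- coef(lm(y ~ Treatment))             # R's reference answer

stopifnot(max(abs(beta.normal - beta.lm)) < 1e-10)
```

Note that `solve(t(A) %*% A, ...)` is numerically inferior to the QR approach described next; it is shown only to confirm the algebra.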
We recall that the QR-algorithm produces the following factorization:

\[ A = Q R, \]

where $Q$ is a matrix with orthonormal columns and $R$ is a non-singular (and thus square) upper-triangular matrix. Thus, $Q^\top Q = I$. Hence,

\[ A^\top A = (Q R)^\top (Q R) = R^\top Q^\top Q R = R^\top I R = R^\top R, \]

and $A^\top y = R^\top Q^\top y$. Thus, the system $A^\top y = A^\top A \beta$ can be rewritten as

\[ R^\top Q^\top y = R^\top R \beta. \]

Multiplying both sides by $(R^\top)^{-1}$ we obtain the simplified equation:

\[ R \beta = Q^\top y. \]

Hence, the solution may be expressed as:

\[ \hat\beta = R^{-1} Q^\top y. \]

In practice, because $R$ is upper triangular, $R^{-1}$ would never be computed explicitly. The system would be solved by back-substitution.

The goal of what follows

We will do some "reverse engineering" of the R function aov. We will not call aov directly, but simulate its functionality by calling small bits of code that can be found in aov and in lm, which aov calls. Both functions perform least squares fitting of a model to data. The implementations of these functions can be examined by typing their names at the R prompt. However, any production code addresses many concerns that have nothing to do with the mathematical idea behind it, such as error checking and argument parsing. This makes the code long and hard to understand unless you are a regular R developer. Therefore we will write a mock aov in the form of a minimalistic script that produces equivalent results for a narrow class of problems. There are several benefits:

- We understand the algorithms better.
- We gain knowledge of the R primitives that go into implementing ANOVA with R.
- We can reuse these primitives in other situations. For instance, we may implement non-standard versions of ANOVA to solve specific problems for which the general framework does not work well.
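The back-substitution step described above can be sketched in R before we apply it to real model matrices. This is an illustration with made-up data; `backsolve` performs the triangular solve $R\beta = Q^\top y$ without ever forming $R^{-1}$:

```r
## Minimal sketch: least squares via QR, solving R beta = Q^T y
## by back-substitution rather than matrix inversion.
set.seed(2)
A <- cbind(1, matrix(rnorm(30), nrow = 10))  # 10 x 4 design matrix
y <- rnorm(10)

dec <- qr(A)
Q <- qr.Q(dec)
R <- qr.R(dec)

## R is upper triangular, so backsolve() does the triangular solve
beta <- backsolve(R, crossprod(Q, y))

## Agrees with R's built-in QR-based coefficient extraction
stopifnot(max(abs(beta - qr.coef(dec, y))) < 1e-10)
```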
The data

We define a very simple data set:

> N <- 12; t <- 4; repl <- N / t
> Response <- 1:N
> Treatment <- as.factor(c(rep("T1", repl), rep("T2", repl),
                           rep("T3", repl), rep("T4", repl)))
> fake.data <- data.frame(Treatment, Response)
> fake.data
   Treatment Response
1         T1        1
2         T1        2
3         T1        3
4         T2        4
5         T2        5
6         T2        6
7         T3        7
8         T3        8
9         T3        9
10        T4       10
11        T4       11
12        T4       12

Analysis of variance steps

Define contrasts

Contrasts between means can be defined by assigning an attribute to a factor:

> contrasts(Treatment) <- contr.treatment(t)

After the contrasts are assigned to a factor, they will be used in any calculation that involves this factor, unless they are overridden, for instance, by giving an explicit contrasts argument to the function being called. There is an R function for creating design matrices: model.matrix.
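As a side note, the contrast matrix assigned above can be inspected directly. The sketch below shows what `contr.treatment(4)` produces: level 1 serves as the baseline, and the remaining levels get indicator columns.

```r
## contr.treatment(4) is a 4 x 3 matrix: a row of zeros for the
## baseline level, then an identity block for levels 2..4.
cm <- contr.treatment(4)
print(cm)

stopifnot(all(dim(cm) == c(4, 3)))
stopifnot(all(cm[1, ] == 0))          # baseline level: all zeros
stopifnot(all(cm[2:4, ] == diag(3)))  # indicators for levels 2..4
```

This is exactly the column pattern that will appear in the full design matrix below, prefixed by the intercept column of ones.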
Define the full model design matrix

> full.dm <- model.matrix(Response ~ Treatment, fake.data)
> full.dm
   (Intercept) TreatmentT2 TreatmentT3 TreatmentT4
1            1           0           0           0
2            1           0           0           0
3            1           0           0           0
4            1           1           0           0
5            1           1           0           0
6            1           1           0           0
7            1           0           1           0
8            1           0           1           0
9            1           0           1           0
10           1           0           0           1
11           1           0           0           1
12           1           0           0           1
attr(,"assign")
[1] 0 1 1 1
attr(,"contrasts")
attr(,"contrasts")$Treatment
[1] "contr.treatment"

Define the reduced model design matrix

> reduced.dm <- model.matrix(Response ~ Treatment - Treatment, fake.data)
> reduced.dm
   (Intercept)
1            1
2            1
3            1
4            1
5            1
6            1
7            1
8            1
9            1
10           1
11           1
12           1
attr(,"assign")
[1] 0
attr(,"contrasts")
attr(,"contrasts")$Treatment
[1] "contr.treatment"

We used a somewhat artificial construction: the right-hand side of the
model is Treatment - Treatment, which means that the factor Treatment is included but then taken away. So, the only component left is the intercept.

Perform a least squares fit using QR-decomposition

We use the standard approach of solving least squares problems using the QR-decomposition of matrices. Since we have to perform it twice (once for the full model and once for the reduced model) we define a function that encapsulates our calculation.

> design.fit <- function(design.matrix, y) {
      qr <- qr(design.matrix)  # find the QR-decomposition
      Q  <- qr.Q(qr)           # extract Q
      R  <- qr.R(qr)           # extract R
      Ri <- solve(R)           # find the inverse of R
      ## We create an environment to put variables in
      env <- new.env()
      env$coef  <- Ri %*% crossprod(Q, y)  # coefficients
      env$err   <- qr.resid(qr, y)         # residuals
      env$resid <- sum(env$err^2)          # residual sum of squares
      return(as.list(env))                 # convert environment to a list
  }

Solve least squares problems

> full <- design.fit(full.dm, Response)
> reduced <- design.fit(reduced.dm, Response)
> full
$resid
[1] 8

$err
 [1] -1.000000e+00  8.049117e-16  1.000000e+00 -1.000000e+00 -5.551115e-17
 [6]  1.000000e+00 -1.000000e+00  2.220446e-16  1.000000e+00 -1.000000e+00
[11]  1.665335e-16  1.000000e+00

$coef
            [,1]
(Intercept)    2
TreatmentT2    3
TreatmentT3    6
TreatmentT4    9

> reduced
$resid
[1] 143

$err
 [1] -5.5 -4.5 -3.5 -2.5 -1.5 -0.5  0.5  1.5  2.5  3.5  4.5  5.5

$coef
            [,1]
(Intercept)  6.5

Find various sums of squares

> SSTotal <- sum((Response - mean(Response))^2)
> SSMean  <- SSTotal / (N - 1)
> SSE     <- full$resid
> MSE     <- SSE / (N - t)
> SST     <- reduced$resid - full$resid
> MST     <- SST / (t - 1)
> c(SSTotal, SSMean, SSE, MSE, SST, MST)
[1] 143  13   8   1 135  45

Find the F-statistic

> F <-
MST / MSE
> F
[1] 45

Comparing the results with aov

In the code below, we compute the aov object and print its summary. Subsequently, we calculate the standard error for the difference of the means of treatment groups T1 and T2.

> fake.aov <- aov(Response ~ Treatment, fake.data)
> summary(fake.aov)
            Df Sum Sq Mean Sq F value    Pr(>F)
Treatment    3    135      45      45 2.356e-05 ***
Residuals    8      8       1
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

> se.contrast(fake.aov, list(Treatment == "T1", Treatment == "T2"))
[1] 0.8164966

References

Kuehl
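The standard error reported by se.contrast above can be checked by hand. For a balanced design, the standard error of the difference of two group means is $\sqrt{MSE\,(1/r + 1/r)}$; with $MSE = 1$ and $r = 3$ replicates per group, as in the example, this gives $\sqrt{2/3} \approx 0.8164966$. A one-line sketch of the arithmetic:

```r
## Recompute the T1 - T2 standard error by hand:
## SE = sqrt(MSE * (1/r + 1/r)) with MSE = 1 and r = 3 replicates.
MSE <- 1
r <- 3
se.by.hand <- sqrt(MSE * (1 / r + 1 / r))

stopifnot(abs(se.by.hand - 0.8164966) < 1e-7)
```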