
1. (10 pts) Suppose $G$ is positive definite. Show that (a) $G$ is invertible and $G^{-1}$ is positive definite; (b) $G$ has a positive definite square root $G^{1/2}$.

2. (10 pts) Suppose $G$ is $n \times n$ non-negative definite, and $\alpha$ is $n \times 1$. Let $V$ be an $n \times 1$ vector of independent $N(0, 1)$ variables. Construct $U$, an $n \times 1$ vector of normal random variables with mean $\alpha$ and covariance matrix $G$.

3. (10 pts) Consider simple linear regression, where there is one response variable $y$, one explanatory variable $x$, and $n$ subjects with values $y_1, \dots, y_n$ and $x_1, \dots, x_n$. The model is $y_i = \beta_0 + \beta_1 x_i + e_i$, where $e_1, \dots, e_n$ are independent $N(0, \sigma^2)$. Show that $\hat\beta_0$ and $\hat\beta_1$ are independent if $x_1 + \cdots + x_n = 0$. Here, of course, $\hat\beta_0$ and $\hat\beta_1$ denote the least squares estimates of $\beta_0$ and $\beta_1$.

4. (10 pts) In the linear model, show that the square of the (sample) correlation between the response values $(y_1, \dots, y_n)$ and the fitted values $(\hat y_1, \dots, \hat y_n)$ equals the coefficient of determination, $R^2$.

5. (10 pts) Fox problem 6.9. Suppose that the "true" model generating a set of data is $Y = \alpha + \beta_1 X_1 + \varepsilon$, where the error $\varepsilon$ conforms to the usual linear-regression assumptions. A researcher fits the model $Y = \alpha + \beta_1 X_1 + \beta_2 X_2 + \varepsilon$, which includes the irrelevant explanatory variable $X_2$; that is, the true value of $\beta_2$ is 0. Had the researcher fit the (correct) simple-regression model, the variance of $B_1$ would have been $V(B_1) = \sigma_\varepsilon^2 / \sum (X_{i1} - \bar X_1)^2$.

   (a) Is the model $Y = \alpha + \beta_1 X_1 + \beta_2 X_2 + \varepsilon$ wrong? Is $B_1$ for this model a biased estimator of $\beta_1$?

   (b) The variance of $B_1$ in the multiple-regression model is
   $$V(B_1) = \frac{1}{1 - r_{12}^2} \times \frac{\sigma_\varepsilon^2}{\sum (X_{i1} - \bar X_1)^2} \tag{1}$$
   What, then, is the cost of including the irrelevant explanatory variable $X_2$?
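As a numerical illustration of the claims in Problem 1 (a check, not the requested proof), the spectral decomposition $G = Q\,\mathrm{diag}(\lambda)\,Q^T$ with all $\lambda_i > 0$ shows that $G^{-1}$ has eigenvalues $1/\lambda_i > 0$ and that $Q\,\mathrm{diag}(\sqrt\lambda)\,Q^T$ squares to $G$. The matrix below is a hypothetical example:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical positive definite G: A A^T + I has eigenvalues >= 1 > 0.
A = rng.standard_normal((4, 4))
G = A @ A.T + np.eye(4)

lam, Q = np.linalg.eigh(G)
assert np.all(lam > 0)                        # positive definite

# (a) G is invertible, and G^{-1} has eigenvalues 1/lam, all positive.
G_inv = np.linalg.inv(G)
assert np.all(np.linalg.eigvalsh(G_inv) > 0)  # G^{-1} positive definite

# (b) G^{1/2} = Q diag(sqrt(lam)) Q^T is positive definite and squares to G.
G_half = Q @ np.diag(np.sqrt(lam)) @ Q.T
assert np.allclose(G_half @ G_half, G)
```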
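One standard construction for Problem 2 is $U = \alpha + G^{1/2} V$: an affine transform of a normal vector is normal, with mean $\alpha$ and covariance $G^{1/2} I (G^{1/2})^T = G$. A minimal NumPy sketch, assuming a hypothetical $G$, $\alpha$, and sample size:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3x3 non-negative definite G and mean vector alpha.
A = rng.standard_normal((3, 3))
G = A @ A.T                      # A A^T is non-negative definite
alpha = np.array([1.0, -2.0, 0.5])

# Symmetric square root G^{1/2} from the spectral decomposition of G.
lam, Q = np.linalg.eigh(G)
G_half = Q @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ Q.T

# U = alpha + G^{1/2} V has mean alpha and covariance G.
V = rng.standard_normal((3, 10_000))
U = alpha[:, None] + G_half @ V

# Sample moments should be close to alpha and G.
print(U.mean(axis=1))
print(np.cov(U))
```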
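Problem 3 can be sanity-checked numerically: the covariance matrix of the least squares estimates is $\sigma^2 (X^T X)^{-1}$, whose off-diagonal entry is proportional to $\sum x_i$ and so vanishes for a centered design. A quick check with a hypothetical centered $x$:

```python
import numpy as np

# For y_i = b0 + b1 x_i + e_i, Cov of the estimates is sigma^2 (X'X)^{-1};
# the off-diagonal entry of X'X is sum(x_i), so centering x decorrelates them.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # hypothetical design, sum(x) = 0
X = np.column_stack([np.ones_like(x), x])
XtX_inv = np.linalg.inv(X.T @ X)
print(XtX_inv)   # off-diagonal entries vanish when sum(x) = 0
```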
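The identity in Problem 4 is also easy to verify numerically. The sketch below fits a hypothetical least squares model and compares the squared sample correlation of $(y, \hat y)$ with $R^2 = 1 - \mathrm{RSS}/\mathrm{TSS}$ (again a check, not the requested proof):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data and a linear model fit by least squares.
n = 50
X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.standard_normal(n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta_hat

# Squared correlation between responses and fitted values vs. R^2.
r2_from_corr = np.corrcoef(y, y_hat)[0, 1] ** 2
R2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(r2_from_corr, R2)   # the two agree
```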
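For Problem 5(b), equation (1) says the irrelevant regressor inflates $V(B_1)$ by the factor $1/(1 - r_{12}^2)$, the variance inflation factor. The sketch below, with hypothetical correlated regressors, confirms that this formula matches $\sigma_\varepsilon^2 \,[(X^T X)^{-1}]_{11}$ from the multiple regression:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical correlated regressors; r12 is their sample correlation.
n = 200
x1 = rng.standard_normal(n)
x2 = 0.8 * x1 + 0.6 * rng.standard_normal(n)
r12 = np.corrcoef(x1, x2)[0, 1]

sigma2 = 1.0  # assumed error variance sigma_eps^2

# Simple-regression variance and the inflated multiple-regression variance (eq. 1).
v_simple = sigma2 / np.sum((x1 - x1.mean()) ** 2)
v_multiple = v_simple / (1.0 - r12 ** 2)

# The same variance computed directly as sigma^2 (X'X)^{-1} for X = [1, x1, x2].
X = np.column_stack([np.ones(n), x1, x2])
v_from_matrix = sigma2 * np.linalg.inv(X.T @ X)[1, 1]
assert np.isclose(v_multiple, v_from_matrix)

print(v_multiple / v_simple)  # the cost: variance inflated by 1 / (1 - r12^2) > 1
```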