

Question

An investigation of a die-casting process resulted in the accompanying data on x1 = furnace temperature, x2 = die close time, and y = temperature difference on the die surface.


(e) What is the coefficient of determination R2? What is the adjusted R2?

(f) Perform a model utility test on your model. What is your conclusion?

(g) Assess the linearity assumption. Does your model meet the assumptions for linearity between the regressors and the response?

(h) To assess the normality of the error terms, calculate the standardized residuals and use a q-q plot to determine whether they can be assumed to be normally distributed.

x1 (furnace temperature)   x2 (die close time)   y (temperature difference)
1250                       6                      80
1300                       7                      95
1350                       6                     101
1250                       7                      85
1300                       6                      92
1250                       8                      87
1300                       8                      96
1350                       7                     106
1350                       8                     108

Explanation / Answer


I have solved this problem in Excel; the full regression summary output appears at the end of this answer.

(e) The fitted regression equation is

temperature difference (y) = -199.56 + 0.21*furnace temperature (x1) + 3*die close time (x2)

R^2 = 0.990692308 and adjusted R^2 = 0.987589744.
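
As a cross-check outside Excel, here is a minimal Python sketch (assuming numpy and statsmodels are available; neither is part of the original Excel solution) that fits the same model to the data above and reports the coefficients, R^2, and adjusted R^2.

```python
import numpy as np
import statsmodels.api as sm

# Data from the question: furnace temperature, die close time, temperature difference
x1 = np.array([1250, 1300, 1350, 1250, 1300, 1250, 1300, 1350, 1350])
x2 = np.array([6, 7, 6, 7, 6, 8, 8, 7, 8])
y  = np.array([80, 95, 101, 85, 92, 87, 96, 106, 108])

X = sm.add_constant(np.column_stack([x1, x2]))  # design matrix with an intercept column
fit = sm.OLS(y, X).fit()

print(fit.params)        # approximately [-199.556, 0.21, 3.0]
print(fit.rsquared)      # approximately 0.9907
print(fit.rsquared_adj)  # approximately 0.9876
```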

(f) The model utility test is the overall F test of H0: β1 = β2 = 0 against Ha: at least one βj ≠ 0. From the ANOVA table, SSR (explained variability) = 715.5 is far larger than SSE (unexplained variability) = 6.7222, which gives F = MSR/MSE = 357.75/1.1204 ≈ 319.31 with Significance F (p-value) ≈ 8.06E-07. Because this p-value is far below 0.05, we reject H0 and conclude that the model is useful. This agrees with R^2 and adjusted R^2 both being about 99%: roughly 99% of the variability in y is explained by the model, so the model utility is high.
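
The same F statistic and p-value can be recomputed directly from the ANOVA sums of squares; a minimal sketch assuming scipy is available (not part of the original Excel solution):

```python
from scipy import stats

# Sums of squares and degrees of freedom from the Excel ANOVA table
ssr, sse = 715.5, 6.722222222   # regression (explained) and residual (unexplained)
df_reg, df_res = 2, 6           # k = 2 predictors, n - k - 1 = 6

msr, mse = ssr / df_reg, sse / df_res
f_stat = msr / mse                             # approximately 319.31
p_value = stats.f.sf(f_stat, df_reg, df_res)   # approximately 8.06e-07

print(f_stat, p_value)
```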

(g) Regression assumptions:

1) Linearity: satisfied. The residuals show no systematic pattern (curvature) when plotted against the fitted values; see the sketch after this list.

2) Normality of the errors: satisfied, as assessed with the standardized residuals and the q-q plot in part (h).

3) Little or no multicollinearity: satisfied. The design is a full 3 × 3 factorial in x1 and x2, so the two regressors are essentially uncorrelated.

4) Homoscedasticity: satisfied. The residuals have roughly constant spread across the fitted values, and all of them lie between about -1.44 and 1.06.
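
Below is a minimal residuals-versus-fitted sketch (assuming matplotlib is available; not part of the original Excel solution) for judging linearity and homoscedasticity, using the predicted values and residuals from the residual output in part (h):

```python
import matplotlib.pyplot as plt

# Predicted Y and residuals from the Excel residual output (rounded)
fitted    = [80.944, 94.444, 101.944, 83.944, 91.444, 86.944, 97.444, 104.944, 107.944]
residuals = [-0.944, 0.556, -0.944, 1.056, 0.556, 0.056, -1.444, 1.056, 0.056]

plt.scatter(fitted, residuals)
plt.axhline(0, linestyle="--")          # residuals should scatter evenly around zero
plt.xlabel("Fitted values")
plt.ylabel("Residuals")
plt.title("Residuals vs fitted values")
plt.show()
```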

(h)

RESIDUAL OUTPUT

Observation   Predicted Y    Residuals      Standard Residuals
1             80.94444444    -0.944444444   -1.03030303
2             94.44444444     0.555555556    0.606060606
3             101.9444444    -0.944444444   -1.03030303
4             83.94444444     1.055555556    1.151515152
5             91.44444444     0.555555556    0.606060606
6             86.94444444     0.055555556    0.060606061
7             97.44444444    -1.444444444   -1.575757576
8             104.9444444     1.055555556    1.151515152
9             107.9444444     0.055555556    0.060606061

The standardized residuals all lie between about -1.58 and 1.15, and when they are plotted against normal quantiles in a q-q plot the points fall reasonably close to a straight line. It is therefore reasonable to assume that the error terms are normally distributed.
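
A minimal sketch of the q-q plot of the standardized residuals listed above, assuming scipy and matplotlib are available (not part of the original Excel solution):

```python
import matplotlib.pyplot as plt
from scipy import stats

# Standardized residuals from the Excel residual output (rounded)
std_resid = [-1.0303, 0.6061, -1.0303, 1.1515, 0.6061,
             0.0606, -1.5758, 1.1515, 0.0606]

# Normal q-q plot: points near the reference line support the normality assumption
stats.probplot(std_resid, dist="norm", plot=plt)
plt.title("Normal q-q plot of standardized residuals")
plt.show()
```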

SUMMARY OUTPUT

Regression Statistics
Multiple R          0.995335274
R Square            0.990692308
Adjusted R Square   0.987589744
Standard Error      1.058475494
Observations        9

ANOVA
             df   SS            MS           F             Significance F
Regression   2    715.5         357.75       319.3140496   8.06355E-07
Residual     6    6.722222222   1.12037037
Total        8    722.2222222

               Coefficients    Standard Error   t Stat         P-value       Lower 95%      Upper 95%
Intercept      -199.5555556    11.6405572       -17.14312743   2.52184E-06   -228.0389729   -171.0721382
X Variable 1   0.21            0.008642416      24.2987603     3.19353E-07   0.188852769    0.231147231
X Variable 2   3               0.432120811      6.942502943    0.000442849   1.942638469    4.057361531