Question
This item requires the dataset Utilities.xls which can be found on the subject Interact site.
This dataset gives corporate data on 22 US public utilities. We are interested in forming groups of similar utilities; the objects to be clustered are the utilities, and there are 8 measurements on each utility, described below.
An example where clustering would be useful is a study to predict the cost impact of deregulation. To do the requisite analysis, economists would need to build a detailed cost model of each utility. It would save a considerable amount of time and effort if we could cluster similar types of utilities, build a detailed cost model for just one "typical" utility in each cluster, and then scale up from these models to estimate results for all utilities.
X1: Fixed-charge covering ratio (income/debt)
X2: Rate of return on capital
X3: Cost per KW capacity in place
X4: Annual Load Factor
X5: Peak KWH demand growth from 1974 to 1975
X6: Sales (KWH use per year)
X7: Percent Nuclear
X8: Total fuel costs (cents per KWH)
a. Conduct Principal Component Analysis (PCA) on the data. Evaluate and comment on the results. Should the data be normalized? Discuss what characterizes the components you consider key and justify your answer.
b. Briefly explain the advantages and any disadvantages of using PCA compared to other methods for this task.
The Utilities.xls data:

utility_name   utility    x1     x2    x3    x4     x5      x6    x7     x8
Arizona           1      1.06    9.2   151   54.4   1.6    9077   0.0  0.628
Boston            2      0.89   10.3   202   57.9   2.2    5088  25.3  1.555
Central           3      1.43   15.4   113   53.0   3.4    9212   0.0  1.058
Common            4      1.02   11.2   168   56.0   0.3    6423  34.3  0.700
Consolid          5      1.49    8.8   192   51.2   1.0    3300  15.6  2.044
Florida           6      1.32   13.5   111   60.0  -2.2   11127  22.5  1.241
Hawaiian          7      1.22   12.2   175   67.6   2.2    7642   0.0  1.652
Idaho             8      1.10    9.2   245   57.0   3.3   13082   0.0  0.309
Kentucky          9      1.34   13.0   168   60.4   7.2    8406   0.0  0.862
Madison          10      1.12   12.4   197   53.0   2.7    6455  39.2  0.623
Nevada           11      0.75    7.5   173   51.5   6.5   17441   0.0  0.768
NewEngla         12      1.13   10.9   178   62.0   3.7    6154   0.0  1.897
Northern         13      1.15   12.7   199   53.7   6.4    7179  50.2  0.527
Oklahoma         14      1.09   12.0    96   49.8   1.4    9673   0.0  0.588
Pacific          15      0.96    7.6   164   62.2  -0.1    6468   0.9  1.400
Puget            16      1.16    9.9   252   56.0   9.2   15991   0.0  0.620
SanDiego         17      0.76    6.4   136   61.9   9.0    5714   8.3  1.920
Southern         18      1.05   12.6   150   56.7   2.7   10140   0.0  1.108
Texas            19      1.16   11.7   104   54.0  -2.1   13507   0.0  0.636
Wisconsi         20      1.20   11.8   148   59.9   3.5    7287  41.1  0.702
United           21      1.04    8.6   204   61.0   3.5    6650   0.0  2.116
Virginia         22      1.07    9.3   174   54.3   5.9   10093  26.6  1.306
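For part (a), the normalization question can be checked directly from the table above: the eight measurements sit on very different scales (x6 is in the thousands of KWH while x1 is close to 1), so an unscaled PCA would likely be dominated by sales. The sketch below is only illustrative; the unit itself uses XLMiner, and the file name utilities.csv and lowercase column headings are assumptions about how the table has been exported.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical export of the table above; file name and column labels are assumed.
df = pd.read_csv("utilities.csv")
X = df[["x1", "x2", "x3", "x4", "x5", "x6", "x7", "x8"]]

# The measurements are on very different scales (x6 is in thousands of KWH,
# x1 is close to 1), so standardise before extracting components; without this
# the sales variable would tend to dominate the leading component.
X_std = StandardScaler().fit_transform(X)

pca = PCA()
scores = pca.fit_transform(X_std)

# Variance explained by each component and the loadings that characterise them;
# large absolute loadings indicate which measurements drive a given component.
explained = pd.Series(pca.explained_variance_ratio_, name="var_explained")
loadings = pd.DataFrame(pca.components_.T, index=X.columns,
                        columns=[f"PC{i + 1}" for i in range(X.shape[1])])
print(explained.round(3))
print(loadings.round(2))
```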
2. Naïve Bayes Classifier (10%)

This item requires the dataset UniversalBank.xls which can be found on the subject Interact site. The following is a business analytical problem faced by financial institutions and banks; the objective is to determine the measurements for personal loan acceptance. The dataset UniversalBank.xls contains data on 5000 customers of Universal Bank. The data include customer demographic information (age, income, etc.), the customer's relationship with the bank (mortgage, securities account, etc.), and the customer response to the last personal loan campaign (Personal Loan). Among these 5000 customers, only 480 (9.6%) accepted the personal loan that was offered to them in the earlier campaign.

In this exercise we focus on two predictors: Online (whether or not the customer is an active user of online banking services) and Credit Card (abbreviated CC below; whether the customer holds a credit card issued by the bank), and the outcome Personal Loan (abbreviated Loan below). Partition the data into training (60%) and validation (40%) sets.

a. Create a pivot table for the training data with Online as a column variable, CC as a row variable, and Loan as a secondary row variable. The values inside the cells should convey the count (how many records are in that cell).
b. Consider the task of classifying a customer who owns a bank credit card and is actively using online banking services. Analyse the pivot table and calculate the probability that this customer will accept the loan offer. Note: this is the probability of loan acceptance (Loan=1) conditional on having a bank credit card (CC=1) and being an active user of online banking services (Online=1).
c. Design two separate pivot tables for the training data. One will have Loan (rows) as a function of Online (columns) and the other will have Loan (rows) as a function of CC. Compute the following quantities [P(A | B) means "the probability of A given B"]:
   i. P(CC=1 | Loan=1) (the proportion of credit card holders among the loan acceptors)
   ii. P(Online=1 | Loan=1)
   iii. P(Loan=1) (the proportion of loan acceptors)
   iv. P(CC=1 | Loan=0)
   v. P(Online=1 | Loan=0)
   vi. P(Loan=0)
d. Using the quantities computed in (c), compute the Naive Bayes probability P(Loan=1 | CC=1, Online=1).
e. Based on the calculations above, suggest the best possible strategy for the customer to get the loan.
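Parts (a) to (d) reduce to a handful of counts from the training partition. The sketch below shows the same steps in Python/pandas rather than XLMiner, purely as an illustration: the column names ("CreditCard", "Online", "Personal Loan"), the file handling, and the random 60% split are assumptions about how UniversalBank.xls is laid out.

```python
import pandas as pd

# Hypothetical file and column names; the real workbook may label them differently.
bank = pd.read_excel("UniversalBank.xls")
train = bank.sample(frac=0.6, random_state=1)   # 60% training partition (seed assumed)

# (a) count pivot table: Online as columns, CC and Loan as rows
pivot = pd.crosstab([train["CreditCard"], train["Personal Loan"]], train["Online"])
print(pivot)

# (c) the six quantities needed for Naive Bayes
loan1 = train[train["Personal Loan"] == 1]
loan0 = train[train["Personal Loan"] == 0]
p_cc_1     = (loan1["CreditCard"] == 1).mean()   # P(CC=1 | Loan=1)
p_online_1 = (loan1["Online"] == 1).mean()       # P(Online=1 | Loan=1)
p_loan1    = len(loan1) / len(train)             # P(Loan=1)
p_cc_0     = (loan0["CreditCard"] == 1).mean()   # P(CC=1 | Loan=0)
p_online_0 = (loan0["Online"] == 1).mean()       # P(Online=1 | Loan=0)
p_loan0    = len(loan0) / len(train)             # P(Loan=0)

# (d) Naive Bayes estimate:
# P(Loan=1 | CC=1, Online=1) is approximated by
#   P(CC=1|Loan=1) * P(Online=1|Loan=1) * P(Loan=1)
#   divided by [ that numerator + P(CC=1|Loan=0) * P(Online=1|Loan=0) * P(Loan=0) ]
num = p_cc_1 * p_online_1 * p_loan1
den = num + p_cc_0 * p_online_0 * p_loan0
print("Naive Bayes P(Loan=1 | CC=1, Online=1) =", num / den)
```

The value from (d) will generally be close to, but not identical to, the exact conditional probability read off the pivot table in (b), because Naive Bayes assumes CC and Online are independent given Loan.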
Rationale

This task assesses your progress towards meeting Learning Outcomes 1, 2 and 3:
1. Be able to identify and analyse business requirements for the identification of patterns and trends in data sets.
2. Be able to appraise the different approaches and categories of data mining problems.
3. Be able to compare and evaluate output patterns.
It also partly addresses Learning Outcomes 4 and 5.

Marking criteria (Questions 1 & 2)
HD: The answers are correct and complete, demonstrating that the student has thoroughly understood the specified dataset and the usage of XLMiner. The student supplies insightful observations.
DI: The answers are correct and complete, demonstrating that the student understood the specified dataset and the usage of XLMiner.
CR: The answers are correct and complete, demonstrating that the student understood the specified dataset and the usage of XLMiner.
PS: Answers are correct and complete, demonstrating that the student understood the specified dataset.
FL: Answers are not correct and partially complete.

Presentation

Assignments are required to be submitted in Word format (.doc or .docx), Open Office format (.odf), Rich Text format (.rtf) or .pdf format. Each assignment must be submitted as a single document. Assignments should be typed using 10 or 12 point font. APA referencing style should be used, and a reference list should be included with each assessment item. All required screenshots should be inserted into the document in the appropriate position for each question. Screenshots that are submitted in addition to the assignment document will not be marked.