Programming Assignment – Aggregating ACS PUMS Data
For this assignment, you will work with the ACS PUMS dataset as below to produce several tables which aggregate the data.
Introduction
For this assignment, you will work with a survey dataset and use the matplotlib package to visualize data. The data set you will be working with comes from the 2013 American Community Survey (ACS) data. According to census.gov, ACS "is a mandatory, ongoing statistical survey that samples a small percentage of the population every year -- giving communities the information they need to plan investments and services." [see http://www.census.gov/acs/www/]
More specifically, you will be using the ACS Public Use Microdata Sample (PUMS), which census.gov describes as “files [that] are a set of untabulated records about individual people or housing units.”
You can download the 2013 ACS 1-year PUMS data for Illinois Housing Unit Records here: http://www.census.gov/acs/www/data_documentation/pums_data/
You can also access documentation for the PUMS dataset, including the Data Dictionary, here: http://www.census.gov/acs/www/data_documentation/pums_documentation/
Requirements
You are to create a program in Python that performs the following using the pandas package:
1. Load the ss13hil.csv file that contains the PUMS dataset (assume it is in the current directory) and create a DataFrame object from it.
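As a sketch of the load step, the following uses a tiny hypothetical in-memory sample in place of the real file; in the actual assignment you would call pd.read_csv('ss13hil.csv') directly:

```python
import io
import pandas as pd

# Hypothetical mini-sample standing in for ss13hil.csv (column names
# taken from the PUMS Data Dictionary); the real file has many more
# columns and rows.
sample_csv = io.StringIO(
    "HHT,HHL,ACCESS,WGTP,HINCP\n"
    "1,1,1,50,60000\n"
    "4,2,3,30,25000\n"
)
df = pd.read_csv(sample_csv)  # for the assignment: pd.read_csv('ss13hil.csv')
print(df.shape)  # (rows, columns) of the loaded frame
```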
2. Create 3 tables:
TABLE 1: Statistics of HINCP - Household income (past 12 months), grouped by HHT - Household/family type
Table should use the HHT types (text descriptions) as the index
Columns should be: mean, std, count, min, max
Rows should be sorted by the mean column value in descending order
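One way to build Table 1 is groupby/agg followed by a sort. The sketch below uses a toy frame and a hypothetical subset of the HHT code-to-description mapping; the full mapping comes from the PUMS Data Dictionary:

```python
import pandas as pd

# Toy rows standing in for the PUMS data.
df = pd.DataFrame({
    'HHT': [1, 1, 4, 6, 4],
    'HINCP': [90000, 120000, 30000, 45000, 50000],
})

# Hypothetical subset of the HHT code -> text-description mapping.
hht_map = {1: 'Married couple household',
           4: 'Nonfamily household:Male householder:Living alone',
           6: 'Nonfamily household:Female householder:Living alone'}

# Replace codes with descriptions, group, aggregate, and sort by mean.
table1 = (df.assign(HHT=df['HHT'].map(hht_map))
            .groupby('HHT')['HINCP']
            .agg(['mean', 'std', 'count', 'min', 'max'])
            .sort_values('mean', ascending=False))
print(table1)
```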
TABLE 2: HHL - Household language vs. ACCESS - Access to the Internet (Frequency Table)
Table should use the HHL types (text descriptions) as the index
Columns should be the text descriptions of ACCESS values
Each table entry is the sum of WGTP column for the given HHL/ACCESS combination, divided by the sum of WGTP values in the data. Entries need to be formatted as percentages.
Table should include marginal values ('All' row and column).
Any rows containing NA values in HHL, ACCESS, or WGTP columns should be excluded.
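A weighted frequency table like this can be sketched with pd.crosstab, passing WGTP as the values and normalizing over the whole table. The code/label mappings below are hypothetical subsets; the full lists are in the Data Dictionary:

```python
import pandas as pd

# Toy rows with the three columns Table 2 needs.
df = pd.DataFrame({
    'HHL': [1, 1, 2, 2],
    'ACCESS': [1, 3, 1, 3],
    'WGTP': [50, 30, 10, 10],
})

# Hypothetical subsets of the HHL and ACCESS code -> description mappings.
hhl_map = {1: 'English only', 2: 'Spanish'}
access_map = {1: 'Yes w/ Subsrc.', 3: 'No'}

# Drop NA rows, then cross-tabulate summed weights as a share of total WGTP.
df = df.dropna(subset=['HHL', 'ACCESS', 'WGTP'])
table2 = pd.crosstab(index=df['HHL'].map(hhl_map),
                     columns=df['ACCESS'].map(access_map),
                     values=df['WGTP'], aggfunc='sum',
                     normalize='all', margins=True)
# Format each fractional entry as a percentage string.
table2 = table2.applymap(lambda v: '{:.2f}%'.format(v * 100))
print(table2)
```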
TABLE 3: Quantile Analysis of HINCP - Household income (past 12 months)
Rows should correspond to different quantiles of HINCP: low (0-1/3), medium (1/3-2/3), high (2/3-1)
Columns displayed should be: min, max, mean, household_count
The household_count column contains entries with the sum of WGTP values for the corresponding range of HINCP values (low, medium, or high)
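The quantile split can be sketched with pd.qcut to label each household low/medium/high, then a groupby with named aggregations (toy numbers, not the real PUMS values):

```python
import pandas as pd

# Toy data standing in for HINCP and WGTP.
df = pd.DataFrame({
    'HINCP': [10000, 20000, 40000, 60000, 90000, 150000],
    'WGTP':  [5, 10, 15, 20, 25, 30],
})

# Split HINCP into thirds by quantile; qcut assigns each row a label.
df['quantile'] = pd.qcut(df['HINCP'], 3, labels=['low', 'medium', 'high'])

# Aggregate HINCP stats per quantile; household_count sums the weights.
table3 = df.groupby('quantile').agg(
    min=('HINCP', 'min'),
    max=('HINCP', 'max'),
    mean=('HINCP', 'mean'),
    household_count=('WGTP', 'sum'),
)
print(table3)
```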
3. Display the tables to the screen as shown in the sample output on the last page.
Additional Requirements
1. The name of your source code file should be tables.py. All your code should be within a single file.
2. You need to use the pandas DataFrame object for storing and manipulating data.
3. Your code should follow good coding practices, including good use of whitespace and use of both inline and block comments.
4. You need to use meaningful identifier names that conform to standard naming conventions.
5. At the top of each file, you need to put in a block comment with the following information: your name, date, course name, semester, and assignment name.
6. The output should exactly match the sample output shown on the last page.
What to Turn In
You will turn in the single tables.py file
HINTS
To get the right output, use the following functions to set pandas display parameters:
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
To display entries as percentages, use the applymap method, giving it a string-conversion function as input. The conversion function should take a float value v as input and return a string representing v as a percentage; you can use formatting strings or the format() method.
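The hint above can be sketched as follows (the helper name to_pct is illustrative, not required):

```python
import pandas as pd

# Toy frame of fractions to be displayed as percentages.
df = pd.DataFrame({'a': [0.1234, 0.5], 'b': [0.25, 1.0]})

def to_pct(v):
    """Render a float fraction as a percentage string, e.g. 0.1234 -> '12.34%'."""
    return '{:.2f}%'.format(v * 100)

# applymap applies the conversion element-wise.
print(df.applymap(to_pct))
```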
Output should be the same as the following sample program output:

Sample Program Output
70-511, [semester] [year]
NAME: [put your name here]
PROGRAMMING ASSIGNMENT #7

*** Table 1 - Descriptive Statistics of HINCP, grouped by HHT ***
                                                                        mean            std  count     min      max
HHT - Household/family type
Married couple household                                       106790.565562  100888.917804  25495   -5100  1425000
Nonfamily household:Male householder:Not living alone           79659.567376   74734.380152   1410       0   625000
Nonfamily household:Female householder:Not living alone         69055.725901   63871.751863   1193       0   645000
Other family household:Male householder, no wife present        64023.122122   59398.970193   1998       0   610000
Other family household:Female householder, no husband present   49638.428821   48004.399101   5718   -5100   609000
Nonfamily household:Male householder:Living alone               48545.356298   60659.516163   5835   -5100   681000
Nonfamily household:Female householder:Living alone             37282.245015   44385.091076   8024  -11200   676000

*** Table 2 - HHL vs. ACCESS - Frequency Table ***
sum                                                              WGTP
ACCESS                              Yes w/ Subsrc.  Yes, wo/ Subsrc.      No      All
HHL - Household language
English only                                58.71%             2.93%  16.87%   78.51%
Spanish                                      7.83%             0.52%   2.60%   10.95%
Other Indo-European languages                5.11%             0.18%   1.19%    6.48%
Asian and Pacific Island languages           2.73%             0.06%   0.28%    3.08%
Other language                               0.80%             0.03%   0.14%    0.97%
All                                         75.19%             3.73%  21.08%  100.00%

*** Table 3 - Quantile Analysis of HINCP - Household income (past 12 months) ***
            min      max           mean  household_count
HINCP
low      -11200    37200   19599.486904          1629499
medium    37210    81500   57613.846298          1575481
high      81530  1425000  159047.588900          1578445