Adam T. Bradley

Rhode Island unemployment by city

Different mapping method this time, pulling in Rhode Island unemployment figures by city.


library(maptools)      # readShapePoly(); also pulls in sp and foreign
library(RColorBrewer)  # brewer.pal()
library(classInt)      # classIntervals(), findColours()

ri <- readShapePoly('../map/ri/towns.shp')
#Taken from <a href=""></a>
riunemp <- read.csv('7-unempRI-201210.csv', strip.white=TRUE)
colnames(riunemp) <- c('NAME', 'unemployment')
# Look up a town's unemployment rate by name
unemp <- function(town) {
  riunemp$unemployment[riunemp$NAME==town]
}
# sapply() (rather than lapply()) keeps the result a plain vector
ri@data$unemployment <- sapply(ri@data$NAME, unemp)
#ri@data$unemployment <- nchar(as.character(ri@data$NAME))
pvar <- as.numeric(ri@data$unemployment)
intrvs <- 8
colors <- brewer.pal(intrvs, 'Greens')
class <- classIntervals(pvar, intrvs, style="quantile")
colcode <- findColours(class, colors)
plot(ri, col=colcode)
title("Rhode Island Unemployment Rates\nby Municipality--October 2012")
legend('right', legend=names(attr(colcode, "table")),
       fill=attr(colcode, "palette"), cex=0.6, bty="n")

RI Unemployment Map
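`classIntervals(..., style="quantile")` bins the rates at sample quantiles, so each colour class holds roughly the same number of towns. The same binning can be sketched in base R; the rates below are made up for illustration:

```r
# Quantile binning: cut a vector at its sample quantiles so each of the
# four bins holds roughly the same number of observations.
pvar <- c(5.1, 6.3, 7.0, 7.8, 8.4, 9.2, 10.5, 11.9)  # illustrative rates
breaks <- quantile(pvar, probs = seq(0, 1, length.out = 5))
bins <- cut(pvar, breaks = breaks, include.lowest = TRUE)
table(bins)  # two values per bin
```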

NEA Appropriations by Year

Some more tinkering with R plots and readHTMLTable().

library(XML)  # readHTMLTable()

# loadfonts()
u <- ""
table = readHTMLTable(u, which = 3)
colnames(table) = c("Year", "Appropriation")
table$Year = as.numeric(sub("T2", "", table$Year))
table$Appropriation = as.numeric(sub("\\$", "", gsub(",", "", table$Appropriation)))
table = aggregate(table$Appropriation, by = list(Year = table$Year), FUN = sum)
colnames(table) = c("Year", "Appropriation")
xl = (table$Year%%4) == 0
xl[1] = T
mx = max(table$Appropriation)
yp = c(0, mx/4, mx/2, mx/4 * 3, mx)
yl = as.integer(yp/1e+06)
yl = paste("$", yl, "M", sep = "")
plot(table, yaxt = "n", xaxt = "n", type = "l")
axis(1, at = table$Year[xl], labels = table$Year[xl], las = 2)
axis(2, at = yp, labels = yl, las = 2)
title("Appropriations for the National Endowment for the Arts")
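The Appropriation cleanup above strips the dollar sign and thousands separators before `as.numeric()`; isolated, it looks like this (note that `$` must be escaped as `\\$` in R's regex functions):

```r
# Strip "$" and "," from a formatted dollar string, then coerce to numeric
clean_dollars <- function(x) as.numeric(sub("\\$", "", gsub(",", "", x)))
clean_dollars("$146,019,000")  # 146019000
```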


(Note that the spike in 1976 is the result of a change in the start of the federal fiscal year–FY76 was actually 15 months long.)
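To put FY76 on the same footing as ordinary 12-month years, its appropriation could be rescaled by 12/15; the dollar figure below is hypothetical, purely to show the arithmetic:

```r
fy76_total      <- 87e6                 # hypothetical 15-month appropriation
fy76_annualized <- fy76_total * 12/15   # 12-month equivalent
fy76_annualized
```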

Mapping U.S. Population Density

Another map, this time following this is.R() post.

library(ggplot2)
library(maps)  # map_data()

pDensity <- read.csv("~/R/365/6-USPopulationDensity2009.csv")
pDensity$State = tolower(pDensity$State)
pDensity$State[pDensity$State=='washington dc'] =
    'district of columbia'
colnames(pDensity) = c('region', 'density')
all_states <- map_data("state")
Total <- merge(all_states, pDensity, by="region")
Total <- Total[Total$region!="district of columbia",]
p <- ggplot()
p <- p + geom_polygon(data=Total,
                      aes(x=long, y=lat,
                          group=group,
                          fill=density)) +
    scale_fill_continuous(low = "#bbffbb",
                          high = "#11ff11")
p <- p + theme_bw() + labs(
    fill = "Population Density by State\n(Persons per square mile)",
    title = "U.S. Population Density by State, 2009", x="", y="") +
    scale_y_continuous(breaks=c()) +
    scale_x_continuous(breaks=c()) +
    theme(panel.border = element_blank())

US Population Density Map
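A side effect worth noting: `merge(all_states, pDensity, by="region")` is an inner join, so any map polygon whose region name has no matching density row (or vice versa) is silently dropped. A toy illustration with made-up values:

```r
shapes  <- data.frame(region = c("ohio", "ohio", "utah"), long = c(1, 2, 3))
density <- data.frame(region = c("ohio", "maine"), density = c(282, 43))
merged  <- merge(shapes, density, by = "region")
merged  # only the two "ohio" rows survive; "utah" and "maine" are dropped
```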


Just experimented a bit today with reading HTML tables into R using the XML library:

library(XML)

u = ""
removeFootnotes = function(node) {
    sub("\\[.*\\]", "", xmlValue(node))
}
table = readHTMLTable(u, which = 2, elFun = removeFootnotes,
                      colClasses = c("character",
                                     "character", "FormattedNumber",
                                     "character", "Percent",
                                     "character"))  # sixth class assumed
table = table[1:15, c(F, T, T, F, T, F)]
colnames(table) = c("Country", "Population", "% of world's population")
table
##          Country Population % of world's population
## 1          China  1.347e+09                   19.09
## 2          India  1.210e+09                   17.15
## 3  United States  3.149e+08                    4.46
## 4      Indonesia  2.376e+08                    3.37
## 5         Brazil  1.939e+08                    2.75
## 6       Pakistan  1.815e+08                    2.57
## 7        Nigeria  1.666e+08                    2.36
## 8     Bangladesh  1.525e+08                    2.16
## 9         Russia  1.433e+08                    2.03
## 10         Japan  1.275e+08                    1.81
## 11        Mexico  1.123e+08                    1.59
## 12   Philippines  9.234e+07                    1.31
## 13       Vietnam  8.784e+07                    1.24
## 14      Ethiopia  8.432e+07                    1.19
## 15         Egypt  8.295e+07                    1.18
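The `elFun = removeFootnotes` hook is what keeps Wikipedia-style footnote markers out of the cells; the regex on its own, applied to a plain string (the square brackets must be escaped in R):

```r
strip_footnotes <- function(x) sub("\\[.*\\]", "", x)
strip_footnotes("1,347,350,000[2]")  # "1,347,350,000"
```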

Voter turnout in presidential elections

Some more basic R:

elec = read.csv("3-electionResults.csv")
elec = elec[c(F, T, F, F, F, F, F, F, T)]
elec$Voter.turnout = elec$Voter.turnout * 100
plot(elec, type = "l", col = "forestgreen", lwd = 3, ann = F)
title(xlab = "Year")


##       Year      Voter.turnout
##  Min.   :1896   Min.   :48.9
##  1st Qu.:1924   1st Qu.:53.1
##  Median :1952   Median :56.9
##  Mean   :1952   Mean   :58.3
##  3rd Qu.:1980   3rd Qu.:61.9
##  Max.   :2008   Max.   :79.3
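The `c(F, T, F, ...)` subset above uses logical column indexing: the data frame keeps exactly the columns whose positions are TRUE (here the 2nd and 9th). The same pattern on a toy data frame:

```r
df <- data.frame(a = 1, b = 2, c = 3)
df[c(FALSE, TRUE, TRUE)]  # keeps columns b and c
```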

Mapping cat ownership

Following this post from is.R(), here’s data on state-by-state cat-ownership rates (yoinked from ProQuest Statistical DataSets) plotted on a map.

Not incredibly useful, but it was fun figuring out how to do it.

library(ggplot2)
library(maps)          # map()
library(mapproj)       # required by coord_map()
library(RColorBrewer)  # brewer.pal()

new_theme_empty <- theme_bw()
new_theme_empty$line <- element_blank()
new_theme_empty$rect <- element_blank()
new_theme_empty$strip.text <- element_blank()
new_theme_empty$axis.text <- element_blank()
new_theme_empty$axis.title <- element_blank()
new_theme_empty$plot.margin <- structure(c(0, 0, -1, -1), unit = "lines", valid.unit = 3L,
class = "unit")
stateShapes <- map("state", plot = FALSE, fill = TRUE)
stateShapes <- fortify(stateShapes)
states = read.csv("CatOwnershipByState.csv")
catstates = states$catowning_pct
names(catstates) = tolower(states$State)
stateShapes$pctCats = catstates[stateShapes$region]
myPalette <- colorRampPalette(rev(brewer.pal(11, "Spectral")))
normalize <- function(x) {
    (x - min(x, na.rm = T))/(max(x, na.rm = T) - min(x, na.rm = T))
}
myPalette <- colorRampPalette(myPalette(1000)[round(1000 * normalize(qbeta(1:999/1000,
    1/3, 1/3)))])
myPalette <- myPalette(1000)
mapPlot <- ggplot(stateShapes, aes(x = long, y = lat, group = group, fill = pctCats))
mapPlot <- mapPlot + geom_polygon(colour = "BLACK")
mapPlot <- mapPlot + coord_map(project = "conic", lat0 = 30)
mapPlot <- mapPlot + new_theme_empty
mapPlot <- mapPlot + scale_fill_gradientn("% of households\nwith at least one cat",
    colours = myPalette)
mapPlot <- mapPlot + ggtitle("Cat Ownership by State")

Cat map
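The `qbeta(1:999/1000, 1/3, 1/3)` step is what makes the Spectral ramp nonlinear: Beta(1/3, 1/3) is U-shaped, so its quantile function pushes the outer quantiles hard toward 0 and 1, spending more of the 1000 colours on the extremes of the data range. A quick check of that shape:

```r
q <- qbeta(c(0.1, 0.5, 0.9), 1/3, 1/3)
q[2]         # exactly 0.5, by symmetry of Beta(1/3, 1/3)
q[1] + q[3]  # exactly 1, by the same symmetry
q[1]         # well below 0.1: the lower tail is compressed toward 0
```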

Articles of Secession as a Word Cloud

This is just some quick experimentation with R and RMarkdown, inspired by this blog post.

This code creates the word cloud below from the Confederate states’ ordinances of secession.

library(tm)         # Corpus(), tm_map(), TermDocumentMatrix()
library(wordcloud)  # wordcloud(); loads Rcpp and RColorBrewer
corp = Corpus(DirSource("./secession"))
corp = tm_map(corp, removePunctuation)
corp = tm_map(corp, tolower)
corp = tm_map(corp, removeWords, words = stopwords("en"))
tdm = TermDocumentMatrix(corp)
m <- as.matrix(tdm)
v <- sort(rowSums(m), decreasing = TRUE)
d <- data.frame(word = names(v), freq = v)
words = d[d$freq > 6, ]
colors = brewer.pal(9, "Set1")
wordcloud(words$word, words$freq, scale = c(6, 0.5),
min.freq = 5, max.words = Inf, random.order = FALSE,
rot.per = 0.15, colors = colors)

Word cloud
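Under the hood, the `TermDocumentMatrix`/`rowSums` pipeline is just word counting; for a single document the same frequency table can be built in base R:

```r
txt   <- "the state of the union of states"  # toy document
words <- unlist(strsplit(tolower(txt), "\\s+"))
freq  <- sort(table(words), decreasing = TRUE)
freq  # "of" and "the" lead with two occurrences each
```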

Technological Determinism and the Myth of Self-Reinvention

Film noir sometimes functions as a kind of critique of the claims of ahistoricity of the consumer subject. Noir protagonists are often on the run from their pasts, and we are there to witness the moment when it finally catches up to them, often ending in their death. So the genre stages the failure of individual self-reinvention, and implicitly offers a critique of consumer capitalism and the American Dream. Could this tradition of critique be advanced in the context of today’s subjectivities? What history is repressed by social media?…

The internet is no longer a marginal phenomenon, and the social context you thought you left behind is back with a vengeance. The internet turns out to be a wonderful surveillance device, which certainly doesn’t help those who really need to get away from their families or evade authoritarian governments. Silicon Valley startups like Airbnb and Uber operate like web startups, existing only in the supposed void of cyberspace, until it turns out that you actually need licenses from city governments to run a taxi service, and maybe asking people to run illegal boarding houses could result in large fines for them. Others, like Khan Academy, make fatuous claims that their simple websites solve problems that have vexed educators for decades, if not centuries.

Mike Bulajewski, “Technological Determinism and the Myth of Self-Reinvention”

On Copyright Sharks

Unfortunately, much of the current argument is between different species of corporate shark, between the sharks who want to eat the minnows for free now and the sharks who have extensive copyrights that they are deriving rents from, who want to extend the term to ridiculous lengths. Neither species of shark has a case. But novelists, poets, artists and photographers do. A case not to be ripped off by sharks.

Chris Bertram

The Problem with (Strike) Debt

But why the intense focus on debt and its relief? Debt could be an excellent point of entry into a discussion about many other things. Why so much personal debt? Because wages are stagnant or down, unemployment is high, yet the cost of living continues to rise. Why so much mortgage debt? Because until sometime in 2007, housing inflation (meaning tax-subsidized homeownership) was practically the American national religion. Why so much student debt? Because higher education is too expensive—in fact, it should be free. Etc. But Occupy has inherited a lot of American populism’s obsession with finance as the root of all evil, without connecting it to the rest of the system.

Doug Henwood, “The Problem with (Strike) Debt”