Category Archives: Sports

Slides from Rocky Mtn SABR Meeting

Last Saturday I had the good fortune to present a talk on finding, gathering, and analyzing some sports-related data on the web at the local SABR group meeting.  In case you're not familiar with the "SABR" acronym, it stands for "Society for American Baseball Research"; here's a link to the national organization.  The talk was light on tech and heavy on graphs (predominantly made in R, and in particular ggplot2).  Good times were had by all.  The slides from the talk are given below.  Most of the slides and code are recycled from previous talks, so I apologize in advance if you're already familiar with the content.  It was, however, new to the SABR people.



Filed under Data Mining, Data Science, ggplot2, R, Scraping Data, Sports

Google Trends, R, and the NFL

A week or so ago I saw a tweet about how the NFL lockout was affecting the search traffic for "fantasy football" on Google (using Google Trends).  Basically, the (properly normalized) search traffic on Google Trends was down prior to the end of the lockout.  I decided to explore this a bit myself and chose to look into the RGoogleTrends package for R.  The RGoogleTrends package is basically a convenient way to interact with the Google Trends API using R.  Very cool stuff.  All of my code can be found at my GoogleTrends repo on github.

My first query was to pull the Google Trends data for the past seven or so years using the search term "fantasy football".  The following figure shows the results over time.  It's immediately obvious that the normalized search scores for "fantasy football" were on the decline (2011 over previous years) prior to the end of the lockout; however, it appears that interest has since recovered.
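For the plotting side, here's roughly what that figure looks like in code.  The data frame below is a simulated stand-in for whatever RGoogleTrends returns (the real pull has one normalized score per week), so treat this as a sketch rather than the exact code behind the figure.

library(ggplot2)

## Simulated stand-in for the weekly Google Trends pull
set.seed(1)
trends.df <- data.frame(week = seq(as.Date("2004-01-04"),
                                   as.Date("2011-08-01"), by = "week"))
trends.df$score <- 50 +
  40 * sin(2 * pi * as.numeric(format(trends.df$week, "%j")) / 365) +
  rnorm(nrow(trends.df), 0, 5)

## Normalized search score over time
ggplot(trends.df, aes(x = week, y = score)) +
  geom_line() +
  xlab("week") +
  ylab("normalized search score")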

I then decided to look at the trends for “NFL”.  There isn’t a dramatic decrease in the (normalized) searches for “NFL” prior to the lockout’s end, but you do see a huge spike in searches after the announcement.

A few notes:

  • It would be interesting to align the curves in the last plot by day of week.  That is, it would be nice to compare the trends scores for, say, the 7th Wednesday prior to the start of the NFL preseason across years (a rough sketch of this alignment follows the list).
  • In order to authenticate with the RGoogleTrends package, you can simply log into Gmail (or another Google service) in Firefox and reuse the resulting cookies to pass your username and password.
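Here is a minimal base-R sketch of the alignment idea: given each season's preseason kickoff date, tag every observation with its weekday and the number of weeks back from kickoff, so the 7th Wednesday prior in 2010 lines up with the 7th Wednesday prior in 2011.  The kickoff dates below are placeholders, not the actual NFL schedule.

## Placeholder preseason kickoff dates (swap in the real ones)
kickoffs <- as.Date(c("2010" = "2010-08-08", "2011" = "2011-08-11"))

## Tag each date with its weekday and weeks-before-kickoff for its year
align_to_kickoff <- function(dates, kickoffs) {
  yr <- format(dates, "%Y")
  days_out <- as.numeric(kickoffs[yr] - dates)
  data.frame(date = dates,
             weekday = weekdays(dates),
             weeks_before = floor(days_out / 7))
}

dates <- seq(as.Date("2011-06-01"), as.Date("2011-08-10"), by = "day")
head(align_to_kickoff(dates, kickoffs))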


Filed under Data Viz, ggplot2, R, Sports

NBA, Logistic Regression, and Mean Substitution

[Note: I wrote this on a flight and didn't proofread it at all. You've been warned of possible incoherencies!]

I’m currently sitting at about 32K feet above sea level on my way from Tampa International to DIA and my options are (1) watch a shitty romantic comedy starring Reese Witherspoon, Owen Wilson, et al. or (2) finish my blog post about the NBA data.  With a chance to also catch up on some previously downloaded podcasts, I decided on option (2).

So where was I with the NBA analysis?  I had downloaded some data and was in the process of developing a predictive model.  I'm not going to get into the specifics of this model because it was an incredibly stupid model.  The original plan was to build a logistic regression model relating several team-based metrics (e.g., shots, assists, and blocks per game, field-goal and free-throw percentage, etc.) to a team's overall winning percentage.  I was hoping to use this model as the basis of a model for an individual player's worth.  How?  Not sure.  In any case, I got about half-way through this exercise and realized that this was an incredibly stupid endeavor.  Why?  I'm glad you asked.

Suppose that you gave a survey to roughly 1K males and asked them several questions.  One of the questions happened to be "How tall are you (in inches)?"  The respondents were incredibly sensitive and only about half responded to this particular question.  There were other questions with various levels of missingness as well.  A histogram of the 500 answers to the aforementioned question is given in Figure 1.

Figure 1: A hypothetical sampling of 500 male heights.
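If you want to play along at home, here is a quick simulation of the setup; the normal mean and standard deviation are made up for illustration.

## Simulate 1,000 male heights in inches (made-up parameters)
set.seed(42)
heights <- rnorm(1000, mean = 70, sd = 3)

## Only half of the respondents answer the height question
observed <- heights[1:500]

hist(observed, breaks = 20,
     main = "Hypothetical sample of 500 male heights",
     xlab = "height (inches)")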

One of the goals of the hypothetical survey/study is to classify these males using all of the available data (and then some).  What do I mean by the parenthetical back there?  Well, a buddy of mine suggests that we just substitute the average height for the missing heights in order to make our data set more complete.  Obviously, this isn’t going to change the average height in the data.  Are there any repercussions for doing this?  Consider the variance of the heights.  If we need to estimate the population variance of male heights, we will severely underestimate this parameter.  See Figure 2 for the density estimates of the original 500 heights and the original plus the imputed data.

Figure 2: Density estimates of the original 500 heights + 500 imputed (mean substitution) heights.
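Continuing the simulation from above, mean substitution leaves the sample mean essentially untouched but visibly shrinks the variance; with half of the values imputed at the mean, the variance is roughly halved.

## Mean substitution: fill the 500 missing heights with the observed mean
imputed <- c(observed, rep(mean(observed), 500))

mean(observed); mean(imputed)  # essentially identical
var(observed);  var(imputed)   # variance is roughly halved

## Density estimates: the imputed sample piles mass at the mean
plot(density(imputed), col = "red",
     main = "Original vs. mean-imputed heights")
lines(density(observed), col = "blue")
legend("topright", legend = c("original 500", "original + imputed"),
       col = c("blue", "red"), lty = 1)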

(Alter Ego:  Yo Ryan — WTF are you talking about here?  You're supposed to be talking about the NBA and building an advanced player-evaluation metric!

Me:  I’m getting to that!)

OK, so how does this relate to what I was doing prior to the mean-substitution tangent?  Well, my model relating team metrics to overall winning percentage was an exercise in mean substitution!  The summarized data (e.g., blocks per game or free throw percentage) are averaged over all games, and I'm trying to relate those numbers to n1 wins and n2 losses out of N = n1 + n2 games.  Essentially, I would have N replicates of the averaged data (my predictor variables) and n1 and n2 successes and failures (respectively) in the logistic regression model.  I was ignoring any variation in the individual game statistics that contributed to the individual wins and losses.
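For the curious, the model I'm describing looks something like the sketch below: season-long averages as predictors and aggregated win/loss counts as the binomial response.  The data frame and all of its numbers are hypothetical.

## Hypothetical team-season data: wins/losses plus season-average predictors
nba <- data.frame(wins   = c(55, 48, 41, 30, 22),
                  losses = c(27, 34, 41, 52, 60),
                  fg_pct = c(0.48, 0.47, 0.45, 0.44, 0.43),
                  apg    = c(23.1, 22.4, 21.0, 19.5, 18.9))

## Aggregated binomial logistic regression: each team's N games are
## treated as replicates of the same averaged predictors, which is
## exactly the mean-substitution-style mistake described above
fit <- glm(cbind(wins, losses) ~ fg_pct + apg, data = nba,
           family = binomial)
summary(fit)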

Why didn't I just do a better job and ignore this mistake?  Basically, I felt compelled to offer up this little caveat related to data analysis.  Just because you can write a script to gather data, and perhaps even build a model in something like R, does not guarantee that your results are meaningful or that you know something about statistics!  If you are doing a data analysis, think HARD about any questions that you want to address, study what your particular methods are doing and any subsequent implications of using said methods, and for *$@%'s sake interpret your conclusions in the context of any assumptions.  This isn't an exhaustive list of good data-analytic practice, but it's not going to hurt anything.  Happy analyzing, folks!

As usual, all of the code and work related to this project is available at the github repo.


Filed under Basic Statistics, Imputation, R, Scraping Data, Sports

NBA Analysis: Coming Soon!

I decided to spend a few hours this weekend writing the R code to scrape the individual statistics of NBA players (2010-11 only).  I originally planned to write up a few NBA-related analyses, but a friend was visiting from out of town and, of course, that means less time sitting in front of my computer…which is a good thing!  So in between an in-house concert at my place (video posted soon), the Rapids' first game (a 3-1 win over Portland), brunch, and trivia at Jonesy's (3rd place), I did write some code.  The git repo can be found here on github.

Note that this code is having a little trouble at the moment.  I have no idea why, but it’s throwing an error when it tries to scrape the Bulls’ and the Raptors’ pages.  I’m pretty sure it’s NOT because the Bulls are awesome and the Raptors suck…though I haven’t confirmed that assertion.

In any case, let me know if you have any ideas about what I should do with this data.  Some of the concepts that I’m toying with at the moment include:

  • Comparing the before and after performances of players who were traded at or near the trading deadline, and/or
  • Examining some of the more holistic player-evaluation metrics w/r/t win-loss records for various teams.

Question:  Why didn't you use BeautifulSoup for your scraping?  You seem to be a big proponent of Python — what's up?

Answer:  I wrote about scraping with R vs Python in a previous post.  That little test was pretty conclusive in terms of speed, and R won.  I am not totally convinced that I like the R syntax for XML/HTML parsing, but it is fast.  And my not liking the syntax is probably a result of my not being an XML expert rather than a shortcoming of the XML package itself.
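For context, the R approach boils down to something like this: readHTMLTable() from the XML package parses every HTML table on a page into a list of data frames.  The URL below is a placeholder, not the actual stats page I scraped.

library(XML)

## Placeholder URL: any page containing HTML stats tables
url <- "http://www.example.com/nba/player-stats.html"

## readHTMLTable() returns a list of data frames, one per <table>;
## pick out the one you want by name or position
tables <- readHTMLTable(url, stringsAsFactors = FALSE)
str(tables, max.level = 1)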


Filed under R, Scraping Data, Sports

My Crappy Fantasy Football Draft

I compared the results of my fantasy football draft with the results of more than 1500 mock drafts at the Fantasy Football Calculator (FFC).  I looked at where player X was drafted in our league, subtracted off the average draft position on FFC, and divided by the standard deviation of the draft position of that player on FFC.  In other words, I've computed a 'standardized' draft position for the given player.
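In R, the computation is a one-liner once our league's results are merged with the FFC summaries.  The column names are my own, and all of the numbers are invented for illustration except Randy Moss's (18th pick, 8.8 average draft position, mentioned below).

## Our league's picks and FFC's mock-draft summaries (illustrative numbers)
draft <- data.frame(player   = c("Pierre Thomas", "Randy Moss", "Wes Welker"),
                    our_pick = c(15, 18, 66))
ffc   <- data.frame(player = c("Pierre Thomas", "Randy Moss", "Wes Welker"),
                    adp    = c(28.5, 8.8, 52.3),
                    adp_sd = c(6.2, 3.1, 9.4))

both <- merge(draft, ffc, by = "player")

## Standardized draft position: positive means the player was drafted
## later in our league than his FFC average, i.e., a good deal
both$z <- (both$our_pick - both$adp) / both$adp_sd
both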

How do we interpret this standardized draft position?  Obviously, if we have a positive score, then a player was drafted later in our draft than his average position on FFC.  This would mean that a team owner in our league got a pretty good deal on that player.  Understand?  Dividing by the standard deviation just places all of the draft positions on a standardized scale for comparison purposes.  Here are the results of our draft.

What do we see from this?  Well, my draft sucked.  Most of my boxes in the heat map are negative!  So I drafted my players a little higher than the average draft position on FFC.  In particular, it looks like I picked Pierre Thomas way earlier.

Some positives:  Yurcy picked Randy Moss with the 18th pick and his average draft position on this website was 8.8.  Possibly the biggest winner was Rob’s 6th round pick of Wes Welker…good value there.

I’ll do the same for my league with the boys in Vermont.  Hopefully the results are a little better than what I did with the Princeton gang.

The code is published at github under ffdraft.


Filed under R, Sports

A Rule Change in Major League Soccer?

I have to admit that working with my Major League Soccer data set has been slow going.  There are a few reasons:  (1) I have a full-time job at the National Renewable Energy Lab and (2) the data isn’t quite as “rich” as I initially thought.  As an example, the MLS site doesn’t list the wins and losses for each team by year.  That seems to be a fundamental piece of “sports”-type data, right?  In any case, I did come across something that I can’t seem to answer.  If you know somebody that works with MLS, send ’em my email address and tell them that I want answers, damnit!

So following up on my previous MLS-related post, I wanted to see if I could pinpoint why goals per game has been decreasing in recent years.  My first thought was that with MLS expanding, more US-based players transferring overseas, etc., the overall talent level in MLS has suffered a bit in the more recent years.  One way that this might manifest itself in the data is by having fewer shots "on target" or "on goal".  Therefore, I looked at the number of shots on goal vs the number of shots and also vs the number of goals over the years.  The two figures are given next.

Based on the first figure, one could argue that the shooters are becoming a little less accurate.  That is, the number of shots on target per shot has decreased by about 10% over the course of the league’s lifetime.  Shots on goal per goal seems relatively steady over this same time period.  This might suggest that the league’s strikers are getting slightly worse whereas the quality of the keepers is holding steady.  That, of course, could contribute to the decline of goals per game.
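To make those ratios concrete, here is a sketch of the computation, assuming a per-season data frame of league totals; the column names and numbers are invented.

library(ggplot2)

## Hypothetical per-season MLS totals
mls <- data.frame(year          = 1996:2010,
                  shots         = round(seq(9000, 11000, length.out = 15)),
                  shots_on_goal = round(seq(4500, 4900, length.out = 15)),
                  goals         = round(seq(1100, 1000, length.out = 15)))

## Shooter accuracy and keeper quality, respectively
mls$sog_per_shot <- mls$shots_on_goal / mls$shots
mls$sog_per_goal <- mls$shots_on_goal / mls$goals

ggplot(mls, aes(x = year, y = sog_per_shot)) +
  geom_point() +
  geom_smooth(method = "lm") +
  ylab("shots on goal per shot")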

I also decided to look at the number of assists per goal.  Why?  Well, my logic is that if there are more assists per goal, then there might be better overall team play.  Conversely, a decrease in this number might be a result of teams having one or more stars (hence, more individual brilliance) and fewer of the quality, build-up-type goals.  Make sense?  C'mon, I'm trying here!  Anyway, here is the resulting graph.

Whoa, what in the hell happened there?  The data look a bit suspicious.  Specifically, there seems to be a serious change between the 2002 and 2003 seasons.  So I made a similar graph, but I separated by the different time periods.  Here ya go.

What does this mean?  My hypothesis is that there was a fundamental change to the rules in how assists were recorded between the 2002 and 2003 seasons.  Unfortunately, I can’t confirm this.  I’ve searched the web, read the MLS Wikipedia page, read a good amount of the MLS website, and can’t seem to find anything related to a rules change that might result in this sort of phenomenon.  Sooooo, if you have any ideas, send ’em my way!

This will likely be the last MLS-specific post for a while.  Unless I can find some more data, I'm giving up — their data is just not that interesting.  Notice that I didn't say that this would be my last soccer post.  Hopefully I can scrape some EPL (England's Premiership) data.  Given that their league has been around for more than 15 years, it should be a bit more interesting than the MLS data.

If you’re interested in taking a look at the data and/or code yourself, I’ve created a github repository for your perusal.  Feel free to pass along your comments and/or questions regarding any code — I have thick skin.

So what's next?  I am thinking about comparing my current workflow of (a) scrape with Python and (b) analyze with R to just doing everything in R (e.g., using the XML package).  Hopefully, I can post some time comparisons soon!

Addendum:  According to at least one blogger, the recording of "secondary assists" was changed after the 2002 season.  I'm not sure why they record secondary assists in the first place — I guess MLS wanted to appeal to the hockey people in the early years.  Here is the blogger's take on secondary assists:


Filed under Data Viz, ggplot2, R, Sports

Are MLB Games Getting Longer?

On July 29, 2010, I had a flight from Denver to Cincinnati.  About an hour before boarding, I went to ESPN's website and found a new article by Bill Simmons, a.k.a. The Sports Guy (@sportsguy33 on Twitter).  The basic premise of the article is that a core group of fans is losing interest in Red Sox games this season.  So he assigns percentages to his reasons why people are losing interest (he's a writer, not a statistician).  Anyway, he states that the "biggie", the "killer", etc. is the time of games (his 55%), i.e., baseball games are too damn long.  He gives some data from baseball-reference.com to back up his claim.

So what does this have to do with me flying to Cincinnati from Denver?  Well, being the nerd that I am, I immediately went to baseball-reference.com to see if I could download more data!  As an aside, I’ve been obsessing over learning how to use ggplot2 since my return from the useR2010 conference about two weeks ago.  This seems like a good time to start learning.  Ah shit…it appears that this project is going to be a little harder than I expected.  I could download the data for one team and one season before I boarded, but I wanted the past 30 seasons for all teams that played every season.  I suppose that I would have to write a scraper to collect the data from about 750 web pages.  Sweet, now I have something to do on the flight rather than obsess over whether or not the person next to me will spill over into my seat.

Now I’m on the plane.  I’ve downloaded the html of one webpage so that I can test my python scraper (using BeautifulSoup) and I have the data that Simmons used in his article.  The first thing that I do is make a few graphs using his data.  Here is the first.

He essentially discretized the length of each game into five bins:  A – two hours or less; B – more than two hours but less than two hours and thirty minutes; and so on.  It certainly looks like the relative size of the blue and purple rectangles together is increasing and the golden rectangle seems to be decreasing.  Maybe Simmons is onto something.  Note that this isn't the only figure that I made.  There were quite a few others, but I want to get to the good stuff.
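Incidentally, recreating Simmons's five bins from raw game lengths is a one-liner with cut(); the breakpoints below are his bin edges expressed in minutes, and the example lengths are made up.

## Example game lengths in minutes
len <- c(112, 138, 151, 174, 203, 247, 265)

## Five bins: <=2h, (2h, 2h30], (2h30, 3h], (3h, 4h], >4h
len.cat <- cut(len, breaks = c(0, 120, 150, 180, 240, Inf),
               labels = c("A", "B", "C", "D", "E"))
table(len.cat)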

(Back onboard the plane.  I have two or three terminals open, Textmate with some python code open, R is open, and the dude sitting next to me is sufficiently freaked out by what’s going on.)

OK, this is getting too long.  It turns out that I didn’t finish the scraper on the flight to Cincinnati.  Fortunately there is the return flight and a few ‘down’ hours to work on this project while visiting KY.  Success!  Upon landing at DIA on Monday, I had the scraper (for the test page that I downloaded on Friday) working.  Now I needed to write a script to iterate across all teams and every year since 1970.  I wrote the script, processed the ‘soup’ for all 700+ team/season combos and I give you the following figure!

So are baseball games getting longer?  Well, the preceding figure gives the median length of games (in minutes) for all teams over each season since 1970.  It looks like it is going from blue to red, suggesting an increase in the median length of games.  However, you might also notice that I've added two vertical lines from 1994 through 2004.  This roughly corresponds to the "Steroids Era" in MLB.  It looks like game times have been decreasing a bit since roughly the middle of this era.  Can we look at the data in another way?  Of course, I give you Figure 3!

Here's a different way of displaying the same data as given in the second figure.  I didn't separate by team in this figure, however.  I'm just looking at the median length of games across all teams for each season and added a smooth curve to show the trend in the data.  Note that the peak of the curve corresponds to roughly 1998.  If you recall, Mark McGwire and Sammy Sosa were hitting HRs at record rates and teams were scoring a lot of runs.  And it looked as if their heads might suddenly explode from all of the 'roids that they were taking.

So what do I take from this?  Well, overall, I would say that the length of baseball games in general seems to be on the decline in the most recent years.  Is the same true for just the Red Sox?  Ah ha, I can make that figure.  Here you go.

Interestingly enough, it looks like the smooth trend line for Red Sox games is increasing in recent years whereas it’s decreasing for all other teams.  So maybe this Simmons character is onto something for his beloved Red Sox.

What do I think?  I don't really care.  I just wanted an interesting data set to use so that I could learn a bit more about ggplot2, learn a bit more about scraping data using Python/BeautifulSoup, and kill some time on a flight.  So that's my story.

Note that all of the data was obtained from baseball-reference.com.  Figures were made using the ggplot2 package in R.  Further, I know that I could do more with this data set from a statistical perspective.  That's what I do, I'm a statistician.  But I have a full-time job and just wanted to learn some stuff!

Some final thoughts.  I'm happy to share the R and/or Python code for this project.  Just send me a message on twitter or gmail (twitter name @ gmail) and I'll send it your way.  Also, you'll notice some missing data for the Red Sox, NYY, LAD, and CHC.  Why?  I'm not sure.  Apparently the layout of the HTML code is a bit different from the rest of the pages, and the scraper was returning missing values.  I should also say that I scraped the appropriate pages when teams switched cities.  For example, I was scraping Montreal prior to 2005 for the Washington Nationals data.

Ryan

@rtelmore on Twitter!

Here is the R code:

library(ggplot2)

## Simmons's data: game counts by year and length category
sim.dat <- read.table("baseball_rs_game_times.txt", header = TRUE)

## Stacked bars of game counts by length category over the years
ggplot(data = sim.dat, aes(x = Year, y = Count, fill = Cat)) +
  geom_bar(width = 1, stat = "identity") +
  scale_fill_discrete(name = "length category") +
  scale_x_continuous("year") +
  scale_y_continuous("games")

## Same plot with year as a factor and nicer bin labels
ggplot(data = sim.dat, aes(x = as.factor(Year), y = Count, fill = Cat)) +
  geom_bar(stat = "identity") +
  scale_fill_discrete(name = "length", breaks = unique(sim.dat$Cat),
                      labels = c("(,2]", "(2,2.30]", "(2.30,3]", "(3,4]", "(4,)")) +
  scale_x_discrete("year") +
  scale_y_continuous("games")

ggsave("~/Sports/Simmons/mlb_length_1.png", height = 7, width = 7)

## Counts by length category, faceted by year
ggplot(sim.dat, aes(Cat, y = Count, col = Count, fill = Count)) +
  geom_bar(stat = "identity") +
  facet_wrap(~ Year)

last_plot() +
  scale_y_continuous("number of games") +
  scale_colour_continuous("games") +
  scale_fill_continuous("games") +
  scale_x_discrete("length of games")

ggsave("~/Sports/Simmons/mlb_length_2.png", height = 7, width = 7)

## My analysis: scraped team/season summaries of game lengths
rs.dat <- read.csv("~/Sports/Simmons/length_of_baseball_games_20100802.csv",
                   header = FALSE)
names(rs.dat) <- c("team", "year", "mean_len", "med_len", "std_len", "league", "TNAT")

## Heat map of median game length (minutes) by team and year
ggplot(data = rs.dat, aes(x = year, y = team, fill = med_len)) +
  geom_tile() +
  scale_fill_continuous("minutes")

#+ opts(title = "Median Length of MLB Games by Team (in minutes)")

## Dotted vertical lines bracketing the "Steroids Era"
last_plot() + geom_vline(xintercept = c(1993, 2004), lty = 3)
ggsave("~/Sports/Simmons/mlb_length_3.png", height = 7, width = 7)

## Flag Boston so its trend can be compared against everyone else's
rs.dat$bs <- rs.dat$team == "BOS"

qplot(x = year, y = med_len, data = rs.dat, geom = c("point", "smooth"),
      span = 0.5, colour = bs, ylab = "length of game in minutes")

last_plot() + scale_colour_discrete(name = "Boston?")
last_plot() + geom_vline(xintercept = c(1993, 2004), lty = 2, col = "black")
ggsave("~/Sports/Simmons/mlb_length_5.png", height = 7, width = 7)

## Median game length pooled across all teams, with a smooth trend
qplot(x = year, y = med_len, data = rs.dat, geom = c("point", "smooth"),
      span = 0.5, ylab = "length of game in minutes")

last_plot() + geom_vline(xintercept = c(1993, 2004), lty = 2, col = "red")
ggsave("~/Sports/Simmons/mlb_length_4.png", height = 7, width = 7)


Filed under Basic Statistics, ggplot2, R, Sports