Home: Live in Indiana. Married with child.
Work: Platform Engineer at GitHub.
Play: Basketball. Basketball. Basketball.
Lead developer of a website management system for the University of Notre Dame, now used for hundreds of public websites at the University. Multi-tenant, with asset management and full theme customization. Ruby, Rails, MySQL and Liquid templating. 2007-2008.
Built at Ordered List. An amazingly simple and beautiful content management system. I wrote several articles about developing Harmony. Multi-tenant, with asset management, full theme customization, and an innovative custom data solution for themes and content. Ruby, Rails, MongoDB and Liquid templating. 2010.
Consulted for New Toy, Inc. and Zynga, Inc. (after their acquisition of New Toy) on the backend application that powered their popular “with friends” games (Chess, Words and Hanging at the time). Helped scale the application from thousands of requests per minute and one database server to millions of requests per minute and hundreds of database servers, including the launch of Words with Friends on Facebook. Ruby, Rails, MySQL, (lots of) Memcache and Redis. 2010-2011.
Built at Ordered List. A beautiful, intuitive, hosted, real-time web analytics system. I wrote several articles about developing Gauges. Ruby, Sinatra, Kestrel, EventMachine, ZeroMQ, WebSockets and MongoDB. 2011.
Built at Ordered List. Share presentations without the mess. Most of my work on this was product, performance and maintenance. Ruby, Rails, MongoDB, Postgres, Redis, ImageMagick, Ghostscript and Heroku. 2011.
Worked with a small team of developers (2-5) to build a system for collecting, processing and storing event data. The system powers the repository traffic graphs on GitHub.com and is used internally for several purposes (analysis, archival, etc.). As of January 2016, the system had collected over 25TB of data and was receiving 300-500 requests per second (each request can contain one or more raw events) on a handful of servers. Ruby, Rails, Kestrel, Golang, S3 and Cassandra. 2012-2014.
Haystack is GitHub’s internal exception tracker. When bad things happen on GitHub.com, they go to Haystack, which makes it critical during availability events. In June of 2014, Haystack was struggling with spikes of 30-40 exceptions per second. After a few weeks of performance work, it was handling spikes of 400 exceptions per second on the exact same hardware. The tl;dr was dramatically fewer network calls (à la Fewer and Faster).
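The "fewer network calls" idea boils down to collapsing many small round trips into one. A minimal sketch, using a made-up in-memory cache client (not Haystack's actual code) that counts round trips, assuming a Memcached-style multi-get:

```ruby
# Illustrative only: a fake cache client that counts network round trips,
# standing in for a real Memcached client.
class FakeCache
  attr_reader :round_trips

  def initialize(data)
    @data = data
    @round_trips = 0
  end

  # One network round trip per key.
  def get(key)
    @round_trips += 1
    @data[key]
  end

  # One network round trip for any number of keys.
  def get_multi(keys)
    @round_trips += 1
    @data.slice(*keys)
  end
end

keys = (1..50).map { |i| "exception:#{i}" }
data = keys.to_h { |k| [k, "payload"] }

slow = FakeCache.new(data)
keys.each { |k| slow.get(k) } # 50 round trips

fast = FakeCache.new(data)
fast.get_multi(keys)          # 1 round trip

puts "before: #{slow.round_trips} round trips, after: #{fast.round_trips}"
```

Same data fetched either way; the second version just pays network latency once instead of fifty times, which is where most of the win comes from under spiky load.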
Notifications is one of the most important and highest throughput features on GitHub.com. The feature accounted for half of the storage and over a quarter of the replication load on our primary MySQL cluster (as of June 2014). I worked on application changes that made it super easy to point all notifications queries to a new cluster. Interfaces were created, joins were removed, stats were instrumented, graphs were created and the whole thing went off without a hitch in February 2015. Ruby, Rails, ActiveRecord and SQL. 2014-2015.
Moving notifications to a new cluster (see above) created a new way for GitHub.com to fail. I worked with another developer to make GitHub.com gracefully handle issues with the notifications cluster. Method calls were wrapped with response objects, callers were updated to handle failure and circuit breakers were sprinkled in (see jnunemaker/resilient). Ruby, Rails and more. 2015.
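The pattern of wrapping calls in response objects and guarding them with circuit breakers can be sketched in a few lines. This is an intentionally simplified breaker with hypothetical names, not GitHub's code and not the resilient gem's API:

```ruby
# A call returns a Response instead of raising, so callers handle failure
# explicitly. Illustrative sketch only.
Response = Struct.new(:value, :error) do
  def success?
    error.nil?
  end
end

# Trivial circuit breaker: opens after `threshold` consecutive failures.
class SimpleBreaker
  def initialize(threshold: 3)
    @threshold = threshold
    @failures = 0
  end

  def allow_request?
    @failures < @threshold
  end

  def record_success
    @failures = 0
  end

  def record_failure
    @failures += 1
  end
end

class NotificationsClient
  def initialize(breaker)
    @breaker = breaker
  end

  # Callers always get a Response back, never an exception,
  # and a tripped circuit fails fast without touching the cluster.
  def unread_count(user_id)
    return Response.new(nil, :circuit_open) unless @breaker.allow_request?

    begin
      value = yield(user_id) # the real cluster query would go here
      @breaker.record_success
      Response.new(value, nil)
    rescue => e
      @breaker.record_failure
      Response.new(nil, e)
    end
  end
end

breaker = SimpleBreaker.new(threshold: 2)
client = NotificationsClient.new(breaker)

2.times { client.unread_count(1) { raise "cluster down" } }
result = client.unread_count(1) { 5 }
puts result.error # :circuit_open (fails fast after 2 failures)
```

The real breaker (jnunemaker/resilient) adds windowed error-rate tracking and automatic half-open retries; the sketch only shows the shape of the callers' contract.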
Atom is GitHub’s hackable text editor for the 21st century. Atom.io is the backend that powers Atom’s built-in package management. An Atom user myself, I noticed some slowness when interacting with the package manager in early April 2016. I poked around a bit and found that Atom.io was indeed in need of a boost. After a few rounds of fewer and faster and a little over a week of work, I dropped Atom.io’s p99 request time from ~1-2 seconds to ~90ms.