Cassandra @ Twitter: An Interview with Ryan King

There have been confirmed rumors[1] about Twitter planning to use Cassandra for quite a while. But aside from that post, I couldn’t find any other references.

Twitter is interesting in its own right, and we all know that NoSQL projects love Twitter. So imagine how excited I was when, after posting about the Cassandra 0.5.0 release, I received a short email from Ryan King, the lead of the Cassandra effort at Twitter, simply saying that he would be glad to talk about it.

So without further ado, here is the conversation I had with Ryan King (@rk) about Cassandra usage at Twitter:

MyNoSQL: Can you please start by stating the problem that led you to look into NoSQL?

Ryan King: We have a lot of data, the growth factor in that data is huge and the rate of growth is accelerating.

We have a system in place based on sharded MySQL + memcache, but it’s quickly becoming prohibitively costly (in terms of manpower) to operate. We need a system that can grow in a more automated fashion and be highly available.

MyNoSQL: I imagine you’ve investigated many possible approaches, so what are the major solutions that you have considered?

Ryan King:

  • A more automated sharded MySQL setup
  • Various databases: HBase, Voldemort, MongoDB, MemcacheDB, Redis, Cassandra, HyperTable and probably some others I’m forgetting.

MyNoSQL: What kind of tests have you run to evaluate these systems?

Ryan King: We first evaluated them on their architectures by asking many questions along the lines of:

  • How will we add new machines?
  • Are there any single points of failure?
  • Do the writes scale as well?
  • How much administration will the system require?
  • If it’s open source, is there a healthy community?
  • How much time and effort would we have to expend to deploy and integrate it?
  • Does it use technology which we know we can work with?
  • … and so on.

Asking these questions narrowed down our choices dramatically: everything but Cassandra was ruled out. Given that it seemed to be our best choice, we went about testing its functionality (“can we reasonably model our data in this system?”) and load testing it.

The load testing mostly focused on the write-path. In the medium/long term we’d like to be able to run without a cache in front of Cassandra, but for now we have plenty of memcache capacity and experience with scaling traffic that way.

MyNoSQL: To sum it up, what were the top reasons for going with Cassandra?

Ryan King:

  • No single points of failure
  • Highly scalable writes (we have highly variable write traffic)
  • A healthy and productive open source community

MyNoSQL: Will Cassandra completely replace the current solution?

Ryan King: Over time, yes. We’re currently moving our largest (and most painful to maintain) table — the statuses table, which contains all tweets and retweets. After this we’ll start putting some new projects on Cassandra and migrating other tables.

MyNoSQL: How do you plan to migrate existing data?

Ryan King: We have a nice system for dynamically controlling features on our site. We commonly use this to roll out new features incrementally across our user base. We use the same system for rolling out new infrastructure.

So to roll out the new data store we do this:

  1. Write code that can write to Cassandra in parallel with MySQL, but keep it disabled via the tool I mentioned above
  2. Slowly turn up the writes to Cassandra (we can do this by user groups “turn this feature on for employees only” or by percentages “turn this feature on for 1.2% of users”)
  3. Find a bug :)
  4. Turn the feature off
  5. Fix the bug and deploy
  6. GOTO #2
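
The dual-write ramp-up described above can be sketched in a few lines of Python. This is only an illustration, not Twitter’s actual tooling: `FeatureFlag`, `save_status`, and the hashing scheme are all hypothetical, standing in for the dynamic feature-control system Ryan mentions.

```python
import hashlib

class FeatureFlag:
    """Illustrative stand-in for a dynamic rollout tool: a flag can be
    enabled for specific user groups or for a percentage of all users."""

    def __init__(self, name, groups=(), percentage=0.0):
        self.name = name
        self.groups = set(groups)        # e.g. {"employees"}
        self.percentage = percentage     # 0.0 .. 100.0

    def enabled_for(self, user_id, user_groups=()):
        if self.groups & set(user_groups):
            return True  # "turn this feature on for employees only"
        # Deterministic per-user bucket in [0, 10000), so the same users
        # stay enabled as the percentage is dialed up (1.2% -> buckets < 120).
        digest = hashlib.md5(f"{self.name}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % 10000 < self.percentage * 100

def save_status(user_id, status, mysql, cassandra, flag, user_groups=()):
    """Dual-write path: MySQL remains the system of record, while the
    Cassandra write is 'dark' and controlled entirely by the flag."""
    mysql.append((user_id, status))
    if flag.enabled_for(user_id, user_groups):
        cassandra.append((user_id, status))
```

Because the flag lives outside the code path, step 4 (“turn the feature off”) is just setting the percentage back to zero, with no deploy required.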

Eventually we get to a point where we’re duplicating 100% of our writes and are comfortable that we’re going to stay there. Then we:

  1. Take a backup from the MySQL databases
  2. Run an importer that loads the data into Cassandra

    Some side notes here about importing. We were originally trying to use the BinaryMemtable[2] interface, but we actually found it to be too fast — it would saturate the backplane of our network. We’ve switched back to using the Thrift interface for bulk loading (and we still have to throttle it). The whole process takes about a week now. With infinite network bandwidth we could do it in about 7 hours on our current cluster.

  3. Once the data is imported, we start turning on real read traffic to Cassandra (in parallel to the MySQL traffic), again by user groups and percentages.

  4. Once we’re satisfied with the new system (we’re using the real production traffic with instrumentation in our application to QA the new datastore) we can start turning down traffic to the MySQL databases.
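
The throttling Ryan mentions in the import side note can be sketched as a simple rate limiter. This is a hedged illustration: `write_row` stands in for the real Thrift insert call, and the rate limit is an arbitrary example, not Twitter’s actual figure.

```python
import time

def throttled_import(rows, write_row, max_rows_per_sec=5000):
    """Push rows through a simple rate limiter so a bulk load cannot
    saturate the network. `write_row` is a placeholder for the real
    Thrift client call; the default rate is purely illustrative."""
    interval = 1.0 / max_rows_per_sec
    next_slot = time.monotonic()
    imported = 0
    for row in rows:
        now = time.monotonic()
        if now < next_slot:
            time.sleep(next_slot - now)  # pace writes to the target rate
        write_row(row)
        imported += 1
        next_slot = max(next_slot, now) + interval
    return imported
```

Raising `max_rows_per_sec` trades import time against network load, which matches the observation above: with unlimited bandwidth the same job would finish in hours rather than a week.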

A philosophical note here — our process for rolling out new major infrastructure can be summed up as “integrate first, then iterate”. We try to get new systems integrated into the application code base as early in their development as possible (but likely only activated for a small number of people). This allows us to iterate on many fronts in parallel: design, engineering, operations, etc.

MyNoSQL: Please include anything I’ve missed.

Ryan King: I can’t really think of anything else.

MyNoSQL: Thank you very much!

