Using Redis as a Twitter Ingest Buffer

By Jason Baumgartner | April 06, 2017

Why use Redis?

Redis is a great way to buffer ingest operations in case the database or some other piece of the pipeline fails. Redis is fast. Extremely fast. Up to hundreds of thousands of SET and GET operations per second fast. That's with just one Redis server; since Redis is single-threaded, the load can be spread across multiple Redis servers running simultaneously, one per core. For most raw data ingest, Redis won't even break a sweat against data feeds as large as Twitter's firehose service.
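
Those numbers are easy to sanity-check on your own hardware. Below is a rough Python sketch (the redis-py client and a local Redis on the default port are assumptions; the bundled redis-benchmark utility gives similar numbers with no code at all):

    # Rough throughput check: pipeline SETs against a local Redis.
    # Assumes redis-py (pip install redis) and Redis on localhost:6379.
    import time
    import redis

    r = redis.Redis()
    N = 100_000
    start = time.time()
    pipe = r.pipeline(transaction=False)
    for i in range(N):
        pipe.set(f"bench:{i}", "x")
        if (i + 1) % 1000 == 0:
            pipe.execute()  # flush every 1,000 queued commands
    print(f"{N / (time.time() - start):,.0f} SET ops/sec (pipelined)")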

To put into perspective just how popular Twitter is, the service averages around 6,000 tweets per second. On August 3, 2013, a record was set when users in Japan sent 143,199 tweets in a single second. That is a tremendous amount of data. Even so, given the proper configuration, including a recent multi-core Xeon processor (E5-16xx v3 or E5-26xx v3) and high-quality memory, one server running Redis could have handled that load. A 10 Gbps network adapter would probably be needed in such a situation, since that many tweets would have consumed hundreds of megabytes per second even with compression. A load that heavy from Twitter's streaming API is very atypical, however.
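
A quick back-of-the-envelope calculation supports the bandwidth claim. Assuming an average of roughly 2 KB of JSON per tweet (an assumed figure; real tweet objects vary widely in size), the record second alone works out to:

    # Hypothetical bandwidth estimate for the record second
    peak_tps = 143_199
    avg_tweet_bytes = 2 * 1024  # assumed average JSON size per tweet
    print(f"{peak_tps * avg_tweet_bytes / 1e6:.0f} MB/s")  # ~293 MB/s

That is more than a 1 Gbps link (roughly 125 MB/s) can carry before compression, which is why a 10 Gbps adapter comes into play.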

The Basic Setup

The first part of our ingest process is getting the data from Twitter. Twitter offers a few public streams that provide approximately 1% of the bandwidth of its full firehose stream. Back in 2010, Twitter partnered with Gnip, which is now one of the few companies authorized to resell Twitter data to consumers. So how are we ingesting up to 5% of the full stream? It appears to be a bug in Twitter's filtering methods: by following the top 5,000 Twitter users at once, we receive far more than Twitter's sample stream provides.

Getting back to the process: we connect to the Twitter stream with a Perl script that establishes a compressed connection and receives tweets in JSON format. Each tweet is decompressed from the zlib stream and re-compressed with LZ4, a very fast compression scheme. Each compressed tweet is then pushed onto a Redis list with RPUSH and later retrieved (hopefully very quickly) by another script that uses the LRANGE command to request a block of tweets at once. Once those tweets have been successfully processed, indexed, and stored, the script sends an LTRIM command to Redis to remove the tweets it just processed. This works very well as a single-threaded approach and could be scaled to multiple threads with more advanced methods, which I may cover in a future article if there is interest.
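
Here is a minimal sketch of that producer/consumer pattern, written in Python rather than Perl for brevity; the lz4 and redis packages and the tweets:ingest key name are illustrative assumptions, not the exact production code:

    import json
    import lz4.frame  # pip install lz4
    import redis

    r = redis.Redis()
    TWEET_LIST = "tweets:ingest"  # hypothetical key for the buffer

    def produce(tweet: dict) -> None:
        """Re-compress a decoded tweet with LZ4 and push it onto the list."""
        payload = lz4.frame.compress(json.dumps(tweet).encode("utf-8"))
        r.rpush(TWEET_LIST, payload)

    def consume(batch_size: int = 1000) -> list:
        """Fetch a block of tweets; trim the list only after success."""
        raw = r.lrange(TWEET_LIST, 0, batch_size - 1)
        tweets = [json.loads(lz4.frame.decompress(item)) for item in raw]
        # ... process, index, and store the batch here ...
        r.ltrim(TWEET_LIST, len(raw), -1)  # drop only the processed tweets
        return tweets

The important property is that LTRIM runs only after the batch is safely stored, so a crash between LRANGE and LTRIM re-delivers tweets rather than losing them.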

Dealing with Gremlins

There are always multiple ways for a system to fail. One of the first things that should be discussed at the beginning of a project is which types of failure are acceptable and which are showstoppers. For our purposes, we can tolerate losing some tweets. There are a few ways that could happen: the connection to Twitter could drop (network issues), Twitter itself could suffer a major failure (unlikely given the redundancy built into its network), or there could be a hardware or power failure on our end. Most of these failure points are rare events. The most tweets I have seen our system ingest in one second is close to 1,500. In stress tests, we can handle approximately 5,000 tweets per second before the Redis buffer begins to grow faster than we can drain it. That is with the current single-threaded approach; a multi-threaded solution could probably push that to 20,000+ tweets per second. These numbers are for a single server.
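
A cheap way to spot trouble early is to watch the length of the buffer itself: a steadily growing list means the consumer is falling behind. A minimal watchdog sketch, reusing the hypothetical tweets:ingest key from above:

    import time
    import redis

    r = redis.Redis()
    prev = r.llen("tweets:ingest")
    while True:
        time.sleep(10)
        cur = r.llen("tweets:ingest")
        if cur > prev:
            print(f"backlog growing: {cur} queued (+{cur - prev} in 10s)")
        prev = cur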
