I will show you a technique to impose a rate limit (aka API throttling) on a Ruby web service. I will be using Rack middleware, so you can use this no matter which Ruby web framework you are using, as long as it is Rack-compliant.
This middleware has recently been gemmified; you can install the latest gem with:
sudo gem install jduff-api-throttling
If you prefer the latest source, it can be found at http://github.com/jduff/api-throttling/tree (a fork of http://github.com/dambalah/api-throttling/tree with a number of recent changes).
In your Rack application, simply use the middleware and pass it some options:
use ApiThrottling, :requests_per_hour => 3
This will set up throttling with a limit of 3 requests per hour, using a Redis cache to keep track of the counts. By default, Rack::Auth::Basic is used so requests are limited on a per-user basis.
A number of options can be passed to the middleware so it can be configured as needed for your stack.
:cache => :redis # :memcache and :hash are also supported; you can also pass in an instance of one of those caches, or even Rails.cache
:auth => false # if authentication is handled somewhere else in your stack
:key => Proc.new{|env,auth| "#{env['PATH_INFO']}_#{Time.now.strftime("%Y-%m-%d-%H")}" } # to customize how the cache key is generated
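For instance, inside a Rails app you could hand the middleware the cache Rails already manages, as the :cache option above allows. A minimal sketch (the limit value here is arbitrary):

# Reuse the cache instance Rails already manages instead of opening a new connection
use ApiThrottling, :requests_per_hour => 500, :cache => Rails.cache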
An example using all the options might look something like this:
CACHE = MemCache.new
use ApiThrottling, :requests_per_hour => 100,
                   :cache => CACHE,
                   :auth => false,
                   :key => Proc.new{|env,auth| "#{env['PATH_INFO']}_#{Time.now.strftime("%Y-%m-%d-%H")}" }
This will limit requests to 100 per hour per url ('/home' will be tracked separately from '/users'), storing the counts with MemCache.
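If you want to throttle per user and per url at the same time, you can combine both in a custom key. A sketch (assuming Basic auth is enabled, so the proc receives the auth object):

# Hypothetical key proc: one bucket per user per path per hour
use ApiThrottling, :requests_per_hour => 100,
                   :key => Proc.new{|env,auth| "#{auth.username}_#{env['PATH_INFO']}_#{Time.now.strftime("%Y-%m-%d-%H")}" }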
There are plenty of great resources to learn the basics of Rack, so I will not be explaining how Rack works here, but you will need to understand it in order to follow this post. I highly recommend watching the three Rack screencasts from Remi to get started with Rack.
First, make sure you have the thin webserver installed.
sudo gem install thin
We are going to use the following 'Hello World' Rack application to test our API Throttling middleware.
use Rack::ShowExceptions
use Rack::Lint
run lambda { |env| [200, { 'Content-Type' => 'text/plain', 'Content-Length' => '12' }, ["Hello World!"]] }
Save this code in a file called config.ru and then you can run it with the thin webserver, using the following command:
thin --rackup config.ru start
Now you can open another terminal window (or a browser) to test that this is working as expected:
curl -i http://localhost:3000
The -i option tells curl to include the HTTP headers in the output, so you should see the following:
$ curl -i http://localhost:3000
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 12
Connection: keep-alive
Server: thin 1.0.0 codename That's What She Said

Hello World!
At this point, we have a basic rack application that we can use to test our rack middleware. Now let's get started.
We need a way to record the number of requests users are making to our web service if we want to limit the rate at which they can use the API. Every time a user makes a request, we want to check whether they have gone past their rate limit before we respond, and we also want to record that they have just made a request. Since every call to our web service requires this check-and-record step, we want it to be as fast as possible.
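Stripped down to its essentials, the per-request logic is just a counter check. A minimal sketch with a plain in-memory hash (the names here are hypothetical, and a single-process hash won't work for a real deployment):

username = "joe"                                          # hypothetical caller
requests_per_hour = 100                                   # hypothetical limit
counters = Hash.new(0)                                    # in-memory counts (single process only)
key = "#{username}_#{Time.now.strftime("%Y-%m-%d-%H")}"   # one bucket per user per hour
counters[key] += 1                                        # record this request
over_limit = counters[key] > requests_per_hour            # check before responding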
This is where Redis comes in. Redis is a super-fast key-value database that we've highlighted in a previous blog post. It can do about 110,000 SETs per second and about 81,000 GETs per second. That's the kind of performance we are looking for, since we don't want our rate-limiting middleware to reduce the performance of the web service it protects.
Install the redis ruby client library with:
sudo gem install ezmobius-redis-rb
We are assuming that the web service is using HTTP Basic Authentication. You could use another type of authentication and adapt the code to fit your model.
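For example, if callers identified themselves with a token in the query string instead, you could disable the Basic auth check and key the counts off that token. A sketch (the api_key parameter name is hypothetical):

# Hypothetical token-based keying: skip Basic auth, bucket by api_key param
use ApiThrottling, :requests_per_hour => 100, :auth => false,
                   :key => Proc.new{|env,auth| "#{Rack::Request.new(env).params['api_key']}_#{Time.now.strftime("%Y-%m-%d-%H")}" }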
Our rack middleware will do the following:
- For every request received, increment a key in our database. The key string will consist of the authenticated username followed by a timestamp for the current hour. For example, for a user called joe, the key would be: joe_2009-05-01-12
- If the value of that key is greater than our 'maximum requests per hour' limit, return an HTTP response with a status code of 503, indicating that the user has gone over their rate limit.
- Otherwise, allow the user's request to go through.
Redis has an atomic INCR command that is a perfect fit for our use case. It increments the value of a key by one; if the key does not exist, Redis first sets it to 0 and then increments it, so the first call returns 1. Awesome! We don't even need to write our own logic to check whether the key exists before incrementing it; Redis takes care of that for us.
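You can see this behaviour from the Ruby client; the first INCR on a missing key returns 1:

r = Redis.new
r.incr("joe_2009-05-01-12")   # => 1 (key did not exist, so Redis started it at 0)
r.incr("joe_2009-05-01-12")   # => 2 (each call increments atomically)

In the middleware itself, the check looks like this: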
r = Redis.new
key = "#{auth.username}_#{Time.now.strftime("%Y-%m-%d-%H")}"
r.incr(key)
return over_rate_limit if r[key].to_i > @options[:requests_per_hour]
If our redis-server is not running, rather than throwing an error that affects all our users, we let every request pass through by catching the exception and doing nothing. That means that if your redis-server goes down you are no longer throttling your web service, so make sure it's always running (using monit or god, for example).
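In code this is just a rescue around the Redis calls, the same pattern you will see in the full listing below:

begin
  r = Redis.new
  r.incr(key)
rescue Errno::ECONNREFUSED
  # redis-server is down: skip throttling rather than erroring for every user
end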
Finally, we want anyone who might use this Rack middleware to be able to set their limit via the requests_per_hour option.
The full code for our middleware is below. You can also find it at github.com/dambalah/api-throttling.
require 'rubygems'
require 'rack'
require 'redis'

class ApiThrottling
  def initialize(app, options={})
    @app = app
    @options = {:requests_per_hour => 60}.merge(options)
  end

  def call(env, options={})
    auth = Rack::Auth::Basic::Request.new(env)
    if auth.provided?
      return bad_request unless auth.basic?
      begin
        r = Redis.new
        key = "#{auth.username}_#{Time.now.strftime("%Y-%m-%d-%H")}"
        r.incr(key)
        return over_rate_limit if r[key].to_i > @options[:requests_per_hour]
      rescue Errno::ECONNREFUSED
        # If redis-server is not running we simply do not throttle the API.
        # It's better to have your service up and running but not throttling
        # than to have it throw errors for all users.
        # Make sure you monitor your redis-server so that it's never down;
        # monit is a great tool for that.
      end
    end
    @app.call(env)
  end

  def bad_request
    body_text = "Bad Request"
    [ 400, { 'Content-Type' => 'text/plain', 'Content-Length' => body_text.size.to_s }, [body_text] ]
  end

  def over_rate_limit
    body_text = "Over Rate Limit"
    [ 503, { 'Content-Type' => 'text/plain', 'Content-Length' => body_text.size.to_s }, [body_text] ]
  end
end
To use it on our 'Hello World' rack application, simply add it with the use keyword and the :requests_per_hour option:
require 'api_throttling'

use Rack::Lint
use Rack::ShowExceptions
use ApiThrottling, :requests_per_hour => 3
run lambda { |env| [200, {'Content-Type' => 'text/plain', 'Content-Length' => '12'}, ["Hello World!"]] }
That's it! Make sure your redis-server is running on port 6379 and try making calls to your API with curl. The first 3 calls will be successful, but the next ones will be rejected because you've reached the limit we set:
$ curl -i http://joe@localhost:3000
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 12
Connection: keep-alive
Server: thin 1.0.0 codename That's What She Said

Hello World!

$ curl -i http://joe@localhost:3000
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 12
Connection: keep-alive
Server: thin 1.0.0 codename That's What She Said

Hello World!

$ curl -i http://joe@localhost:3000
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 12
Connection: keep-alive
Server: thin 1.0.0 codename That's What She Said

Hello World!

$ curl -i http://joe@localhost:3000
HTTP/1.1 503 Service Unavailable
Content-Type: text/plain
Content-Length: 15
Connection: keep-alive
Server: thin 1.0.0 codename That's What She Said

Over Rate Limit