In my earlier blog post we saw how Redis connection pools can improve performance in a multi-threaded/multi-process application like Rails. Now let's look at another common scenario where Redis becomes a bottleneck, and how to optimize it.

I was designing a caching system and ran into a problem: I wanted to delete a bunch of keys on a certain event, and it turned out to be much slower than I expected.

Intuitively I assumed Redis would be fast enough to handle it with sub-millisecond response times. It turns out Redis uses a client-server model, and each command waits for its response. So if you send each delete separately, the round-trip latency adds up quickly, even more so if Redis is running on a different server or a managed offering.
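To see why, here's a rough back-of-envelope sketch. The 1 ms round-trip time is an assumption for illustration; real numbers depend on your network:

```ruby
# Each command pays one full network round trip when sent sequentially.
commands = 100_000
rtt_ms   = 1.0 # assumed round-trip time to a remote Redis, in milliseconds

total_seconds = commands * rtt_ms / 1000.0
puts total_seconds # => 100.0, i.e. over a minute and a half spent just waiting
```

Even a very fast server can't help here; the time is spent on the wire, not in Redis.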

Redis pipelines can be used in such cases to fire multiple commands at once without waiting for each individual response; all the replies are read together at the end. This also improves throughput (ops/sec).

Here's a simple benchmark on a local Redis instance:

require 'redis'
require 'benchmark'

REDIS = Redis.new

def set_data
  100_000.times do |i|
    REDIS.set("key-#{i}", i)
  end
end

Benchmark.bm do |x|
  set_data
  x.report('delete sequential') do
    100_000.times do |i|
      REDIS.del("key-#{i}")
    end
  end

  set_data
  x.report('delete pipeline') do
    REDIS.pipelined do |pipeline|
      100_000.times do |i|
        pipeline.del("key-#{i}")
      end
    end
  end
end

And the results are as expected: roughly a 5x improvement in response times on localhost, and the gap only widens over a remote connection where each round trip costs more.

                        user     system      total        real
delete sequential   2.235531   1.198560   3.434091 (  5.057582)
delete pipeline     1.057324   0.026759   1.084083 (  1.099450)

More details and performance charts on pipelines are available in the official Redis pipelining documentation.