By: SGr33n

Nginx: FastCGI cache & Redis

July 6, 2016
Nginx Redis WordPress

Hi,
I'm going to install Redis on my Nginx server. I've already configured the FastCGI cache. Is Redis an alternative to the FastCGI cache, or is it best to run them both?

I thought they could run together, but in the WordPress Helper plugin I see that I have to choose between Redis and the FastCGI cache. Does this mean that they cannot run together?

Thanks!

4 Answers

The primary difference between these two is in what gets cached. Using Redis with WordPress caches the results of common database queries and speeds up page rendering. The FastCGI cache instead caches the whole page after it has been generated. Neither is necessarily better than the other; it's more a matter of what works best for your particular use case. A bit of searching turned up this article, which provides a good comparison of the two solutions.

@SGr33n

The answer really depends on your specific needs. In either case, you need only use one or the other, not both.

FastCGI Cache is beneficial when you're in a single-server environment and don't need to share the cache with other servers in a cluster. It can be slightly faster than Redis, depending on the setup, but it only benefits the server handling the requests and is not accessible beyond that server.
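For reference, a FastCGI page-cache setup in NGINX looks roughly like the sketch below. The zone name `WORDPRESS`, the cache path, and the timings are illustrative values, not taken from this thread — adjust them to your own setup:

```nginx
# In the http {} block: define an on-disk cache zone (path and sizes are examples)
fastcgi_cache_path /var/run/nginx-cache levels=1:2 keys_zone=WORDPRESS:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

# In the server {} block that handles PHP:
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php-fpm.sock;   # adjust to your PHP-FPM socket

    fastcgi_cache WORDPRESS;                   # use the zone defined above
    fastcgi_cache_valid 200 301 302 60m;       # cache successful responses for an hour

    # Expose HIT/MISS/BYPASS so you can verify caching with curl -I
    add_header X-FastCGI-Cache $upstream_cache_status;
}
```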

On the other hand, Redis can be set up to run alongside NGINX, configured as an NGINX module and used by NGINX, and can be accessed by more than a single server. So if you grow into a cluster and need to share cached data across multiple servers, simply connecting to Redis from each server in the cluster gives you that access.

Additionally, Redis can be installed on a completely separate server (i.e. a cache-only server), which lets you separate your cache server from your web server. You can also extend Redis into a cluster pool and shard data across multiple servers while still being able to access it from the primary without having to connect to each one individually.

At the end of the day, if you only plan on running a single server, I'd stick with FastCGI Cache for simplicity unless you have a true need for Redis. If you plan to set up a cluster of servers, such as:

  • 2x load balancers (handle incoming requests and direct them to the web servers)
  • 2x web servers (mirrored)
  • 2x database servers (master/slave replication)
  • 2x caching servers (Redis)

... Redis is a very viable option.

Note: The above is a very basic cluster and an over-simplified example. There's a lot of configuration that goes into setting that up, so it's not as simple as deploying eight servers.

Thank you both @ryanpq & @jtittle! I appreciate it.
I just thought that I could use FastCGI Cache for page caching and Redis as an object cache (in place of Memcached). Is that definitely not a possible (good) option?

@SGr33n

You can use multiple caching methods/mechanisms, though ultimately the benefits will only show up in performance testing. Doing a few dozen page refreshes isn't going to provide the proper metrics to distinguish whether one is better than the other.
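If you want to try that combination, the object-cache side is typically handled by a WordPress plugin rather than by NGINX. As a sketch, assuming you use WP-CLI and the Redis Object Cache plugin (both assumptions on my part — the thread mentions a different helper plugin):

```shell
# Install and activate the Redis Object Cache plugin (assumed choice of plugin)
wp plugin install redis-cache --activate

# Drop its object-cache.php into wp-content/ so WordPress uses Redis
wp redis enable

# Verify the connection to the Redis server
wp redis status
```

The FastCGI page cache in NGINX keeps working independently of this; the two caches sit at different layers (whole pages vs. database query results).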

I have worked with clients that make full use of multiple forms of caching and, at the same time, I've worked with numerous others where multiple forms of caching are overkill and simply bog down the server.

What works really well on a Droplet with 4-8 CPUs and 12-16GB of RAM may not work all that well on a Droplet with 512MB-1GB of RAM and 1-2 CPUs. The same applies to any configuration in between or beyond.

In most cases, I recommend distributing your load across multiple servers. This means that your web server (NGINX/Apache/Caddy), database server (MySQL/MariaDB/Percona), and caching server (Redis/Memcached) are independent of one another and communicate over the private network DigitalOcean provides. At the same time, each is configured with a firewall (such as ufw on Ubuntu) that blocks public requests to the database and caching servers, requiring you to use the private IPs to communicate.
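As a sketch of the firewall part on Ubuntu with ufw — the private IP below is a made-up example; substitute your web server's actual private address:

```shell
# On the database server: allow MySQL only from the web server's private IP
sudo ufw allow from 10.132.0.10 to any port 3306 proto tcp

# On the caching server: allow Redis only from the web server's private IP
sudo ufw allow from 10.132.0.10 to any port 6379 proto tcp

# Deny all other inbound traffic, then enable the firewall
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw enable
```

With rules like these, the database and caching servers answer only over the private network, while the web server stays publicly reachable.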

In such a case, I'd recommend Redis instead of NGINX FastCGI Caching due to the reasons I mentioned above (i.e. cross-server communication).

It sounds complicated, but a three-server cluster like the one described above isn't all that hard to set up and manage -- and it leaves room for growth and expansion, whether for a single site or several.

Note: You can run free performance/load testing by creating a free account at Loader.io (URL below). It lets you send up to 10,000 clients to one target (for instance, http://yourdomain.ext), which will give you an indication of how well each setup performs. Simply set up FastCGI caching, run a test, then disable FastCGI caching, enable Redis, run another test, and compare the two.
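For a quick sanity check between Loader.io runs, you can also time single requests from the command line (yourdomain.ext below stands in for your own site):

```shell
# Print total request time and HTTP status for one page load
curl -s -o /dev/null -w '%{time_total}s %{http_code}\n' http://yourdomain.ext/
```

Run it a couple of times under each configuration; the second request should be noticeably faster once the cache is warm.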

https://loader.io/pricing
