I’ve worked with WordPress for years now and there’s really no single concrete solution; it’s a process of trial and error. While NGINX is, IMO, a better web server than Apache, and running PHP-FPM is more efficient than Apache’s PHP module, moving to NGINX won’t be a magic fix-all until you find the bottleneck that is causing the issue(s). Maybe it’s Apache, maybe it’s MySQL, or perhaps it’s a series of poorly coded WordPress plugins (and there are quite a few!).
Ultimately, you need to spend some time looking over your logs (Apache, MySQL, PHP, the WordPress error logs, etc.) and see if you can find a place to start. Since WordPress is database-driven, I would start by looking over your MySQL configuration and use a tool such as MySQL Tuner to see what it reports. MySQL Tuner will give you some insight into what may be causing issues on the database end, in a well-laid-out format.
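Before tuning anything, it also helps to have the slow query log enabled so MySQL Tuner (and you) have real data to work from. A minimal `my.cnf` fragment might look like this; the log path and threshold are illustrative, so adjust them for your setup:

```ini
# my.cnf fragment: log slow queries so you can see which
# plugins/queries are actually hurting you
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 2   # log anything slower than 2 seconds
```

Let it collect data for a day or so of normal traffic before drawing conclusions, then restart MySQL after any config change.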
As far as caching goes, Redis is excellent, though the default configuration caps it at only 64MB of RAM (unless the configuration is modified, of course). With your traffic levels, 64MB would most likely do more harm than good. To really benefit from Redis, you need to install it on its own droplet (i.e. Redis only: no web server, no database server, etc.). Given your traffic levels, 2-3 droplets of 512MB-1GB each would be ideal (provided, of course, that the WordPress plugin allows you to configure multiple servers; otherwise I’d shoot for a single 1-2GB instance).
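Raising that cap is a two-line change in `redis.conf`. The values below are illustrative; size `maxmemory` to leave headroom on the droplet, and `allkeys-lru` is a sensible eviction policy when Redis is acting purely as a cache:

```ini
# redis.conf fragment: raise the memory cap and evict
# least-recently-used keys once it's full
maxmemory 768mb
maxmemory-policy allkeys-lru
```

Restart Redis (or use `CONFIG SET` at runtime) for the change to take effect.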
To expand on caching: even SSDs are going to see high I/O at higher traffic levels until you make the move to separate content from database, and in the process you should ideally offload your cache to a RAM disk instead of serving the cached files from disk. This will alleviate some of the load, and RAM is still faster than SSDs. The RAM disk would be set up on your web server (NGINX or Apache); you’d simply mount the cache directory and serve from it. You’ll still be able to flush the cache as long as you give the directory proper permissions. That should not be 0777 (i.e. world-writable); since it’s a directory, aim for 0755 owned by the web server user (directories need the execute bit to be traversed, so 0644 would break access). You get the speed boost offered by RAM and, at the same time, alleviate the constant I/O requests for the cached (HTML?) files.
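A RAM disk on Linux is just a tmpfs mount over the cache directory. A sketch of the `/etc/fstab` entry, assuming a hypothetical cache path of `/var/www/cache` and a `www-data` web server user (adjust path, size, and ownership to your setup):

```
# /etc/fstab entry: mount the cache directory as a 512MB RAM disk,
# owned by the web server user, flushable but not world-writable
tmpfs  /var/www/cache  tmpfs  defaults,size=512m,uid=www-data,gid=www-data,mode=0755  0  0
```

Run `mount /var/www/cache` (or reboot) to activate it. Keep in mind tmpfs contents are lost on reboot, which is fine for a cache but nothing else.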
Further expanding on Redis, you could also set up one Redis instance for WordPress, which handles the object cache, and another to handle MySQL query caching. This ensures that if the same query is issued again within X seconds, it’ll be pulled from Redis instead of hitting the database. After those X seconds, the next request hits the database, the result is cached again, and the cycle repeats.
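The pattern above is just time-to-live (TTL) caching; in Redis it’s done with `SETEX`/`EXPIRE`. Here’s a minimal shell sketch of the logic using a plain file standing in for Redis, purely to illustrate the hit/miss cycle (paths and the TTL value are made up):

```shell
#!/bin/sh
# Sketch of TTL-based query caching: serve the cached result while
# it's fresh, "hit the database" again once it expires.
CACHE=/tmp/demo_query_cache
TTL=60  # seconds; the "X seconds" window (Redis: SETEX key 60 value)

fetch_with_cache() {
  now=$(date +%s)
  if [ -f "$CACHE" ] && [ $(( now - $(stat -c %Y "$CACHE") )) -lt "$TTL" ]; then
    cat "$CACHE"                           # cache hit: MySQL is skipped
  else
    printf 'rows-from-mysql\n' > "$CACHE"  # cache miss: query MySQL, store result
    cat "$CACHE"
  fi
}

fetch_with_cache
fetch_with_cache  # second call within the TTL is served from the cache
```

With real Redis, the WordPress object-cache plugin handles this for you; the sketch is only to show why repeated identical queries stop touching MySQL.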
If I had a basic “starting point” recommendation, I’d go with the following, simply based on your traffic levels as of your OP. Please keep in mind that this is a base recommendation. Since I am not able to see what’s actually going on, or confirm the bottleneck, this should only be used as a starting point to gauge what you may be better off with.
Setup #01 - Basic
1x - 4GB Droplet - NGINX & PHP-FPM
1x - 8GB Droplet - MariaDB
1x - 1GB Droplet - Redis (for WordPress / PHP Object Caching)
1x - 1GB Droplet - Redis (for MySQL Query Caching)
This setup does not offer redundancy at all, so a failure here means the same thing as a failure in your current situation: downtime. The benefit, however, is separation. With such a configuration, if you are still seeing high page load times, there’s an issue with your configuration, and tweaking & tuning is the only way to jump over the hurdle.
Setup #02 - Advanced
1x - 2GB Droplet - Varnish Cache (sits in front of the NGINX web server)
2x - 4GB Droplets - NGINX & PHP-FPM (synchronized, used to load balance)
2x - 8GB Droplets - MariaDB (Master/Slave Setup)
2x - 2GB Droplets - Redis (same as above, though 2GB instances)
You could even toss in another 2GB NGINX or HAProxy droplet so that Varnish isn’t left to handle the load balancing and can focus on doing what it does best: caching. You could also look at Squid.
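For the load-balancing layer, the NGINX side is a small `upstream` block. This is only a sketch with made-up private IPs for the two web droplets; in Setup #02 Varnish would sit in front of this:

```nginx
# nginx.conf fragment: round-robin across the two NGINX/PHP-FPM droplets
upstream wordpress_backend {
    server 10.0.0.11:80;
    server 10.0.0.12:80;
}

server {
    listen 80;
    location / {
        proxy_pass http://wordpress_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

NGINX defaults to round-robin; you can add `least_conn;` inside the upstream block if one droplet tends to hold slower requests.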
That said, all of this is subjective until we know what the real issue is.