You can use multiple caching methods/mechanisms, though ultimately, the benefits will only show up in proper performance testing. Doing a few dozen page refreshes won't provide the metrics needed to distinguish whether one is better than the other.
I have worked with clients that make full use of multiple forms of caching and, at the same time, I've worked with numerous others where multiple forms of caching are overkill and simply bog down the server.
What works really well on a Droplet with 4-8 CPUs and 12-16GB of RAM may not work all that well on a Droplet with 1-2 CPUs and 512MB-1GB of RAM. The same applies to any configuration between those sizes or beyond.
In most cases, I recommend distributing your load across multiple servers. This means that your web server (NGINX/Apache/Caddy), database server (MySQL/MariaDB/Percona), and caching server (Redis/Memcached) are independent of one another and communicate over the private network provided by DigitalOcean. At the same time, each is configured with a firewall (such as ufw on Ubuntu) that blocks public requests to the database and caching servers, thus requiring that you use the private IP to communicate.
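As a rough sketch of the firewall side, assuming ufw on Ubuntu: the `10.132.0.0/16` subnet and `10.132.0.4` address below are placeholders, so substitute your Droplets' actual private network range and IPs.

```shell
# On the database server (MySQL on 3306) -- run as root or via sudo.
# 10.132.0.0/16 is a placeholder; use your Droplets' actual private subnet.
ufw default deny incoming
ufw allow ssh
ufw allow from 10.132.0.0/16 to any port 3306 proto tcp
ufw enable

# On the caching server (Redis on 6379), same idea:
ufw default deny incoming
ufw allow ssh
ufw allow from 10.132.0.0/16 to any port 6379 proto tcp
ufw enable

# From the web server, verify you can reach Redis via its private IP
# (10.132.0.4 is a placeholder for the caching server's private address):
redis-cli -h 10.132.0.4 ping   # should reply PONG
```

With rules like these, the database and caching servers simply won't answer on their public IPs, which is the point of keeping them on the private network.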
In such a case, I'd recommend Redis instead of NGINX FastCGI caching for the reason mentioned above: Redis supports cross-server communication, while NGINX's FastCGI cache lives on the web server itself.
It sounds complicated, but a three-server cluster like the above-mentioned isn't all that hard to set up and manage -- and it allows room for growth and expansion for a single site or several.
Note: You can run free performance / load testing by creating a free account over at Loader.io (URL below). It'll allow you to send up to 10,000 clients to one target (for instance http://yourdomain.ext), which will give you an indication of how well each performs. Simply set up FastCGI caching, run a test, then remove FastCGI caching, enable Redis, run another test, and compare the two.
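Alongside Loader.io, you can get a quick sanity check from the command line with curl. This is a minimal sketch (the URL and request count are placeholders, not part of any tool): it averages curl's reported total transfer time over N requests.

```shell
# Average curl total time over N requests to a URL (rough sketch, not a
# substitute for a real load test -- it sends requests one at a time).
# Usage: avg_time <url> [requests]
avg_time() {
  url=$1
  n=${2:-20}
  for _ in $(seq "$n"); do
    # -w '%{time_total}' prints the full transfer time in seconds
    curl -s -o /dev/null -w '%{time_total}\n' "$url"
  done | awk '{ sum += $1 } END { printf "%.4f\n", sum / NR }'
}

# e.g. with FastCGI caching enabled, then again with Redis enabled:
# avg_time http://yourdomain.ext 50
```

Run it once per caching setup and compare the two averages; the Loader.io numbers under concurrent load will be more telling, but this catches obvious regressions quickly.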