Using the API with a Floating IP would be the better approach, since load balancing isn't really ideal if you simply want to direct all traffic to a single server until it goes down. That way you could at least keep the same IP (as long as the Droplets are in the same datacenter) -- but that's just a suggestion (unless it's what you're already doing).
The methods available when using NGINX as a load balancer are Round Robin, IP Hash, and Weighted. Round Robin connects to the servers in the order they're listed, skipping any that are down; IP Hash tries to send a user back to the same server they first hit; and Weighted distributes requests according to the weights you assign each server -- but it still distributes the requests.
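As a rough sketch of how those methods look in an NGINX config (the hostnames here are placeholders, not anything from your setup):

```nginx
# Round Robin is the default: requests cycle through the listed servers,
# skipping any that are down.
upstream backend {
    # ip_hash;              # uncomment for IP Hash (sticky by client IP)
    server server01.example.com;           # weight defaults to 1
    server server02.example.com weight=3;  # Weighted: receives ~3x the requests
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```

Only one method applies per upstream block, so you'd pick whichever fits and drop the rest.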
That said, depending on the type of caching, if distributing requests is an issue I'd look into using a cache that doesn't depend on a single server -- if possible. Something like Redis or Memcached. That way you're not limited to caching on one server; you could deploy one or more caching servers and allow both app servers to read from and write to the master.
LB => Server01 | Server02 => Redis/Memcached
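To make that concrete, here's a minimal sketch of the cache-aside pattern both app servers would use against the shared cache. `FakeCache` is a stand-in for a real Redis/Memcached client (e.g. redis-py, whose client also exposes `get`/`set`); `fetch_user` and `load_from_db` are hypothetical names for illustration:

```python
class FakeCache:
    """Stand-in for a shared Redis/Memcached client (hypothetical)."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value):
        self._store[key] = value


def fetch_user(cache, user_id, load_from_db):
    """Return user data, checking the shared cache before the database."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached              # cache hit: no database round trip
    value = load_from_db(user_id)  # cache miss: load and populate the cache
    cache.set(key, value)
    return value
```

Because both Server01 and Server02 talk to the same cache, a hit doesn't depend on which server the load balancer happened to pick for that request.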
Ideally I'd shoot for Redis if possible, though you can tinker with both and see which you like best and which best suits the needs of your application. It does add another server (or multiple servers) to the mix, but as you scale, that's the direction you'll be heading anyway.