In most cases, floating private IPs (also called virtual IPs, or VIPs) are a relic of physical infrastructure, where there were many single points of failure: the network interface, a single disk, the power supply, and so on. Modern virtualized environments abstract much of this away. For example, migrating a guest between hosts to work around a hardware issue can take about as long as a VIP failover, without the hassle, and often without noticeable interruption depending on the application.
While virtualization helps in some cases, the active/standby model remains common and is still sometimes necessary. In my opinion, though, it is usually a tremendous waste of resources and simply moves an engineering problem into an operational one. In most of these instances, a simple proxy or load balancer can fill the gap.
One of the most common use cases for active/standby with a VIP is an RDBMS such as MySQL, Postgres, or SQL Server. While proprietary clustering solutions and complex layer 7 proxies exist for these databases (some can even split reads from writes), a simple layer 4 configuration with haproxy can give clients and replicas a single, stable endpoint for reaching the current master. This moves the often complicated burden of split-brain detection and the like from many servers to one.
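As a sketch of what that looks like, here is a minimal layer 4 haproxy configuration of the kind described above. The hostnames, ports, and health-check endpoint are placeholders; the key assumption is that each database node runs a small check service (port 9200 here, in the style of the common clustercheck pattern) that reports healthy only on the node that is currently the writable master:

```
# Minimal layer 4 (TCP) pass-through for a database master.
# Clients and replicas connect here instead of chasing a VIP.
listen db-master
    bind *:3306
    mode tcp
    option tcpka
    # Health checks decide which node receives traffic. The check
    # endpoint (a hypothetical agent on port 9200) must answer 200
    # only on the current writable master, so exactly one server
    # is "up" at a time.
    option httpchk GET /master-status
    server db1 db1.internal:3306 check port 9200
    server db2 db2.internal:3306 check port 9200 backup
```

With `backup` on the second server, haproxy only routes to db2 when db1's check fails, giving active/standby semantics without any of the machines managing a shared IP themselves.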
Opinion: the case for manually managing a VIP is shrinking rapidly as workloads move into containers and we treat services as services rather than as a particular host or IP. Internal load balancers should be far more common than they are.