Does the Linux "time" command measure user time (as opposed to real or system time) correctly when using vCPUs (e.g. on DigitalOcean)?

We’re deploying a CMS instance (contest management system, Github repo link). To judge accurately whether a given solution to a task is fast enough, the system needs to measure time correctly even when the physical CPU is briefly used by another droplet in the middle of a solution's execution. So I’d like to know whether the fact that droplets run on virtual CPU cores affects the Linux "time" command.
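For context, here is what the three timers look like in practice; the question is specifically about the "user" figure. A minimal illustration (the busy loop is just a placeholder workload, not the CMS sandbox):

```shell
# Run a CPU-bound workload under the shell's "time" keyword.
time bash -c 'i=0; while [ "$i" -lt 200000 ]; do i=$((i+1)); done'
# real — wall-clock time, which includes any period the
#        hypervisor ran another droplet on the same core
# user — CPU time this process spent executing in user mode
# sys  — CPU time spent in the kernel on the process's behalf
```

If steal time were leaking into the accounting, one would expect the "user" figure for an identical workload to vary with neighbor activity even though the instruction count is the same.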
