Developers among you might want to try a little tool I wrote to detect CPU starvation; it's on GitHub.
What it does is, on one or more cores (configurable; defaults to all CPUs), perform an atomic increment of a value in memory (on its own cache line) as often as possible. Another thread periodically reads and resets the counter, and prints it to the screen. No sleeps are used, so the operating system won't think you are idling.
If you watch it run for a while, you can see how much the performance varies. The output can be imported into a spreadsheet and viewed as a chart.
For what it's worth, I tested a DigitalOcean 1-core/1 GB machine, and while I did see a little variation, probably from neighbours, it was almost negligible. I was pleased.
It worries me that, even after repeated requests, DO has not provided so much as an estimate of the physical CPU resources allocated.
@moisey - As customers, we understand and appreciate that the general answer is "it depends", with a hefty dose of "servers improve over time". Your suggestion to spin up instances across providers and directly compare a benchmark of our choice is a good one; however, it only measures current performance and, if run for a while, average performance.
Earlier commenters on this thread, and I, are asking about worst-case performance, which no benchmark run from inside a virtual system can determine.
There are two options:
a) There is no minimum CPU guarantee. This implies that "noisy neighbors" can crush your droplet's performance.
b) There is a minimum CPU guarantee, likely in terms of minimum slices.
All we're asking for is a straight answer: is there a minimum CPU guarantee, in any way, shape, or form? However it is measured or implemented (e.g., as a percentage of a physical core), any answer is better than no answer.
This is still an open question, and I find DigitalOcean's replies weak, to say the least.
Is it really that hard to give the long-awaited answer? If you can't, you are admitting either that you are heavily overselling (not necessarily a bad thing if you can keep things together - but say so already, if that's the case!) or that you don't know the answer, which makes this situation terribly embarrassing.
We are still waiting for a reply.
moisey has missed the point, but we can infer a lot based on his responses.
Getting burst capacity for free at times is a selling point for batch processing, or for any other workload whose capacity needs can fluctuate widely over an extended period, so long as they even out over time. Emphasizing the "value" of the compute cycles based on the high disk I/O suggests this is a conversation moisey is accustomed to having.
But we are asking about worst-case performance; clearly we are concerned with capacity at a very particular time, such as at peak traffic for a service with an SLA. Fuzzy guarantees can work out for SLAs when coupled with elastic auto-scaling: for example, a noisy neighbor is noisy for an hour, during which you pay for a second instance on quieter hardware, but later you get free burst capacity on your single instance and make up the difference.
DigitalOcean has no auto-scaling offering, leaving a customer to either A) implement their own auto-scaling mechanisms using the provided APIs or B) provision enough compute that they'll hit their SLAs even in the worst case.
Option A) is expensive enough, both measurably (man-hours and talent) and immeasurably (maturing a solution that only exposes its flaws at the worst possible time), that few shops are in a position to tackle it. Even for those that are, I don't see a differentiator that makes DO attractive (e.g. API compatibility, reserved capacity, something else), and I don't hear moisey steering us in that direction.
Option B) is what we are asking about. No answer means no guarantee, which in the worst case means zero capacity. And if the worst-case capacity of a single VPS is zero, then any number of VPSes still has a worst-case capacity of zero. Any guarantee would be better than this, but DO isn't offering one.
So reading between the lines, DO is:
1- an eligible, inexpensive solution for batch / offline processing*
2- not an eligible solution for response-time sensitive jobs
3- trying hard not to come right out and admit #2
Or, and I find this more likely, they don't actually have a strategy beyond "be the cheapest / best value-per-dollar VPS provider and see who wants to use us"... which makes DigitalOcean perfect for running Minecraft or a small website, and iffy for anything more sophisticated than that.
(*) There are better value offerings out there for some specialized batch processing, e.g. video transcription.
It's been ~3 years, and DigitalOcean still hasn't given us an answer. I find this a serious lapse in customer support.
It's been answered. Unfortunately no one enjoys what the answer means for them. @stephenayotte hit the nail on the head.
To DO: be careful! This thread is the top hit on Google, and the impression it gives of your company is dreadful. Coming from a shared host, I spun up a small droplet to test a startup business with low priority and little consequence. Having read this thread, however, I'll be taking myself and 40+ clients elsewhere, somewhere I know what my clients and I are receiving.