Question

Server Specs: What Is 1 Core Equivalent To?

Posted October 26, 2012 · 43.9k views
Can you clarify what 1 core equates to? Is it 100% of a core on a quad-core machine, or is it shared among multiple Droplets? If it's shared, how can we know what percentage of the cores is allotted to which Droplets?
1 comment
  • DigitalOcean, I love you at first sight. And the more comments I see, the more reasons I see to be with you!

    You have very honest and technical staff, and the objective is not to please your customers with wording but to give us the most cost-effective solution on Earth! I appreciate DO's pricing model because it is the only cloud service that got past my doubts about how much money would be drained from my poor credit card during a slow-paced testing cycle. I want to try out a VPS but also want to keep it cheap and use it only occasionally, and DO is the perfect solution for that.

    And hey doug, thanks for the starvation check above. I've found that Cloudlook can do something similar, and they provide statistics across various cloud providers. I'm wondering whether Cloudlook has any relationship with DO.

    I am a DOer, and I am trying to be a doer! Wishing you all's well that ends well.

32 answers
It is strictly a logical representation; the rest is determined by the hypervisor and what is available at execution time.
Hi Andrew,

Unfortunately, that's not possible. The only way to really gauge performance on a system is to relate it to a real-world application, so you would need to know how much load you are currently running, on what kind of CPU, and whether you have the entire physical processor to yourself.

Then you have to factor in the number of logical cores you have, whether the applications you are running can take advantage of additional cores, and how much that helps with the load.

On top of that, we are constantly purchasing newer CPUs, which would mean relaying all of that information back to customers and maintaining calculations that map the virtual allocation they receive to the physical infrastructure that runs it, logical core counts included.

In theory that works, but look at Amazon and its EC2 compute units: the abstraction just makes it harder to understand what needs to be purchased. It's easier to actually spin up a server and gather some real-time metrics. We will be incorporating those into our control panel, but for now something like NewRelic or CopperEgg does a great job of providing that information.

At the end of the day, no host can offer a hard guarantee without pinning your resources directly to physical hardware, e.g. selling only as many logical cores as map one-to-one onto the hypervisor's physical cores, and that is rarely good value. It also means losing the power of the cloud, which is that resources often sit unutilized and you are able to grab them for your own consumption.

So it's just fundamentally different. If you use the cloud, you are sharing resources: sometimes you'll have a noisy neighbor, but other times you'll have extra capacity that can absorb a spike effectively. If you go back to a dedicated system where the resources are all yours, you won't receive the additional capacity that's lying dormant.

It really comes down to a choice: do you want a guaranteed number so low that, if you provisioned according to it, you would probably be better off grabbing a dedicated server? Or would you rather deploy your application and let real-world usage dictate your compute needs?
Unfortunately, this is inherent in the design of any cloud service. Even Amazon, which provides ECUs as guidance, still runs into the noisy-neighbor problem.
There's no avoiding the issue. The simple answer is that, unlike RAM and disk, which are clearly segmented, CPU is not clearly segmented on any cloud provider. That is true for Linode, AWS, Rackspace, and everyone else.

There are some exceptions, but only when a VM is literally tied to a physical core, in which case you receive absolutely no burst.

We provision VMs by mixing sizes on a single hypervisor; because each VM has a different number of logical CPUs, each hypervisor ends up with a mix of logical core counts. From what we've seen, this distribution has allowed customers to spike their utilization when needed while providing a stable base of performance the rest of the time, and we haven't received many complaints about noisy-neighbor CPU issues. If someone does see that on their droplet, please open a ticket and we'll troubleshoot it.
Hi,

Thanks for your feedback.

Unfortunately, CPU is not as static a measure as RAM or disk, which is why those resources are segmented and simpler to grok in terms of allocation. CPU is much more dynamic.

We publish the number of logical cores each plan comes with to help customers understand how much parallel processing they can utilize, but beyond that, every provider that doesn't tie a specific physical core to the virtual plan it sells employs some sort of balancing.

This is true of Linode and Amazon, which is also why they do not provide that information expressly. You can certainly reverse-engineer it with formulas and guesses.

More importantly, instead of going by any of these published or interpreted numbers, it's much easier to simply spin up a server and test performance.

On most other clouds, which aren't SSD-only, don't discount how much performance you lose to disk contention. Many times when you see high CPU, you are actually waiting on disk I/O, which shows up as higher CPU utilization simply because you are blocked on a read or write.
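
(As an aside, you can see this split for yourself on a Linux droplet. The sketch below is an illustration, not an official tool: it samples the aggregate "cpu" line of /proc/stat twice, which has a standard field layout on Linux, and reports how the interval divided between running your code, waiting on disk, and being "stolen" by the hypervisor for other guests. High iowait with modest user time means you are disk-bound, not CPU-bound.)

```c
/* Minimal sketch: sample /proc/stat twice and report the split.
 * Field order on the "cpu" line: user nice system idle iowait
 * irq softirq steal. Build: cc -O2 cpusplit.c */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void read_cpu(unsigned long long v[8]) {
    FILE *f = fopen("/proc/stat", "r");
    if (!f || fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu %llu",
                     &v[0], &v[1], &v[2], &v[3],
                     &v[4], &v[5], &v[6], &v[7]) != 8)
        exit(1);
    fclose(f);
}

int main(void) {
    unsigned long long a[8], b[8], d[8], total = 0;
    read_cpu(a);
    sleep(5);                       /* sampling interval */
    read_cpu(b);
    for (int i = 0; i < 8; i++) {
        d[i] = b[i] - a[i];
        total += d[i];
    }
    printf("user %.1f%%  iowait %.1f%%  steal %.1f%%\n",
           100.0 * d[0] / total,    /* running your code         */
           100.0 * d[4] / total,    /* blocked on disk I/O       */
           100.0 * d[7] / total);   /* taken by the hypervisor   */
    return 0;
}
```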

So it's really much simpler to spin up the same spend on different clouds ($20 on DO vs. $20 on Linode vs. $20 on Amazon), run a benchmark across them, and choose whichever provides the best bang for your buck.
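
(For illustration, a crude fixed-work benchmark could look like the sketch below: time the same deterministic CPU-bound job on each provider's instance and compare wall-clock times. The prime-counting workload is an arbitrary stand-in, not a recommended benchmark; your real application is the better workload.)

```c
/* Fixed-work CPU benchmark sketch: run the same binary everywhere
 * and compare the times it prints. Build: cc -O2 bench.c */
#include <stdio.h>
#include <time.h>

#define LIMIT 2000000L          /* enough work to run for a few seconds */

int main(void) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    long primes = 0;
    for (long n = 2; n < LIMIT; n++) {
        int prime = 1;
        for (long d = 2; d * d <= n; d++)
            if (n % d == 0) { prime = 0; break; }
        primes += prime;        /* result is printed so work isn't elided */
    }

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%ld primes below %ld in %.2f s\n", primes, (long)LIMIT, secs);
    return 0;
}
```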

Thanks!

moisey has missed the point, but we can infer a lot based on his responses.

Getting burst capacity for free some of the time is a selling point for batch processing, or any other workload whose capacity can fluctuate widely over an extended period so long as it evens out over time. Emphasizing the “value” of the compute cycles relative to disk I/O suggests this is a conversation moisey is accustomed to having.

But we are asking about worst-case performance; clearly we are concerned with capacity at a very particular time, such as at peak traffic for a service with an SLA. Fuzzy guarantees can work out for SLAs when coupled with elastic auto-scaling. E.g. the noisy neighbor is noisy for 1 hour, during which you pay for a second instance on cooler hardware, but then later you get free capacity on your single instance and make up the difference.

DigitalOcean has no auto-scaling offering, leaving a customer to either A) implement their own auto-scaling mechanisms using the provided APIs or B) provision enough compute that they’ll hit their SLAs even in the worst case.

Option A) is expensive both measurably (man-hours and talent) and immeasurably (maturing a solution that only exposes its flaws at the worst possible time), enough so that few shops are in a position to tackle it. Even for those that are, I don’t see a differentiator that makes DO attractive (e.g. API compatibility, reserved capacity, something else), and I don’t hear moisey steering us in that direction.

Option B) is what we are asking about. No answer means no guarantee, which in the worst case means zero capacity. If worst-case capacity for a single VPS is zero, an infinity of VPSes is still zero capacity. Any guarantee would be better than this, but DO isn’t offering one.

So reading between the lines, DO is:
1- an eligible, inexpensive solution for batch / offline processing*
2- not an eligible solution for response-time sensitive jobs
3- trying hard not to come right out and admit #2

#3 might be explained as them stalling for time while they create an offering for response-time-sensitive workloads, or perhaps while they refine their market strategy to either target or ignore those workloads.

Or, and I find this more likely, they don’t actually have a strategy beyond “be the cheapest / best value-per-dollar VPS provider and see who wants to use us”… which makes DigitalOcean perfect for running Minecraft or a small website, and iffy for anything more sophisticated than that.

(*) There are better value offerings out there for some specialized batch processing, e.g. video transcription.

Developers among you might want to try out a little tool I developed to detect CPU starvation; it's on GitHub.

What it does: on one or more cores (configurable; it defaults to all CPUs), it atomically increments a value in memory (on its own cache line) as often as possible. Another thread periodically checks and resets the number and prints it to the screen. No sleeps are used, so the operating system never sees the threads as idle.

If you watch it run for a while, you can see how much the performance varies. You can also import the output into a spreadsheet and view it as a chart.
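
(A minimal sketch of the same idea follows. This is an illustration of the approach doug describes, not his actual tool; see his GitHub for the real thing. One spinner thread per core hammers its own cache-line-padded counter with no sleeps, and the main thread prints and resets the totals once per second; a dip in the printed rate means the vCPU was starved of cycles.)

```c
/* CPU-starvation detector sketch. Build: cc -O2 -pthread starve.c */
#include <stdatomic.h>
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

#define MAX_CPUS 64

/* One counter per spinner, padded onto its own cache line so the
 * spinners don't contend with each other. */
static struct {
    atomic_ullong n;
    char pad[64 - sizeof(atomic_ullong)];
} counters[MAX_CPUS];

static void *spin(void *arg) {
    atomic_ullong *c = arg;
    for (;;)
        atomic_fetch_add(c, 1);     /* no sleeps: always runnable */
    return NULL;
}

int main(void) {
    long ncpu = sysconf(_SC_NPROCESSORS_ONLN);
    if (ncpu > MAX_CPUS) ncpu = MAX_CPUS;

    for (long i = 0; i < ncpu; i++) {   /* one spinner per core */
        pthread_t t;
        pthread_create(&t, NULL, spin, &counters[i].n);
    }
    for (;;) {                          /* reporter: print and reset */
        sleep(1);
        unsigned long long total = 0;
        for (long i = 0; i < ncpu; i++)
            total += atomic_exchange(&counters[i].n, 0);
        printf("%llu increments/sec\n", total);
    }
}
```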

For what it’s worth, I tested a DigitalOcean 1-core 1 GB machine, and while I did see a little variation, probably from neighbours, it was almost negligible. I was pleased.

You receive a logical core, not a physical core, which is similar to most other cloud providers.

Depending on resource availability, this allows a virtual server to use up to the entire physical core.
Logical core: what percentage is guaranteed at a minimum?
Hi raiyu and others at DigitalOcean,

Is there still no minimum guaranteed level of CPU performance? Could I get all of a Xeon core, or could I be totally starved? It would be great to get a solid lower bound on physical CPU allocation, even if it is indeed low, so we know what to expect.

For instance, I would like to know that my virtual CPUs will get at least x time slices on the CPU every second, how long each slice is, and the specification of the lowest-end physical CPU in your cluster. That would let me calculate my worst-case expectation.
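
(For illustration, here is the kind of calculation that information would enable, using entirely hypothetical numbers; DigitalOcean has published none of these.)

```c
/* Worst-case floor from a hypothetical guarantee. Every number here
 * is made up for illustration; none are published by DigitalOcean. */
#include <stdio.h>

int main(void) {
    double slices_per_sec = 100;    /* hypothetical guaranteed slices/sec */
    double slice_sec      = 0.001;  /* hypothetical slice length: 1 ms    */
    double base_clock_hz  = 2.0e9;  /* hypothetical slowest Xeon: 2 GHz   */

    double core_fraction = slices_per_sec * slice_sec;  /* 0.10 of a core */
    double worst_case_hz = core_fraction * base_clock_hz;

    printf("floor: %.0f%% of a core, ~%.0f million cycles/sec\n",
           core_fraction * 100, worst_case_hz / 1e6);
    return 0;
}
```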

For most applications, the worst case is much more important than the best case or the average. The good news for you guys is that this information will probably cause your customers to over-provision rather than move away, as you are probably still going to be a well-priced option.

Thanks for any information you can offer.

Best,
Andrew Cox