Server Specs: What is 1 Core equivalent to?

  • Posted October 26, 2012

Can you clarify what 1 Core equates to? Is it 100% of a core on a quad-core machine, or is it shared among multiple Droplets? If so, how can we know what percentage of each core is allotted to which Droplets?


DigitalOcean, I loved you at first sight. And the more comments I see, the more reasons I find to stay with you!

You have a very honest and technical staff, and the objective is not to please customers with wording but to let us enjoy the most cost-effective solution on Earth! I appreciate DO's pricing model because it is the only cloud service that got past my doubts about how much money would drain from my poor credit card during a slow-paced testing cycle. I want to try a VPS but also want to keep it cheap and would only use it occasionally, and DO is the perfect solution for that.

And hey doug, thanks for the starvation check above. I've found that Cloudlook can do something similar, and they provide statistics across various cloud providers. I'm wondering whether Cloudlook has any relationship with DO.

I am a DOer and I am trying to be a doer! Wish you all’s well that ends well.


Developers among you might want to try out a little tool I developed to detect CPU starvation; it's on GitHub.

What it does is, on one or more cores (the count can be overridden; it defaults to all CPUs), perform an atomic increment of a value in memory (on its own cache line) as often as possible. Another thread periodically reads and resets the counter and prints it to the screen. No sleeps are used, so the operating system won't think you are idling or spinning.

If you watch it run for a while, you can see how much the performance varies. You can also import the output into a spreadsheet and view it as a chart.
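The same idea can be sketched in a few lines of Python (this is not the GitHub tool itself, just an illustration of the technique; Python has no lock-free atomic increment, so a locked shared counter stands in for the atomic on its own cache line):

```python
import multiprocessing as mp
import time

def worker(counter, stop):
    # Busy-loop: increment as fast as possible, with no sleeps,
    # so the OS scheduler always sees this process as runnable.
    while not stop.is_set():
        with counter.get_lock():
            counter.value += 1

def measure(n_workers=2, samples=3, interval=0.2):
    """Periodically read and reset each worker's counter.

    A large dip in increments-per-interval on an otherwise idle VM
    suggests the hypervisor gave your vCPU's time to a neighbor.
    """
    counters = [mp.Value('q', 0) for _ in range(n_workers)]
    stop = mp.Event()
    procs = [mp.Process(target=worker, args=(c, stop)) for c in counters]
    for p in procs:
        p.start()
    results = []
    for _ in range(samples):
        time.sleep(interval)
        row = []
        for c in counters:
            with c.get_lock():
                row.append(c.value)
                c.value = 0  # reset for the next interval
        results.append(row)
    stop.set()
    for p in procs:
        p.join()
    return results

if __name__ == '__main__':
    for row in measure():
        print(row)
```

Each printed row is one sampling interval; roughly constant numbers mean you are getting steady CPU time, while big dips point at contention.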

For what it's worth, I tested a DigitalOcean 1-core 1GB machine, and while I did see a little variation, probably from neighbours, it was almost negligible. I was pleased.

moisey has missed the point, but we can infer a lot based on his responses.

Getting burst capacity for free sometimes is a selling point for batch processing, or any other workload whose capacity can fluctuate widely over an extended period so long as it evens out over time. Emphasizing the “value” of the compute cycles based on the high disk i/o suggests this is a conversation moisey is accustomed to having.

But we are asking about worst-case performance; clearly we are concerned with capacity at a very particular time, such as at peak traffic for a service with an SLA. Fuzzy guarantees can work out for SLAs when coupled with elastic auto-scaling. E.g. the noisy neighbor is noisy for 1 hour, during which you pay for a second instance on cooler hardware, but then later you get free capacity on your single instance and make up the difference.

DigitalOcean has no auto-scaling offering, leaving a customer to either A) implement their own auto-scaling mechanisms using the provided APIs or B) provision enough compute that they’ll hit their SLAs even in the worst case.

Option A) is both measurably (man hours and talent) and immeasurably (maturing a solution that only exposes flaws at the worst possible time) expensive enough that few shops are in a position to tackle it. Even for those, I don’t see a differentiator that makes DO attractive (e.g. API compatibility, reserved capacity, something else), and I don’t hear moisey steering us in that direction.
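For concreteness, the core of an Option A) roll-your-own auto-scaler is a threshold loop like the one below. The function names (`get_average_load`, and where `provision_droplet()` / `destroy_droplet()` would plug in) are hypothetical placeholders, not DigitalOcean's actual API:

```python
# Hypothetical stand-ins for a provider API; real droplet create/destroy
# calls would replace the list mutations below.
def get_average_load(fleet):
    """Return average CPU utilization (0.0-1.0) across the fleet."""
    return sum(node['cpu'] for node in fleet) / len(fleet)

def scale(fleet, high=0.8, low=0.3, min_size=1):
    """One iteration of a naive threshold-based auto-scaling loop."""
    load = get_average_load(fleet)
    if load > high:
        fleet.append({'cpu': 0.0})   # provision_droplet() would go here
    elif load < low and len(fleet) > min_size:
        fleet.pop()                  # destroy_droplet() would go here
    return fleet

fleet = [{'cpu': 0.9}, {'cpu': 0.85}]
fleet = scale(fleet)
print(len(fleet))  # 3: a node is added because average load > 0.8
```

Even this toy version hints at the hard parts: choosing thresholds, avoiding flapping, and the minutes of provisioning lag during which you are still under-capacity.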

Option B) is what we are asking about. No answer means no guarantee, which in the worst case means zero capacity. If worst-case capacity for a single VPS is zero, an infinity of VPSes is still zero capacity. Any guarantee would be better than this, but DO isn’t offering one.

So reading between the lines, DO is:

1. an eligible, inexpensive solution for batch / offline processing*
2. not an eligible solution for response-time-sensitive jobs
3. trying hard not to come right out and admit #2

#3 might be explained as them stalling for time while they create an offering for response-time sensitive workloads. Or perhaps stalling while they refine their market strategy to either target or ignore those workloads.

Or, and I find this more likely, they don’t actually have a strategy beyond “be the cheapest / best value-per-dollar VPS provider and see who wants to use us”… which makes DigitalOcean perfect for running Minecraft or a small website, and iffy for anything more sophisticated than that.

(*) There are better value offerings out there for some specialized batch processing, e.g. video transcription.

Hi,

Thanks for your feedback.

Unfortunately CPU is not as static a measure as RAM or HDD, which is why those resources are segmented and simpler to grok in how they are allocated. CPU is much more dynamic.

We provide the number of logical cores that each plan comes with to help customers understand how much parallel processing they need and can utilize, but beyond that, every provider that doesn't directly tie a specific physical core to the virtual plan they are selling employs some sort of balancing.

This is true of Linode and Amazon, which is also why they do not provide that information expressly. You can certainly reverse-compute it with formulas and guesses.

More importantly, instead of going by any of these published or interpreted numbers, it's much easier to simply spin up a server and test performance.

Also, in most clouds that aren't operating on SSDs only, don't discount how much performance you lose to disk contention. In fact, many times when you see high CPU you may actually be waiting on disk I/O, which shows up as higher CPU utilization simply because you are waiting on a read or write.

So it's really much simpler to spin up the same spend on different clouds ($20 on DO vs $20 on Linode vs $20 on Amazon), run a benchmark across them, and choose whichever provider gives the best bang for your buck.

Thanks!
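A minimal version of that "same spend, same benchmark" approach is just timing an identical CPU-bound workload on each provider; the workload below is an arbitrary integer loop chosen for illustration, not any standard benchmark:

```python
import time

def cpu_benchmark(iterations=2_000_000):
    """Time a fixed CPU-bound workload; lower wall time = faster vCPU.

    Run this same script on each $20 droplet/instance and compare the
    elapsed times. Repeat at different hours to catch noisy neighbors.
    """
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += i * i          # arbitrary integer work
    elapsed = time.perf_counter() - start
    return elapsed, total

elapsed, checksum = cpu_benchmark()
print(f"{elapsed:.3f}s")
```

For a fuller picture you would pair this with a disk test (e.g. `dd` or `fio`), since, as noted above, disk contention often dominates on non-SSD clouds.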

There's no avoidance of the issue; the simple answer is that unlike RAM and disk, which are clearly segmented, CPU is not clearly segmented on any cloud provider. That is true for Linode, AWS, Rackspace, and everyone else.

There are some exceptions, but only when a VM is literally tied to a physical core, in which case you receive absolutely no burst.

We provision VMs by mixing plan sizes on a single hypervisor; because each VM has a different number of logical CPUs, this creates a mix of logical-core counts on each hypervisor. From what we've seen, this distribution has allowed customers to spike their utilization when needed and has provided a stable base of overall performance the rest of the time, and we haven't received many complaints of noisy-neighbor CPU issues. But if someone sees that on their droplet, please open a ticket and we'll troubleshoot it.

Unfortunately this is inherent in the design of any cloud service; even Amazon, which provides ECU as guidance, still runs into the noisy neighbor problem.

Hi Andrew,

Unfortunately that's not possible, because the only way to really gauge performance on a system is to relate it to a real-world application. You would need to know how much load you are currently running, on what kind of CPU, and whether or not you have the entire physical processor.

Then you have to add in the number of logical cores you have, whether the applications you are running can take advantage of additional cores, and how that helps with the load.

On top of that, we are constantly purchasing newer CPUs, which would mean relaying all of that information back to customers and maintaining calculations between the virtual allocation they receive, the physical infrastructure that runs it, and the number of logical cores.

In theory it works, but when you look at Amazon and their EC2 compute units, it really just makes it more difficult to understand what needs to be purchased. It's easier to actually spin up a server and get some real-time metrics. We will be incorporating those into our control panel, but for now something like NewRelic or CopperEgg is great at providing that information.

At the end of the day, no host that provides any kind of guarantee without restricting your resources directly to physical hardware, such as mapping the logical cores you are sold directly onto physical cores of the hypervisor, is really going to provide good value. But then of course you are losing the power of the cloud, which is that resources often go unutilized and you are able to grab those for your own consumption.

So it's just fundamentally different. If you want to use the cloud, you are sharing resources, which means sometimes you'll have a noisy neighbor but other times you'll have extra capacity that can absorb a spike effectively. If you want to go back to a dedicated system where the resources are yours alone, you won't receive the additional capacity that's lying dormant.

It really comes down to this: do you want a guaranteed number so low that, provisioning according to it, you would probably be better off grabbing a dedicated server, or would you rather deploy your application and let real-world usage dictate your compute needs?
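One of the points above, whether your application can actually take advantage of additional logical cores, is easy to measure yourself. This is a rough sketch: it compares the same CPU-bound work run serially versus spread across a process pool, so you can see whether a plan with more cores would help your workload at all:

```python
import multiprocessing as mp
import os
import time

def burn(n):
    # A CPU-bound work unit (arbitrary integer loop).
    total = 0
    for i in range(n):
        total += i * i
    return total

def parallel_speedup(work=500_000):
    """Compare serial vs. pooled execution across all logical cores.

    If the parallel time is close to the serial time, your workload
    is not benefiting from extra cores, and paying for a plan with
    more of them won't help.
    """
    cores = os.cpu_count()
    t0 = time.perf_counter()
    serial = [burn(work) for _ in range(cores)]
    t_serial = time.perf_counter() - t0

    t0 = time.perf_counter()
    with mp.Pool(cores) as pool:
        parallel = pool.map(burn, [work] * cores)
    t_parallel = time.perf_counter() - t0

    assert serial == parallel  # same answers either way
    return cores, t_serial, t_parallel

if __name__ == '__main__':
    cores, ts, tp = parallel_speedup()
    print(f"{cores} logical cores: serial {ts:.2f}s vs parallel {tp:.2f}s")
```

On a single-vCPU droplet the two times should be nearly identical; on a multi-core plan the pooled run should be noticeably faster, provided the work is genuinely parallelizable.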

It is strictly a logical representation; the rest is determined by the hypervisor and what is available at execution time.

Created an account just to let DO know they have lost my business due to this thread, which is pointed enough to invoke complete disregard of all the positive things I have seen on the forums.

It’s been answered. Unfortunately no one enjoys what the answer means for them. @stephenayotte hit the nail on the head.

To DO, be careful! This is the top hit on Google, and the implication it makes about your company is dreadful. I'm coming from a shared host; I spun up a small droplet to test a startup business with low priority and little consequence. After reading this thread, however, I'll be taking myself and 40+ clients elsewhere, somewhere I know what my clients and I are receiving.

It's been ~3 years, and DigitalOcean still has not given us an answer. I find this a serious lack of customer support.