Server Specs: What is 1 Core equivalent to?

Posted October 26, 2012 · 42.8k views
Can you clarify what 1 Core equates to? Is it 100% of a core in a quad-core machine, or is it shared with multiple Droplets? If it is shared, how can we know what percentage of a core is allotted to which Droplets?
1 comment
  • DigitalOcean, I love you at first sight. And the more comments I see, the more reasons I see to be with you!

    You have very honest and technical staff, and the objective is not to please customers with wording but to let us enjoy the most cost-effective solution on Earth! I appreciate DO's pricing model because it is the only cloud service that got past my skepticism about how much money would drain from my poor credit card during a slow-paced testing cycle! I want to try out a VPS but also want to keep it cheap and use it only occasionally, and wow, DO is the perfect solution.

    And hey doug, thanks for the starve check above. I've found that Cloudlook can do something similar, and they publish statistics across various cloud providers. Guess what, I'm wondering if Cloudlook has any relationship with DO.

    I am a DOer and I am trying to be a doer! Wish you all’s well that ends well.


32 answers
It is strictly a logical representation; the rest is determined by the hypervisor and what is available at execution time.
Hi Andrew,

Unfortunately that's not possible, because the only way to really gauge performance on a system is to relate it to a real-world application. You would need to know how much load you are currently running, on what kind of CPU, and whether or not you have the entire physical processor to yourself.

Then you have to add in the number of logical cores you have, whether or not the applications you are running can take advantage of additional cores, and how that helps with the load.

Then consider that we are always going to be purchasing newer CPUs, which means relaying all of that information back to customers and maintaining conversions between the virtual allocation they receive and the physical infrastructure that runs it, on top of the number of logical cores.

In theory it works, but when you look at Amazon and their EC2 Compute Units, it really just makes it harder to understand what needs to be purchased. It's easier to actually spin up a server and get some real-time metrics. We will be incorporating those into our control panel, but for now something like New Relic or CopperEgg is great at providing that information.

At the end of the day, no host can provide any kind of guarantee without restricting your resources directly to physical hardware, such as mapping your logical cores one-to-one onto the physical cores of the hypervisor, and that approach doesn't really provide good value. But then of course you are losing the power of the cloud, which is that resources often go unutilized and you are able to grab them for your own consumption.

So it's just fundamentally different. If you want to use the cloud, you are sharing resources, which means sometimes you'll have a noisy neighbor, but other times you'll have extra capacity that can absorb a spike effectively. If you want to go back to a dedicated system where the resources are yours alone, you won't receive the additional capacity that's lying dormant.

It really comes down to this: do you want a guaranteed number that is so low that, provisioning according to it, you would probably be better off grabbing a dedicated server, or would you rather deploy your application and let real-world usage dictate your compute needs?
Unfortunately this is inherent in the design of any cloud service; even Amazon, which provides ECU as guidance, still runs into the noisy-neighbor problem.
There's no avoiding the issue. The simple answer is that, unlike RAM and disk, which are clearly segmented, CPU is not clearly segmented on any cloud provider. That is true for Linode, AWS, Rackspace, and everyone else.

There are some exceptions to this, but that is only when a VM is literally tied to a physical core in which case you receive absolutely no burst.

We provision VMs by mixing sizes on a single hypervisor; because each VM has a different number of logical CPUs, this creates a mix of logical-core counts on each hypervisor. From what we've seen, this distribution has allowed customers to spike their utilization when needed and has provided a stable base of overall performance the rest of the time. We haven't received many complaints of noisy-neighbor CPU issues, but if someone sees that on their droplet, please open a ticket and we'll troubleshoot it.

Thanks for your feedback.

Unfortunately CPU is not as static a measure as RAM or HDD which is why those things are segmented and simpler to grok in how they are allocated. CPU is much more dynamic.

We provide the number of logical cores that each plan comes with to help customers understand how much parallel processing they need and can utilize. Beyond that, every provider that doesn't directly tie a specific physical core to the virtual plan they are selling employs some sort of balancing.

This is true of Linode and Amazon, which is also why they do not provide that information expressly. You can certainly try to reverse-engineer it with formulas and guesses.

But more importantly, instead of going by any of these published or interpreted numbers, it's much easier to simply spin up a server and test performance.

In most other clouds, which aren't operating on SSDs only, don't discount how much performance you lose to disk contention. In fact, many times when you see high CPU you may actually be waiting on disk I/O, which shows up as higher CPU utilization simply because you are waiting on a read or write.

So it's really much simpler to spin up the same spend on different clouds ($20 on DO vs $20 on Linode vs $20 on Amazon) and run a benchmark across them and just choose whichever performance provides the best bang for your buck.
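The spin-up-and-benchmark approach can be sketched with a tiny CPU-bound timing loop. This is only a rough illustration of the idea, not a substitute for a real benchmark suite; the workload and iteration count are arbitrary:

```python
import time

def cpu_benchmark(iterations=5_000_000):
    """Time a fixed CPU-bound workload; lower wall time means faster effective CPU."""
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += i * i  # arbitrary integer work
    return time.perf_counter() - start

# Run the same script on each $20 instance and compare the elapsed times.
print(f"{cpu_benchmark():.2f} seconds")
```

Running the identical script across providers at the same price point gives the comparable "bang for your buck" number moisey describes, at least for the moment you run it.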


moisey has missed the point, but we can infer a lot based on his responses.

Getting burst capacity for free sometimes is a selling point for batch processing, or any other workload whose capacity can fluctuate widely over an extended period so long as it evens out over time. Emphasizing the “value” of the compute cycles based on the high disk i/o suggests this is a conversation moisey is accustomed to having.

But we are asking about worst-case performance; clearly we are concerned with capacity at a very particular time, such as at peak traffic for a service with an SLA. Fuzzy guarantees can work out for SLAs when coupled with elastic auto-scaling. E.g. the noisy neighbor is noisy for 1 hour, during which you pay for a second instance on cooler hardware, but then later you get free capacity on your single instance and make up the difference.

DigitalOcean has no auto-scaling offering, leaving a customer to either A) implement their own auto-scaling mechanisms using the provided APIs or B) provision enough compute that they’ll hit their SLAs even in the worst case.

Option A) is both measurably (man hours and talent) and immeasurably (maturing a solution that only exposes flaws at the worst possible time) expensive enough that few shops are in a position to tackle it. Even for those, I don’t see a differentiator that makes DO attractive (e.g. API compatibility, reserved capacity, something else), and I don’t hear moisey steering us in that direction.

Option B) is what we are asking about. No answer means no guarantee, which in the worst case means zero capacity. If worst-case capacity for a single VPS is zero, an infinity of VPSes is still zero capacity. Any guarantee would be better than this, but DO isn’t offering one.

So reading between the lines, DO is:
1- an eligible, inexpensive solution for batch / offline processing*
2- not an eligible solution for response-time sensitive jobs
3- trying hard not to come right out and admit #2

#3 might be explained as stalling for time while they create an offering for response-time-sensitive workloads. Or perhaps stalling while they refine their market strategy to either target or ignore those workloads.

Or, and I find this more likely, they don’t actually have a strategy beyond “be the cheapest / best value-per-dollar VPS provider and see who wants to use us”… which makes DigitalOcean perfect for running Minecraft or a small website, and iffy for anything more sophisticated than that.

(*) There are better value offerings out there for some specialized batch processing, e.g. video transcription.

Developers among you might want to try out a little tool I developed to detect CPU starvation, on github.

What it does is, on one or more cores (configurable, defaulting to all CPUs), perform an atomic increment of a value in memory (on its own cache line) as often as possible. Another thread periodically checks and resets the counter and prints it to the screen. No sleeps are used, so the operating system won't think you are idling.

If you watch it run for a while, you can note how much variation you see in the performance. You can import it to a spreadsheet and view it as a chart.

For what it’s worth, I tested a digitalocean 1-core 1GB machine, and while I did see a little variation, probably from neighbours, it was almost negligible. I was pleased.
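The idea behind doug's tool can be sketched in a few lines. This is only an approximation of it: the real tool uses atomic increments in native code on dedicated cache lines, whereas Python's GIL makes this version a single-core, lossy stand-in.

```python
import threading
import time

class StarveCheck:
    """Sketch of the starve-check idea: a busy thread increments a counter
    as fast as it can, and a sampler thread reads and resets it once per
    second. A sudden drop in increments/second suggests the vCPU was
    starved by a neighbor. (Python's GIL makes this only approximate.)"""

    def __init__(self):
        self.count = 0
        self.running = True

    def spin(self):
        while self.running:
            self.count += 1  # stand-in for an atomic increment on its own cache line

    def sample(self, seconds=3):
        for _ in range(seconds):
            time.sleep(1)
            n, self.count = self.count, 0
            print(f"{n} increments in the last second")
        self.running = False

if __name__ == "__main__":
    checker = StarveCheck()
    threading.Thread(target=checker.spin).start()
    checker.sample()
```

Logging the per-second figures over a day and charting them, as doug suggests, shows how much of your nominal core you actually keep.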

You receive a logical core, not a physical core, which is similar to most other cloud providers.

Depending on the availability of resources this allows a virtual server to use up to the entire physical core when available.
Logical core: what percentage is guaranteed at a minimum?
Hi Raiyu and others at Digital Ocean,

Is there still no minimum guaranteed level of CPU performance? Could I get all of a Xeon core, or be totally starved? It would be great to get a solid lower bound on physical CPU allocation, even if it was indeed low, so we know what to expect.

For instance, I would like to know that my virtual CPUs will get at least x time slices on the CPU every second, a definition of how long each slice is, and the specification of the lowest-end physical CPU in your cluster. This would let me calculate my worst-case expectation.

For most applications, the worst case is much more important than the best case or the average. The good news for you guys is that this information will probably cause your customers to over-provision rather than move away, as you are probably still going to be a well-priced option.

Thanks for any information you can offer.

Andrew Cox
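Andrew's worst-case arithmetic is straightforward once the three figures exist. A sketch with entirely hypothetical numbers (none of these are published by DigitalOcean; they are placeholders for the figures he is asking for):

```python
# Worked example of the worst-case calculation described above.
# Every number here is hypothetical.
slices_per_second = 100   # guaranteed scheduler slices per second (hypothetical)
slice_ms = 1.0            # length of each slice in milliseconds (hypothetical)
core_ghz = 2.0            # slowest physical core in the fleet (hypothetical)

guaranteed_fraction = slices_per_second * slice_ms / 1000.0
worst_case_ghz = guaranteed_fraction * core_ghz
print(f"guaranteed {guaranteed_fraction:.0%} of a {core_ghz} GHz core "
      f"= {worst_case_ghz:.2f} GHz worst case")
```

With those three published numbers, any customer could provision against the floor rather than guess at the average.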
Well, I think what Andrew is getting at, and what I'd like to know as well, is what happens when you deploy a droplet with greedy neighbors. Unfortunately, as customers we have no way to test this by spinning up a Droplet, because we don't know who our neighbors are, so we could be fooled into thinking everything is alright when it's not. In my experience with EC2, the ECU measurement has been a helpful and accurate guide for gauging ballpark capacity; even though it's not perfect, it's better than the super-ambiguous "core" that Digital Ocean advertises.
I was just browsing to understand this too, and whilst I understand where raiyu is coming from, Andrew has hit the nail on the head. What determines a greedy neighbour? If my neighbour is utilising 95% of the 2 cores I am allocated, is it acceptable that I have to manage with the remaining 5%? If not, then you must have some form of indicator of what is excessive.

Many providers, for example, say that you can't use more than 25% of your allocated CPU for more than 90 seconds at a time. Right or wrong, at least I'd know when I'm going to get an email if I see my droplet crossing that threshold.

But I could happily go along for months utilising 75% on a consistent basis, only to find others starting to use more and complaining because a noisy neighbour (me) is using too much. Then I suffer, because there is no limit to guide me.

I understand this may be a difficult task for you, so could you refer us to someone who can give us an idea of our guaranteed cycles per virtual core? This is an important topic in this industry, and your beating around the bush makes us all a little uneasy.
This is simple: what is the maximum ratio of logical cores to physical cores that you provision? 2:1? 3:1? 10:1? All we need to know is how many other logical cores could be competing for our physical core. If you cannot answer this question outright, then you are seriously giving us the runaround, because it's a straightforward question with a straightforward answer.
I am currently searching for a cloud suitable for CPU intensive tasks (meteorological modelling). I am thinking of trying digitalocean. An answer to the previous question (logical to physical ratio) would be a very good indication of whether it is suitable. Thanks.
+1 for jarred.nicholls' question. Linode's FAQ page states the answer clearly: "On average, a Linode 1GB host has 40 Linodes on it. A Linode 2GB host has on average 20. Linode 4GB host: 10 Linodes; Linode 8GB host: 5." It would be reasonable for DO to specify how many vCPUs the KVM scheduler has to manage on a hex-core (physical) machine.
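Linode's published densities at least let a customer do the arithmetic being asked of DO. A back-of-the-envelope sketch, where the per-plan vCPU counts and the core count per host are my own assumptions for illustration, not published figures:

```python
# Oversubscription implied by Linode's published hosts-per-plan figures.
# vCPU counts per plan and 8 physical cores per host are ASSUMPTIONS.
linodes_per_host = {"1GB": 40, "2GB": 20, "4GB": 10, "8GB": 5}
vcpus_per_plan = {"1GB": 1, "2GB": 2, "4GB": 4, "8GB": 8}
physical_cores = 8

for plan, count in linodes_per_host.items():
    total_vcpus = count * vcpus_per_plan[plan]
    ratio = total_vcpus / physical_cores
    print(f"{plan}: {total_vcpus} vCPUs on {physical_cores} cores "
          f"-> {ratio:.0f}:1 oversubscription")
```

Under these assumed numbers every plan works out to the same logical-to-physical ratio, which is exactly the single figure this thread keeps asking DO to publish.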
I would suggest looking at Elastx Cloud hosting (Jelastic); it showed some interesting performance when I compared it against DO.
+1 to this discussion. After all this, I finally found out how DigitalOcean performs compared to the rest of the industry. Thanks, guys.
I would also like to know about this.
It's disconcerting that no one from DigitalOcean has offered a follow-up reply on this thread since January. There are serious open questions about DigitalOcean's provisioning levels.

Of all the system resources we choose from when configuring a droplet, CPU seems to be the most costly. This fact, combined with the lack of clear guidance on machine provisioning levels is reason for concern. Especially when the competition offers a less foggy explanation of what to expect.

That said, I'm mostly concerned that DigitalOcean seems to be avoiding the topic. This brings the quality of customer support into question. Poor customer support is the ultimate deal breaker.
Moisey is defending the castle well. But maybe I'm missing something: how does HostGator advertise processors with specific clock speeds (4.25 GHz, they say, for example)? Does that mean they're not using logical cores?
The answers from Digital Ocean on this page are weak to say the least.

For a start, there *are* providers out there that allocate actual dedicated CPU cores; one of the larger examples is dediserve. More expensive, sure, but they exist. It all comes down to what hypervisor is used and how it's configured. It's not difficult to do, contrary to the answers on this page.

But more pertinently, every major hypervisor out there has a way of guaranteeing a minimum CPU allocation for each container. Moisey mentioned Linode, so I'll use them as an example: Linode puts only one kind of package on a server (e.g. a server full of 1GB accounts), and only an explicit number of them, which *guarantees* something like 0.4 of a core per account, enforced by Xen. If more CPU is available, great. If not, the minimums can be relied upon.

Digital Ocean refuses to give any equivalent numbers, which leads me to the only possible conclusion: it operates oversold servers, meaning that if every customer on a server were assigned a minimum CPU allocation (e.g. 0.2 of a core) and everyone used it, the server would be saturated and operating beyond its limits.

So there we go. Makes Digital Ocean's low prices make more sense, right? Not saying this is necessarily a bad thing if Digital Ocean keeps things in check, but by its very nature such an oversold set-up is a bit volatile for my taste.
New customer here, prior to testing my first DO server. I've been an active Linode customer for many months. I found this thread while looking for an understanding of what CPU resources I could count on here. The post above from @Moisey suggests testing and comparing in order to make an informed decision. However, the same issue that was brought up a year ago still stands, imo: what performance I get this afternoon cannot give me a minimum baseline for what I can count on tonight or tomorrow.

Before moving to Linode I was testing other hosting companies with the same script under controlled conditions, which gave me a real-world performance level for what I do. It was interesting that what appeared to be the best deals on advertised RAM, CPU, port speed, etc. were often poor performers when comparing the low costs to actual productivity. Linode did not at first glance appear to be the best value when comparing strictly the advertised resources. However, the bottom line was that my productivity increased to where it has proven to be a real value. A large part of this was that I could count on a set of resources that would never drop below a known amount.

This post is not meant to be an advertisement for the competition or to knock DO. It is another vote that without a known minimum performance level it will be very difficult to consider scaling here. I fully intend to test extensively. Unfortunately, the way my scripts run, I need a known minimum performance level. Otherwise I will either be too conservative (and the value goes down), crash, or become a "noisy neighbor". This would make it difficult to consider DO for a large part of my hosting needs.

BTW, I found DO through overwhelmingly positive reviews on other forums. The number of people made it impossible not to give it a try. However, I think most of those people have different needs than me: they spin up servers quickly to test a few scripts and then are done until the next time. I differ in that I need scripts to run for weeks without interruption.

I hope to post back here with my experience. I would love for the unknown minimum CPU resources to be a non issue.
I typically get > 95% of one CPU for my $5 instance (based on top's "st" (steal) measure, which is typically 0, sometimes 2, and rarely hits 20).
Feels like a full core to me! You can try it out yourself, though... I have almost never encountered a case where I seem to get much less than 1 core.
@rogerdpack, I wonder if that's 95% of your _virtual_ cpu?
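The "st" figure rogerdpack cites answers exactly that: steal is time the hypervisor spent running someone else while this virtual CPU wanted to run, so it is measured against the vCPU, not the physical core. On Linux it can be read directly from /proc/stat. A minimal sketch (Linux only):

```python
import time

def steal_fraction(interval=1.0):
    """Fraction of CPU time stolen by the hypervisor over `interval`
    seconds, computed from /proc/stat (Linux only). This is the same
    quantity top reports as 'st'."""
    def read():
        with open("/proc/stat") as f:
            vals = [int(x) for x in f.readline().split()[1:]]
        steal = vals[7] if len(vals) > 7 else 0  # 8th field is steal time
        return steal, sum(vals)

    s1, t1 = read()
    time.sleep(interval)
    s2, t2 = read()
    return (s2 - s1) / max(t2 - t1, 1)

print(f"steal over the last second: {steal_fraction():.1%}")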
I think Digital Ocean should be able to tell us the typical ratio of physical cores to logical cores sold on one of their servers. They must have capacity-management processes. Why don't we have this information? It's obviously not 1:1, and not 1:infinity, but somewhere in the middle. Where, exactly?
I must say that I've been using and testing Digital Ocean for the last few months and I was enjoying it very much, until I decided to benchmark the CPU and disk speed and IOPS and compare with another provider using the same config but without SSD.

It worries me that even after repeated requests, DO has not yet provided even an estimate for allocated physical CPU resources.

@moisey - As customers, we understand and appreciate that the general answer is "it depends", with a hefty dose of "servers improve over time". Your suggestion to spin up instances across providers and directly compare a benchmark of our choice is good; however, this only estimates current performance, and if run for a while, average performance.

Earlier commenters on this thread, and myself, are asking about worst case performance, which cannot be determined by any benchmark runnable from inside a virtual system.

There are two options:
a) There is no minimum CPU guarantee. This implies that “noisy neighbors” can crush your droplet’s performance.
b) There is a minimum CPU guarantee, likely in terms of minimum slices.

All we’re asking for is a straight answer - is there a minimum CPU guarantee, in any way, shape or form? No matter how this is measured or implemented (eg, percentage of a physical core), any answer is better than no answer.


  • I think they have pretty much said all they can say on the issue, asking the same dumb question over and over again isn’t going to change their answer.

    What do you not understand about what this guy is trying to tell you? It depends on a lot of different factors, so giving you a "straight answer", or rather "what you want to hear", would be bad information.

    As a suggestion why don’t you fire up a droplet and stress test it to find out for yourself.

This is still an open debate and I still find Digital Ocean’s replies absolutely weak to say the least.

Is it really that hard to give the much-awaited answer? Because if you can't, you are admitting that either you are heavily overselling (not necessarily a bad thing if you can keep your stuff together, but say so already if that's the case!) or you don't know the answer, which makes this situation terribly embarrassing.

We are still waiting for a reply.

It’s been ~3 years, and DigitalOcean still has not given us an answer; I find the customer support very lacking.

It’s been answered. Unfortunately no one enjoys what the answer means for them. @stephenayotte hit the nail on the head.

To DO: be careful! This is the top hit on Google, and the implication it carries about your company is dreadful. I’m coming from a shared host; I rolled out a small droplet to test a startup business with low priority and little consequence. However, after reading this thread I’ll be taking myself and 40+ clients elsewhere, where I know what my clients and I are receiving.

  • PS - I’m leaving my shared host because they outsourced their support department and they went from being extremely helpful to needing not only their hand held, but their arm twisted until they performed actions that needed to be done.
