Question

Server Specs 1 Core Equivalent to

  • Posted on October 26, 2012
  • Asked by dbitla

Can you clarify what 1 Core equates to? Is it 100% of a core in a quad-core machine, or is it shared among multiple Droplets? If so, how can we know what percentage of the cores is allotted to which Droplets?


Developers among you might want to try out a little tool I developed to detect CPU starvation; it's on GitHub.

What it does is, on one or more cores (configurable; it defaults to all CPUs), atomically increment a value in memory (placed on its own cache line) as often as possible. Another thread periodically reads and resets the counter and prints it to the screen. No sleeps are used, so the operating system won't treat the process as idling or spinning.

If you watch it run for a while, you can note how much variation there is in the performance. You can import the output into a spreadsheet and view it as a chart.

For what it’s worth, I tested a DigitalOcean 1-core/1GB machine, and while I did see a little variation, probably from neighbours, it was almost negligible. I was pleased.

moisey has missed the point, but we can infer a lot from his responses.

Getting burst capacity for free at times is a selling point for batch processing, or any other workload whose demand can fluctuate widely over an extended period so long as it evens out over time. Emphasizing the “value” of the compute cycles based on the high disk I/O suggests this is a conversation moisey is accustomed to having.

But we are asking about worst-case performance; clearly we are concerned with capacity at a very particular time, such as at peak traffic for a service with an SLA. Fuzzy guarantees can work out for SLAs when coupled with elastic auto-scaling. For example, a noisy neighbor is noisy for an hour, during which you pay for a second instance on cooler hardware; later you get free capacity on your single instance and make up the difference.

DigitalOcean has no auto-scaling offering, leaving a customer to either A) implement their own auto-scaling mechanisms using the provided APIs or B) provision enough compute that they’ll hit their SLAs even in the worst case.

Option A) is both measurably (man hours and talent) and immeasurably (maturing a solution that only exposes flaws at the worst possible time) expensive enough that few shops are in a position to tackle it. Even for those, I don’t see a differentiator that makes DO attractive (e.g. API compatibility, reserved capacity, something else), and I don’t hear moisey steering us in that direction.

Option B) is what we are asking about. No answer means no guarantee, which in the worst case means zero capacity. If worst-case capacity for a single VPS is zero, an infinity of VPSes is still zero capacity. Any guarantee would be better than this, but DO isn’t offering one.

So reading between the lines, DO is:

1. an eligible, inexpensive solution for batch / offline processing*
2. not an eligible solution for response-time-sensitive jobs
3. trying hard not to come right out and admit #2

#3 might be explained as them stalling for time while they create an offering for response-time sensitive workloads. Or perhaps stalling while they refine their market strategy to either target or ignore those workloads.

Or, and I find this more likely, they don’t actually have a strategy beyond “be the cheapest / best value-per-dollar VPS provider and see who wants to use us”… which makes DigitalOcean perfect for running Minecraft or a small website, and iffy for anything more sophisticated than that.

(*) There are better value offerings out there for some specialized batch processing, e.g. video transcription.

Hi,

Thanks for your feedback.

Unfortunately, CPU is not as static a measure as RAM or HDD, which is why those resources are segmented and simpler to grok in how they are allocated. CPU is much more dynamic.

We provide the number of logical cores that each plan comes with to help customers understand how much parallel processing they need and can utilize, but beyond that, every provider that doesn’t directly tie a specific physical core to the virtual plan they are selling employs some sort of balancing.

This is true of Linode and Amazon as well, which is why they also do not provide that information expressly. You can certainly reverse-compute it with formulas and guesses.

But more importantly, instead of going by any of these published or interpreted numbers, it’s much easier to simply spin up a server and test performance.

This is because most other clouds aren’t operating on SSDs only, and you shouldn’t discount how much performance you lose to disk contention. In fact, many times that you see high CPU you may actually be waiting on disk I/O, which shows up as higher CPU utilization simply because you are waiting on a read or write.

So it’s really much simpler to spin up the same spend on different clouds ($20 on DO vs $20 on Linode vs $20 on Amazon), run a benchmark across them, and just choose whichever provides the best bang for your buck.

Thanks!