More powerful CPU
In my case, the power of a single CPU core matters more than the total number of CPUs. (On my current dedicated server I have disabled hyperthreading.)
I'm testing your servers in order to move a performance-critical application off a dedicated server. My current bottleneck is disk IO, so I thought that your SSD-based service would solve my performance issues, but:
Although the SSD improves the response time of the first run of a query, all subsequent queries are CPU bound and are significantly slower than on my current non-SSD dedicated server.
As you can see from the other suggestions, many people want higher CPU counts. Let me try to convince you.
I assume the reason you don't allow this is simply that you have scaled your data center linearly, so you have fewer CPUs to go around. Many developers like myself have tasks that are CPU bound, and we don't really need RAM, disk space, or network speed. Let's say I needed 16 cores: I would have to run 16 of the $5/month 512 MB instances at $80. That's a whole lot of instances I now have to manage, with a whole lot of memory and space that I am wasting. Even if you sold me 8 cores at $40 with 512 MB of RAM in one instance, it would be almost like selling me 16 of them, but you keep most of the resources. The added benefit for me is that I only have to manage one instance instead of 16.
You get way more profit, and I win too. What do you think?
For example, 4 GB / 4 cores ... 8 GB / 8 cores, or make droplets with more CPU power and price them accordingly.
The plans that you guys have are amazing, but in some cases you provide too much RAM and too few CPU resources.
Taking for example the $20/mo plan,
40GB SSD Disk
I'd find it much more helpful to have, let's say, 4 cores, 30 GB SSD, and just 1 TB transfer with 1.5 GB RAM.
Or something that goes into that direction.
tl;dr: more CPUs, less of everything else, same price.
This is why I want to host another application at Linode. The SSD/HDD comparison doesn't matter since I'll be caching files heavily anyway.
Thanks for the feedback; however, the situation is a bit more complex. When it comes to a cloud or VPS provider, no apples-to-apples comparison of CPU is possible. This is because, unlike RAM, which is strictly segregated, CPU and disk IO are not strictly segregated but shared.
This means that it is impossible to accurately predict how that utilization will play out in a production environment with mixed workloads from different customers on the same hypervisor.
Our RAM plans do scale linearly because RAM is the resource we have absolute control over. We also scale our disk space linearly for the same reason, except that the smallest plan starts with a bit more disk space as a bonus: if we scaled linearly there, it would come with 10 GB to start, and we felt that was a bit on the small side, so we bumped it up.
When it comes to CPU, it is important to remember that in cloud or VPS environments it is not always the CPU that gets taxed; instead, the bottleneck becomes the disks. If you review your CPU usage and your applications, you will see that CPU usage is often tied to reading from and writing to the disk. As a result, when disk IO runs out, your CPU usage spikes: an application is "waiting" on the disk to finish a write or a read before it can process the next instruction, so your CPU usage begins to increase even though there is actually more CPU available.
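You can see this wait-on-disk effect yourself by comparing wall-clock time against actual CPU time for a disk-heavy task. A minimal sketch (assuming a system with a writable temp directory; the payload size is arbitrary):

```python
# When a process is blocked on disk IO, its wall-clock time grows while
# its CPU time barely moves: the gap is time spent waiting on the disk.
import os
import tempfile
import time

payload = os.urandom(4 * 1024 * 1024)  # 4 MiB of random bytes

wall_start = time.perf_counter()
cpu_start = time.process_time()

with tempfile.NamedTemporaryFile(delete=False) as f:
    for _ in range(8):
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())  # force each write all the way to the disk
    path = f.name

with open(path, "rb") as f:
    data = f.read()
os.unlink(path)

wall = time.perf_counter() - wall_start
cpu = time.process_time() - cpu_start

# The difference between wall time and CPU time is time blocked on IO.
print(f"wall: {wall:.3f}s  cpu: {cpu:.3f}s  blocked on IO: {wall - cpu:.3f}s")
```

On a slow disk the blocked-on-IO gap dominates even though the process looks "busy" from the outside.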
This is why we went with an all-SSD cloud: we know that more often than not, customers' workloads will tie up the disk before they tie up the CPU. So while we may provide fewer cores, by having faster disks underlying the infrastructure you effectively get more CPU. Even when disk IO isn't completely saturated, all of your read and write requests finish faster, so the CPU can move on to the next instruction sooner.
That is why the best way to really compare two hosts is to run a production environment on each and compare price to performance. Benchmarks are helpful, but unfortunately they aren't really mimicking a real production workload: they usually just hammer things in a very predictable manner, whereas a production setup has much more randomness built into it.
The way we have set up our infrastructure allows bursts in CPU to be processed faster, again with the idea that the SSD drives will let a larger workload clear through faster and cause less overall contention. So while you may get fewer "cores", you get more of each core for processing.
The last item to consider is your actual application. Depending on your stack or application, it may not be very multi-threaded, so even if you have 8 cores you may end up really utilizing only one.
Ultimately I would recommend setting up a second app, web, db, or whatever server and running it in production on another provider, whether it be us or someone else, and then compare the CPU utilization and ms response times against price.
Think of it like this: you can have two cars that each have 500 hp, but why does each one accelerate from 0-60 mph differently? Because many factors come into play, such as aerodynamic drag, gear ratios, power loss through the drivetrain, grip, rolling resistance, etc.
It's the same here: there is no standard CPU unit, unlike RAM where 1 GB = 1 GB, so a direct comparison isn't possible.
@Sharath Win: Agree to the letter...
Sharath Win commented
I just did a basic comparison of the resources offered for every price point in DigitalOcean vs Linode.
Linode's resources and prices are more predictable at every price point. Everything (RAM, CPU priority, disk, bandwidth) doubles at each price point, which makes it very easy to understand and to plan capacity.
DigitalOcean's prices are definitely low when RAM alone is considered for the comparison. However, the disk, CPUs, and bandwidth offered are in no way comparable to what Linode offers, and it's difficult to understand how those resources are set for each plan.
Please add CPUs to the comparison as well, to help understand the price/resource tradeoff.
It would be great if DO could offer better resources, or an alternate way of getting resources such as CPUs, SSD space, and bandwidth.
Just do a simple analysis of the $80 plan; the resources offered include:
80GB SSD Disk
For the same $80, DigitalOcean gives out more resources on entry-level plans ... Not sure why DO doesn't realize this and why it penalizes the users who show interest in bigger plans!
If we take the $5 droplet (512MB Memory, 1 Core, 20GB SSD Disk, 1TB Transfer), then for $80, 16 droplets can be purchased, with the total resources across all droplets:
8GB Memory, 16 Cores, 320GB SSD Disk, 16TB Transfer
possibly even on different physical machines!
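The droplet math above can be checked directly (plan numbers are the ones quoted in the comment):

```python
# Totals from buying 16 of the $5 droplets instead of one $80 plan.
droplet = {"ram_gb": 0.5, "cores": 1, "ssd_gb": 20, "transfer_tb": 1, "price_usd": 5}
count = 16

totals = {key: value * count for key, value in droplet.items()}
print(totals)  # 16 cores, 8 GB RAM, 320 GB SSD, 16 TB transfer for the same $80
```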
When the above is possible, why can't DigitalOcean offer similar resource doubling at every price point? It would be the same cost for DO, and it's more predictable for the customers!
If you offer low-cost plans with more resources, why penalize the users when they opt for bigger plans? I think bigger plans should be discounted, not penalized on hardware resources.
Are you encouraging all your users to be on $5 plans?? That wastes a lot of computing resources, since Linux has to run in every single instance - too much unwanted CPU, RAM, and disk overhead for each droplet. "LET'S GO GREEN" and save computing resources by encouraging users to go for bigger plans with better prices and resources.
Hope you guys do some thinking and apply some logic to your pricing, plans, and resources - that will make people go for bigger plans rather than fiddle with all these smaller ones!
Rackspace also does this right (as did SliceHost before being acquired by Rackspace). They do a good job of splitting up CPU and bandwidth. You have to remember: the bigger your server and the more you pay, the bigger the pipe you should have.
One of the BIGGEST problems with hosting like this is resource sharing. If you have a client whose application becomes problematic... or who is just downright abusive... others will suffer, and that's really not good. I understand that sometimes this means a user can get more CPU than what they pay for if others aren't using it, and that's cool and all... but we have to put a priority on consistency first.
I have to agree with Roger here. That is the one thing that makes Linode more appealing at the moment. Overall, everyone gets the maximum potential of the CPU if other nodes are not using it, but those who pay more get higher priority in that process. So performance is comparable based on the idea of potential rather than predefined constraints.
roger pack commented
Just to make sure I'm being clear, I believe the original poster here is actually requesting an option to spend more money for "faster cores."
My suggestion is a bit different: it's actually to "do what Linode does" here. Given a box with 8 cores and 8 single-core accounts on it, guarantee each account at least one core at all times, but if one account's 7 neighbors aren't utilizing their cores, allow the extra, underutilized cores to be used by that account - effectively multiplying CPU power for each account, without adding extra cost.
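On Linux, this kind of guaranteed-share-plus-burst scheduling can be sketched with cgroups v2. The group names, PID, and weights below are purely illustrative, and the commands assume root on a cgroups-v2 host:

```shell
# Give each guest an equal share of the CPU when the host is contended,
# but let any guest burst into idle cores (no hard cpu.max cap is set).
mkdir /sys/fs/cgroup/guest1
echo 100 > /sys/fs/cgroup/guest1/cpu.weight   # equal weight per guest

mkdir /sys/fs/cgroup/guest2
echo 100 > /sys/fs/cgroup/guest2/cpu.weight

# Move a guest's processes into its group (PID is illustrative):
echo 12345 > /sys/fs/cgroup/guest1/cgroup.procs
```

With equal weights, a busy guest gets its fair share under contention but can use all idle cores when its neighbors are quiet, which is exactly the behavior described above.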
Agreed, our app runs OCR, which is totally CPU dependent, so having a fast CPU is imperative before making a switch.
Shelby DeNike commented
Would love the ability to add more cores to any plan as needed as I require more CPU than I do disk or bandwidth.
Ivan Lagru commented
This sounds great, the upgrades should apply to the single core and dual core instances I guess, right?
Es Cendol commented
It says Planned; has anyone gotten a CPU upgrade yet?
Banh Mi Cua Em commented
What you pay is what you get, buddy. This is a VPS, not a real CPU from a dedicated server. But it's cheap.
Is the update active yet? Looking forward to it :)
Matt Razza commented
What really kills our performance on the smaller EC2 instances (and I think it's the issue with DO as well) is steal time. When we have prolonged CPU bursts, the hypervisor starts taking cycles away from our VM. The EC2 medium instances have a higher CPU priority, so we get basically zero steal time.
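On a Linux guest you can check steal time yourself: it's the ninth field of the aggregate `cpu` line in `/proc/stat`. A small sketch (Linux-only, since it reads procfs):

```python
# Read cumulative "steal" ticks: CPU time the hypervisor ran other
# guests while this VM had runnable work.
import os


def read_cpu_steal(path="/proc/stat"):
    with open(path) as f:
        for line in f:
            if line.startswith("cpu "):
                fields = line.split()
                # fields: cpu user nice system idle iowait irq softirq steal ...
                return int(fields[8]) if len(fields) > 8 else 0
    raise RuntimeError("no aggregate cpu line found")


if os.path.exists("/proc/stat"):  # only present on Linux
    print("steal ticks since boot:", read_cpu_steal())
```

Sampling this value twice and diffing gives the steal rate; a consistently growing number means a noisy neighbor is taking your cycles.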
Chirag Patel commented
Hey guys, I'm at Linode at the moment, and the only thing holding me back is the CPU juice. Can you please do this sooner? We can't wait to be on a fully SSD environment. :)
Keep up the good work, it's great for us (the customers).
Scott R. commented
If you use snapshots to build identical load-balanced servers, dynamically created or destroyed, DO has a major advantage.
Kenn Ejima commented
CPU Price Performance: DigitalOcean vs Linode
If you have multiple app servers (like Rails), Linode is a better option - e.g., 4 Rails servers (8 cores each) + 1 DB server.
DigitalOcean is better and well balanced when you only have one server; it's more oriented toward scaling up rather than scaling out.
Not sure of the sizing of the medium instances without specific metrics, but if you are running a single-threaded application as you mentioned, getting more cores won't increase performance - the remaining cores will just sit idle.
Matt Razza commented
We're looking to spin up and down instances of a real-time single-threaded application to scale with compute demands. We've found that Softlayer and EC2's medium instances provide a solid experience, but we haven't been able to get the same performance out of DigitalOcean - this is completely CPU bound, and getting additional cores provides no improvement since the application is largely single-threaded.
Scott R. commented
Technically, for the price of a Linode with 1 GB of RAM, DO gets you 2 cores at full utilization, with 2 GB of RAM.
DO has more RAM, less CPU.
Depending on your tasks, particularly with caching, you can make tradeoffs.
Running a website?
APC, static caching, and Varnish can eat a lot of RAM, but they reduce CPU usage.
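That RAM-for-CPU tradeoff is the same one any in-process cache makes. A minimal sketch with Python's `functools.lru_cache` (the page-rendering function is made up for illustration):

```python
# Keep rendered output in RAM so repeat requests skip the CPU work:
# the same tradeoff APC or Varnish makes at a larger scale.
from functools import lru_cache


@lru_cache(maxsize=1024)  # holds up to 1024 rendered pages in memory
def render_page(path):
    """Made-up stand-in for an expensive, CPU-heavy render."""
    return "".join(sorted(path * 100))  # deliberately wasteful work


first = render_page("/about")    # computed: costs CPU
second = render_page("/about")   # served from the in-RAM cache

print(render_page.cache_info())  # 1 hit, 1 miss after the two calls
```

The bigger `maxsize` is, the more RAM you spend and the more CPU you save - which is why a RAM-heavy, CPU-light plan can still suit a cached website.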