Cloud hosting has been in existence for 10 years. Going into its second decade, it’s time for something new. So, what’s the latest trend? Good old, plain dedicated servers — but rebuilt and reinvented.

Although it is not required by any definition of a cloud, virtualization has become part and parcel of cloud hosting, not because it is necessary, but merely because it is very convenient.

First and foremost, cloud is great for businesses of all sizes and in a variety of industries because it allows them to manage their resources easily. There’s no need to pre-order a server and wait until it’s ready, and virtualization has provided effective tools for this. For providers, on the other hand, virtualization brings the theoretical possibility of utilizing resources more efficiently, and thus of lowering both their costs and the prices paid by end users.

It’s worth noting that in a virtualized environment, customers always have to share resources. As a result, even when resources are generally abundant, at certain times cloud servers on some nodes may experience resource starvation. There are several ways to mitigate this. Providers can keep larger reserves (which leads to less efficient utilization and higher prices) or introduce a complicated billing mechanism that regulates such situations by attaching extra costs to heavy usage (e.g., CPU credits on Amazon EC2).
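To make the credit idea concrete, here is a minimal Python sketch of how such a scheme could work in principle; the accrual rate, baseline share, and cap are hypothetical numbers chosen for illustration, not Amazon’s actual figures.

```python
# Illustrative simulation of a CPU-credit scheme (all numbers are hypothetical,
# not a real provider's rates). An instance earns credits at a fixed rate and
# spends them whenever it bursts above its baseline share of a vCPU.

ACCRUAL_PER_HOUR = 12         # credits earned per hour (hypothetical)
BASELINE_UTILIZATION = 0.10   # share of a vCPU covered without spending credits
MAX_BALANCE = 288             # cap on accumulated credits (hypothetical)

def simulate(hourly_utilization):
    """Return the credit balance after each hour of the given utilization trace."""
    balance = 0.0
    history = []
    for util in hourly_utilization:
        burst = max(0.0, util - BASELINE_UTILIZATION)
        # Treat one credit as one vCPU-minute at full load, i.e. 60 credits/hour.
        spent = burst * 60
        balance = min(MAX_BALANCE, balance + ACCRUAL_PER_HOUR) - spent
        if balance < 0:
            # Out of credits: the provider throttles the instance or charges extra.
            balance = 0.0
        history.append(balance)
    return history

# A mostly idle workload with one busy hour: the busy hour drains the balance.
print(simulate([0.05, 0.05, 0.9, 0.1]))
```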

On top of that, many systems still do not scale out, only up, nor do they show any tendency toward becoming ready to scale out. Anyone who has run SAP HANA can confirm this point, and it’s not the only example. For such software, running on top of a virtualized cloud is not a viable option: it keeps requiring more and more resources on a single machine, and the overhead of virtualization can be unbearable.

With the rise of Docker — an open platform for developers and sysadmins to build, ship, and run distributed applications, whether on laptops, data center VMs, or the cloud — consolidation can be done by a company itself, not by a provider. Running Docker containers is a no-brainer compared to running a full virtualization stack: it can be done by a company’s own developers and does not require a dedicated person.
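As a small illustration of how little ceremony this involves, the following sketch uses the Docker SDK for Python (the `docker` package) to start a containerized service straight from application code; the image, port, and container name are placeholders.

```python
# Minimal sketch: a developer starting a containerized service directly,
# with no virtualization layer in between. Requires the `docker` package
# (pip install docker) and a running Docker daemon; image/port/name are placeholders.

import docker

client = docker.from_env()

# Pull and run an nginx container, publishing container port 80 on host port 8080.
container = client.containers.run(
    "nginx:latest",
    detach=True,
    ports={"80/tcp": 8080},
    name="demo-web",
)

print(f"Started container {container.short_id}; serving on http://localhost:8080")
```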

So, the real next big thing in hosting will be baremetal servers — very similar to good old dedicated servers. In our experience, companies are buying baremetal servers and building their own clouds on top of them. Sometimes they go for full-featured virtualized clouds, sometimes for simple Docker-based solutions.

However, we are seeing certain workloads running on baremetal servers without any “cloudification.” One good example is Aerospike, a data store heavily used by the AdTech industry.

Baremetal servers can be ordered and deployed within 30 minutes, which is still a lot compared to cloud servers spinning up in less than a minute. One good barometer is to ask yourself, “How often is it critical for me to have an instance up in one minute or less?” The answer should be “rarely,” but not “never.” That’s why cloud should still be present in a hosting provider’s portfolio — and it should be integrated with dedicated servers. It’s just a matter of proportion: the pendulum has swung to the cloud side, and is now moving back to baremetal.

Contrasting cloud and baremetal hosting is not, strictly speaking, an accurate dichotomy: by definition, cloud does not require virtualization. That is why I firmly believe that baremetal is the new cloud. Even at the entry level, at prices under $10 per month, baremetal is slowly gaining momentum, thanks in large part to the success of ARM technology, which makes it possible to run a very small and cheap baremetal server.

However, in order to win ground back, baremetal servers should be provisioned quickly, integrated with the “classical” virtualized cloud, and offered with an easy, convenient management interface, including an API.
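What such an API might look like in practice is sketched below. The base URL, endpoint, payload fields, and token are purely hypothetical placeholders, not any particular provider’s API; the overall shape (an authenticated request to create the server, then polling until it is active) is the typical pattern.

```python
# Hypothetical example of provisioning a baremetal server through a provider's
# REST API. Every URL, field, and value here is a placeholder; real providers
# differ, but the create-then-poll flow is representative.

import time
import requests

API_BASE = "https://api.example-host.com/v1"           # placeholder
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}    # placeholder

# Request a new baremetal server.
resp = requests.post(
    f"{API_BASE}/baremetal/servers",
    headers=HEADERS,
    json={"plan": "bm-small", "region": "ams1", "os": "ubuntu-22.04"},
)
resp.raise_for_status()
server = resp.json()

# Poll until the server is provisioned and active.
while server.get("status") != "active":
    time.sleep(60)
    server = requests.get(
        f"{API_BASE}/baremetal/servers/{server['id']}", headers=HEADERS
    ).json()

print("Server ready at", server.get("ip_address"))
```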

For companies looking for the “new” cloud hosting, baremetal servers represent a viable, effective base on which to build their clouds.