The cloud has evolved, and I’ve seen this evolution first-hand. I worked at Cisco, pioneering ways to make applications accessible anywhere before cloud computing was even a term, and at Digital Island, building one of the original application hosting and content distribution networks. At Zynga, I built on those experiences to help deliver cloud-based social games to 80 million players a day.

The insights I gained from those roles have been crucial in my current position: helping to scale the ServiceNow Enterprise Cloud to serve more than a third of the Forbes Global 2000. While B2C and B2B are very different, the lessons learned earlier in my career have helped us push the limits of availability, secure enterprise data, and perform billions of transactions each month. They have also led me to believe that in five years we won’t know whether the services we use at work and at home are Internet-connected.

Over time, I have come to understand three key principles that guide the scalability, stability, and accessibility of cloud applications and the supporting infrastructure: building to scale, making the cloud invisible, and refining to perfect.


Building to Scale

In my role at Zynga, we launched games that were immediately accessible to tens of millions of users. We built a cloud application and infrastructure that allowed for sudden spikes in usage depending on the day, the time, the game’s popularity, and the long-term interest and loyalty of users. Launch day was almost always full of unknowns. As a result, we had to plan for both best-case and worst-case scenarios so that the cloud worked as well as it could under stress and at scale.

While the enterprise cloud has some peaks and valleys too, it must be built with extremely high availability and fault-tolerant systems so that it scales to handle traffic bursts that play out not just over a day or two but over the lifetime of the enterprise. All sorts of things need to be modified as your application scales and grows in the cloud.

For years, overprovisioning has been the standard answer to scalability. To ensure the scale the application needs to perform, excess storage, hardware, bandwidth, and RAM are provisioned beyond the anticipated need. However, this approach is not effective and becomes cost-prohibitive for the business. One good rule of thumb is to plan for an order-of-magnitude increase in your cloud environment: not necessarily to spend to that level, but to understand whether what you are building will scale 10X.
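As a back-of-the-envelope illustration of that rule of thumb, the 10X question can be asked of each resource in a design without committing to spend at that level. The sketch below uses purely hypothetical utilization figures and resource names; it is not drawn from any real environment.

```python
# The 10X rule of thumb as a back-of-the-envelope check: model current peak
# utilization of each resource and ask whether the design (not the spend)
# leaves room for an order-of-magnitude increase.
# All figures below are hypothetical placeholders.

SCALE_FACTOR = 10

# Current peak utilization as a fraction of the designed ceiling for each resource.
current_peak_utilization = {
    "storage": 0.06,     # 6% of maximum addressable storage
    "compute": 0.12,     # 12% of maximum schedulable CPU
    "bandwidth": 0.04,   # 4% of maximum network throughput
    "memory": 0.15,      # 15% of maximum RAM footprint
}


def check_headroom(utilization: dict, factor: int = SCALE_FACTOR) -> list:
    """Return the resources whose designed ceiling would be exceeded at factor-times load."""
    return [name for name, used in utilization.items() if used * factor > 1.0]


bottlenecks = check_headroom(current_peak_utilization)
if bottlenecks:
    print(f"Design will not scale {SCALE_FACTOR}X without rework: {bottlenecks}")
else:
    print(f"Design has headroom for a {SCALE_FACTOR}X increase.")
```

The point of an exercise like this is not the arithmetic; it is forcing the conversation about which parts of the design would break first, long before the traffic arrives.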

It is difficult to design and build a cloud environment from the ground up when you don't yet know what difficulties the applications being developed will run into. While intelligent decisions can be made based on experience, hardware and software modifications will need to be made along the way. Continual modification may seem more expensive, but it is actually far more cost-effective in the long term for ensuring that the cloud infrastructure meets the needs of the application.

For example, printers, phones, and video conferencing systems are a basic part of the Internet of Things. They are fairly reliable and consistent, and most clouds need only minor modifications to support these types of devices. But we are seeing some very large “things” in the Internet of Things too: cooling equipment, large commercial printers, aircraft engines, and even medical devices that sit next to a hospital bed and must be monitored. Each of these types of devices has different instruments that need to be monitored, and this must be taken into consideration in the design of the cloud architecture so that the data being captured is visible to the right people and in the right workflows. Software such as the configuration management database will need to be modified to recognize these “things,” and the network will need to be modified to safely and securely transmit large volumes of data to the appropriate users.
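As an illustration only, here is a minimal sketch of what that recognition step might look like, using a hypothetical device-class registry rather than any particular CMDB product's API. The device class, metrics, and owners shown are invented.

```python
# An illustrative sketch (not any vendor's actual API) of teaching a
# configuration management system to recognize new classes of "things"
# and route their data to the right owners.

from dataclasses import dataclass


@dataclass
class DeviceClass:
    name: str                   # e.g. "aircraft_engine", "bedside_monitor"
    monitored_metrics: list     # the instruments this class of device exposes
    owners: list                # teams who should see the captured data
    expected_mb_per_day: float  # helps size the network path for the telemetry


# Registry of recognized device classes, keyed by class name.
registry: dict = {}


def register_device_class(device_class: DeviceClass) -> None:
    """Make newly discovered devices of this type recognizable to the system."""
    registry[device_class.name] = device_class


register_device_class(DeviceClass(
    name="bedside_monitor",
    monitored_metrics=["heart_rate", "battery_level", "firmware_version"],
    owners=["clinical-engineering", "security-operations"],
    expected_mb_per_day=250.0,
))

print(registry["bedside_monitor"].owners)
```

However the registry is implemented, the design question is the same: each new class of device carries its own instruments, owners, and data volumes, and the architecture has to account for all three.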


Making the Cloud Invisible

Next, the cloud has to be invisible to users. Because of this, it is essential to go back to the application developers, understand how the software works and operates across the data center, the network routers, firewalls, load balancers, middleware stacks, and so on, and make all of that invisible to the developers. Developers don’t want to have to think about the hardware or which vendor is powering which part of the infrastructure. They just want their application code to work the way they expect.

The reality is that some clouds are better built to handle highly centralized workloads. Others are better suited to distributed computing or mobile workloads. As we design the cloud, we have to think about the particular application and consider the parameters we need to implement to make it work while remaining invisible to developers and their end users.

What you need to do is study the detailed flow of data for the application: starting in the application logic, down through the virtualization and operating system layers, out to the storage devices, onto the network (looking at east-west and north-south traffic patterns), and through the load balancers, firewalls, intrusion detection devices, and so forth. If you can draw a thread all the way from the application logic to the data center floor, which is literally the foundation of the cloud, then you can start to fine-tune the infrastructure to meet the needs of the business.

By studying and understanding this application flow in detail, you can find pinch points in the infrastructure: places where the network is not quite fast enough, or where more memory, more storage, or more security is needed. These pinch points are where users see through the abstraction and the cloud becomes visible.
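One way to hunt for those pinch points is to compare measured latency at each layer of the flow against a budget. The sketch below is illustrative only; the layer names, budgets, and measurements are hypothetical, not taken from any real trace.

```python
# A minimal sketch of pinch-point hunting: compare measured latency at each
# layer of the application flow against a budget and flag the layers where
# the cloud "becomes visible." Layer names and numbers are hypothetical.

latency_budget_ms = {
    "application_logic": 20,
    "virtualization": 5,
    "storage": 15,
    "network_east_west": 10,
    "load_balancer": 5,
    "firewall": 5,
}

measured_p99_ms = {
    "application_logic": 18,
    "virtualization": 4,
    "storage": 42,            # the pinch point in this made-up trace
    "network_east_west": 9,
    "load_balancer": 6,       # slightly over budget as well
    "firewall": 3,
}

# Keep only the layers where measured latency exceeds the budget.
pinch_points = {
    layer: (measured, latency_budget_ms[layer])
    for layer, measured in measured_p99_ms.items()
    if measured > latency_budget_ms[layer]
}

for layer, (measured, budget) in pinch_points.items():
    print(f"{layer}: p99 {measured} ms exceeds budget of {budget} ms")
```

In practice the measurements would come from tracing and monitoring tools, but the comparison is the same: the layers that blow their budget are the ones users will feel.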

To make the cloud invisible again, those pinch points have to be eliminated. This may mean provisioning new servers with a different memory footprint, changing the security posture, or purchasing solid-state storage based on the needs of the application and network. It often takes multiple iterations to get right; there is rarely a magic bullet that suddenly makes everything perform well. If you iterate properly, the outcome is infrastructure built for the needs of the business.


Refining to Perfect

Agile development philosophies encourage developers to deliver valuable software continuously and refine it as needed to maintain technical excellence. There is an old developer saying: “No good product survives contact with the customer.” We can’t take an application from a development environment, roll it out to production, and scale it up to a million users without expecting some bumps and hiccups along the way that need to be refined and perfected over time. In fact, how users work with an application may change over time, and the application may need to be refined as a result.

I’ve tried to ingrain this continuous-development mentality into my team’s culture. We must help our customers identify issues, understand why those issues are happening, and watch the flow from the data center floor through the application stack in order to make it all more efficient. By taking the time to do this, we build a team that not only can fix issues but also understands the entire flow and sees the broader, macro-level issues. Too often we find people who are great at developing applications or scaling databases but don't really understand how their software connects to the hardware, the network, or the firewalls. All of this must be done while continuously tweaking and refining so that the system is perfected over time.

There's no way to come out of the gate with a perfect product. Applications must be refined along the way because you don't know what customers are going to do with them or how they will interact with the infrastructure.

Applications must be continually refined, changed and modified over time according to user needs — keeping in mind availability, security, and the need for a performant solution.


The Future of the Cloud

There will come a day when most users will not be able to tell whether the application they are using is Internet-connected or cloud-based. This is because all clouds will be highly available, secure, and able to scale to the needs of the business. Organizations won’t think twice about putting data in the cloud because they will know it is secure wherever it is. Developers will refine products and improve them along the way. As part of that refinement, artificial intelligence will be incorporated into applications in ways that not only address privacy concerns but also help workers be more productive and efficient.

This will occur as developers apply the three key principles of building to scale, making the cloud invisible, and refining to perfect.


This article, “3 Key Principles To Enterprise Cloud Evolution,” was originally published in Cloud Strategy Magazine.