The race to innovate in cloud networking has accelerated into a sprint. Most recently, Microsoft announced the latest results from Project Catapult, its effort to put decades-old field-programmable gate arrays (FPGAs) to work; the technology last burst onto the scene in a meaningful way with Bing. FPGAs are being brought to the forefront again as a way to increase the speed and efficiency of Azure while decreasing its cost. The effects of these cloud networking innovations will be felt in myriad ways, so it’s worth exploring how these technologies, using Microsoft’s FPGAs as an example, will start to take shape in network environments.


Decoding the Acronyms

Given all of the different custom server technologies at its disposal, it’s worth exploring why Microsoft is going in the direction of the FPGA rather than the central processing unit (CPU), graphics processing unit (GPU), or application-specific integrated circuit (ASIC).

The CPU is a general-purpose processor with a broad, published instruction set, and while not a speed demon, it can do everything from IP address resolution to analog decoding and graphics. That versatility is why CPUs are ubiquitous in nearly every device type, from phones to computers to embedded devices. A GPU, on the other hand, has hundreds or even thousands of cores, each performing only a handful of tasks but doing so very quickly, thanks to custom silicon, programming, and parallelism (think Bitcoin mining and NSA data centers). Finally, the ASIC, the networking equivalent of the GPU, is a custom chip that knows how to route network traffic without all of the reporting fuss; it moves packets efficiently and quickly but slows considerably for non-routine tasks. Each of these chips requires an element of custom building, with tradeoffs in speed and efficiency.

What Microsoft did with Bing, and how Bing was able to catch up to Google, started with the search for ways to handle neural net processing and machine learning. Microsoft knew Bing would need the kind of performance that dedicated chips deliver, while still being able to adapt over time. So it turned to a slightly old-school technology: FPGAs. Instead of standing up specialized compute nodes, Microsoft put the programmable chips in each of its servers, localizing task-specific compute power that was far more efficient for certain workloads than the servers themselves.

Perhaps Bing’s legacy won’t be that it became a respectable challenger to Google search, but that it launched the architecture behind Azure’s Project Catapult, the distributed FPGA network. Instead of building custom chips, Microsoft built a distributed network of reprogrammable chips designed for machine learning and other capabilities, including software-defined networking (SDN) and routing, as part of a standard infrastructure that is unique to Azure.

It’s important to note that alongside the recent FPGA announcements, Azure also lowered its pricing, further accelerating the price and efficiency war between cloud goliaths Azure and Amazon Web Services (AWS) and, in the process, making the cloud networking race a bit more interesting.

But while this is all a very high-level examination of the industry, the question remains: how will this actually affect IT professionals?


Reality for IT Professionals: Hyperconvergence and DevOps in a Hybrid IT World

Hyperconvergence is where we are likely to see this all come into play for the IT professional. For example, Azure Stack is Microsoft’s version of enterprise hyperconvergence: it essentially allows one to deploy Azure in a data center. It blurs the lines between enterprise and cloud technologies and makes Azure increasingly attractive for the enterprise. With Azure Stack, everything works like it does in the cloud, but on-premises. Microsoft is essentially pushing highly converged capabilities into a rack of homogeneous systems sitting side-by-side, supporting a common management and monitoring toolset and, finally, transitioning administrators from Infrastructure-as-a-Service (IaaS) to Platform-as-a-Service (PaaS).

With Azure in the data center, more IT professionals will be moving on-premises workloads to Azure Stack, because it lets them manage enterprise infrastructure programmatically and, ultimately, hit a button and move elements of that infrastructure to Azure. Talk about hybrid IT.
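To make “manage enterprise infrastructure programmatically” a little more concrete, here is a minimal sketch using the Azure SDK for Python against public Azure. The subscription ID, resource group name, and region are placeholder assumptions, and an Azure Stack deployment would point the same kind of tooling at its own management endpoint and credentials.

```python
# A minimal sketch of programmatic infrastructure management with the Azure SDK for Python.
# Assumes the azure-identity and azure-mgmt-resource packages are installed and that
# DefaultAzureCredential can authenticate (e.g., via an Azure CLI login or managed identity).
# The subscription ID, resource group name, and region below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, SUBSCRIPTION_ID)

# Create (or update) a resource group declaratively.
rg = client.resource_groups.create_or_update(
    "hybrid-demo-rg", {"location": "eastus"}
)
print(f"Resource group {rg.name} provisioned in {rg.location}")

# Enumerate the resources currently deployed in that group.
for resource in client.resources.list_by_resource_group("hybrid-demo-rg"):
    print(resource.type, resource.name)
```

The same scripted approach, applied consistently on-premises and in the cloud, is what makes the “hit a button and move it” scenario plausible.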

This introduces another approach to hybrid IT and expands the DevOps function in two ways: cloud networking technologies are being applied to enterprise environments (with on-premises businesses hiring Azure developers, for example), and DevOps naturally becomes a necessary discipline for anyone managing applications through APIs rather than GUIs.


Best Practices

With the explosion of cloud networking innovations leading to hyperconvergence and an increased blending of traditional and cloud technologies in the enterprise, IT professionals need to be armed with best practices to keep pace with the changing landscape. They should consider the following:

  • Expand understanding of monitoring. Effective network monitoring today means looking at everything from the components of the application stack (databases, servers, storage, routers, and switches) to internal network firewalls, the internet path, and the internal networks of Software-as-a-Service (SaaS) providers. Detailed troubleshooting still requires information about each component of application delivery, but from a monitoring perspective it’s more important than ever to monitor user experience across the entire delivery chain, including the internet and service provider networks.

  • Learn the intricacies of virtual private cloud (VPC) networking. This means mastering security policy management, policy group assignment, and security policy auditing. In short, IT professionals can no longer get by with just knowing how to secure internal networks; they must understand how to replicate that process in their VPC. (A rough sketch of what an automated policy audit could look like follows this list.)

  • Focus on understanding how bulk traffic travels. In on-premises environments, the main concern when running backups is whether offline analytics processing runs at the same time, and whether the two should be separated to avoid overloading storage. In cloud environments this is much more complex and involves understanding where backups are going and where processing is happening. IT professionals should keep an eye on the evolving nature of network traffic across LAN, WAN, and VPC networks.

  • Hit the books. All of these technologies will require a burst of education to get caught up, and IT professionals shouldn’t wait. These innovations are coming fast and furious, and keeping skill sets fresh is essential to adopting the DevOps mentality.

  • Re-evaluate services regularly. Technology is evolving quickly, and the services offered by cloud providers are highly differentiated. Vendors are constantly adding capabilities and catching up with one another, as with FPGAs or the blockchain services from AWS and Azure. Understanding these ever-evolving offerings matters because the business will look to IT professionals to be experts in these services, just as it does with enterprise technologies.
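As referenced in the VPC bullet above, here is a minimal, hypothetical sketch of a security policy audit written with boto3, the AWS SDK for Python. It lists the security groups in one region and flags inbound rules open to the entire internet; the region and the single “open to 0.0.0.0/0” check are simplifying assumptions, not a complete audit.

```python
# A minimal sketch of a VPC security policy audit using boto3 (AWS SDK for Python).
# Assumes AWS credentials are already configured; the region is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Walk every security group and flag inbound rules open to the entire internet.
for group in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in group.get("IpPermissions", []):
        open_ranges = [
            r["CidrIp"] for r in rule.get("IpRanges", [])
            if r.get("CidrIp") == "0.0.0.0/0"
        ]
        if open_ranges:
            port = rule.get("FromPort", "all")
            print(
                f"{group['GroupId']} ({group['GroupName']}): "
                f"port {port} open to {open_ranges}"
            )
```

A check like this, scheduled to run regularly, is one way to extend familiar internal-network auditing habits into the VPC.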

It can be very difficult to keep up with all of the changes in cloud networking and how they will begin to affect IT professionals in hybrid environments. But a practical view of these technologies’ likely effects on IT will enable IT professionals to think about the changes in a level-headed manner and approach the future of the business with confidence.