Reexamining Cloud On-Ramp: Access to Your Cloud Applications May Not Be as Direct as You Think

As businesses become more dependent on high-speed connections, the performance difference that a direct connection provides becomes increasingly important.

The primary goal of any cloud strategy is to accelerate and optimize access to business-critical applications. Whether you are simply using Microsoft Office 365 or streaming rich voice and video traffic in real time, the fastest possible connection will always deliver the best results. For an individual – say, a super user trying to replicate their in-network experience from a home office – the problem can often be solved with a faster connection. More often, though, organizations are trying to deliver an optimal user experience to a branch office full of users sharing an often limited WAN connection. And the problem only escalates when an organization has to address it across a collection of branch offices distributed regionally or globally.

A growing number of organizations are turning to SD-WAN to become cloud-ready and improve the branch user experience. Flexible, dynamic business rules that automatically discover, monitor, and transition to the best available connection are transforming branch offices around the world. And for individuals who need to shelter in place, desktop versions coupled with LTE can maintain a high-speed connection even when the home network's bandwidth is being monopolized by other devices and users in the home.
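The "business rule" idea above can be sketched in a few lines: continuously score each available link on measured health metrics and steer traffic to the winner. This is a minimal illustration only – the link names, metrics, and scoring weights below are invented assumptions, not any vendor's actual SD-WAN policy language.

```python
# Hypothetical probe results for three WAN links; a real SD-WAN edge
# device would refresh these continuously from active path monitoring.
LINKS = {
    "mpls":      {"latency_ms": 40, "loss_pct": 0.1},
    "broadband": {"latency_ms": 20, "loss_pct": 0.2},
    "lte":       {"latency_ms": 60, "loss_pct": 1.0},
}

def score(metrics):
    # Lower is better: latency in ms plus a heavy penalty per percent of loss.
    # The weight of 50 is an arbitrary illustrative choice.
    return metrics["latency_ms"] + 50 * metrics["loss_pct"]

def pick_link(links):
    # The "business rule": steer traffic to the best-scoring link,
    # re-evaluated each time fresh probe results arrive.
    return min(links, key=lambda name: score(links[name]))

print(pick_link(LINKS))  # broadband
```

Here broadband wins (score 30) over MPLS (45) and LTE (110); if broadband's loss spiked, the same rule would automatically transition traffic to MPLS.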

The Power of Cloud On-Ramp

Central to that optimization process is cloud on-ramp: a strategy designed to enhance not just connectivity but application performance by selecting optimal application paths. From a routing perspective, the Internet is exceptionally volatile, which means the reliability of a connection can change every few seconds. By leveraging cloud on-ramp, organizations can reduce the latency inherent in an Internet connection and maximize application performance.

In a nutshell, cloud on-ramp provides a shortcut to whichever cloud environment is hosting an application, whether SaaS or IaaS. Think of it this way: if you were in New York and had to get to Los Angeles, there are many paths you could take. You could fly from New York to Chicago, from Chicago to Denver, and then from Denver to Los Angeles. That is the equivalent of routing an application across the Internet. Or you could take a direct flight from New York to Los Angeles.

That is the equivalent of a cloud on-ramp. It gets you there faster because it only uses the most direct and optimized paths. There are no stops, no layovers, and no delays. To achieve this, most SD-WAN providers deploy a controller (or a handful of controllers) in the cloud that their SD-WAN solution connects to. That controller then connects to a high-speed application broker that specializes in providing optimal paths to critical applications across a dedicated, high-speed backbone.
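The flight analogy can be made concrete with a toy latency calculation. The per-hop numbers below are invented for illustration; the point is simply that latency accumulates with every hop, while a direct on-ramp path pays it only once.

```python
# Invented per-hop latencies, in milliseconds, for illustration only.
internet_path = [("NY", "Chicago", 20), ("Chicago", "Denver", 25), ("Denver", "LA", 22)]
onramp_path   = [("NY", "LA", 35)]  # one direct hop across a dedicated backbone

def path_latency(hops):
    # Total one-way latency is the sum of the per-hop latencies.
    return sum(ms for _src, _dst, ms in hops)

print(path_latency(internet_path))  # 67
print(path_latency(onramp_path))    # 35
```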

Not all Cloud On-Ramp Solutions are Alike

Almost every vendor with a cloud on-ramp offering provides some version of this. However, application performance depends on how far an SD-WAN edge device is from the vendor's cloud controller, and that distance can also affect the cost of the connection. The difference is the length of the on-ramp path: the distance between the physical SD-WAN edge device deployed in the branch office and wherever its controller has been deployed in the cloud. The greater that distance, the more likely your connection will still end up bouncing around the Internet. And the more hops, the greater the potential for latency.

The better option is to eliminate that giant leap between your SD-WAN edge device and the cloud controller. That starts with an SD-WAN solution that has a controller built directly into the SD-WAN device. Next, use an application broker service like Equinix or Teridion with not one, but hundreds of potential access points to its backbone.

That way, rather than having every branch try to connect to the same cloud controller, regardless of how far away it is, before reaching a high-performance application backbone, your New York branch office can connect directly to the on-ramp it is physically closest to. Your offices in LA and Singapore can do the same. As a result, users experience more consistent, optimized application performance regardless of their physical location, thanks to a distributed control plane architecture rather than reliance on a centralized cloud controller – a far more archaic networking model.
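That nearest-entry-point selection can be sketched as follows. The branch names, entry-point names, and round-trip times are hypothetical assumptions for illustration; no real broker API is invoked.

```python
# Hypothetical probe results: round-trip time in ms from each branch
# to each backbone entry point. A real edge device would measure these.
branch_rtts = {
    "new-york":    {"us-east": 8,   "us-west": 72,  "ap-southeast": 210},
    "los-angeles": {"us-east": 70,  "us-west": 6,   "ap-southeast": 160},
    "singapore":   {"us-east": 230, "us-west": 170, "ap-southeast": 4},
}

def nearest_onramp(rtts):
    # Each branch independently picks its lowest-latency entry point,
    # rather than backhauling to one distant central controller.
    return min(rtts, key=rtts.get)

for branch, rtts in branch_rtts.items():
    print(branch, "->", nearest_onramp(rtts))
```

Each branch ends up on its regional entry point (New York on us-east, LA on us-west, Singapore on ap-southeast), which is the distributed control plane behavior described above.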

The Fastest Route to Cloud Applications is Always the Shortest and Most Direct Path

As businesses become increasingly dependent on high-speed connections, and the applications we rely on, such as streaming rich media, become more latency-sensitive, the performance difference that a direct connection provides grows ever more important. That only happens when an SD-WAN edge device can identify the applications in use and dynamically choose the most direct path to each one, rather than leaving that decision to a controller deployed far away in the cloud. The shortest, fastest distance between two points is still a straight line.

Filed in: News


© IT Voice | Online IT Magazine India. All rights reserved.