From Data Center to Cloud and Back: Cloud Repatriation & the Edge

Last week we launched a special report series on the hybrid cloud. This week, we’ll look at how cloud repatriation and edge computing are impacting enterprise infrastructure needs and requirements.

Over decades of IT practice, the pendulum has swung repeatedly between centralized and distributed resources. Lower computing costs shifted workloads from a centralized approach to more distributed resources, while increased bandwidth enabled a more distributed workforce backed by the centralized resources needed for software licensing, data security, and today’s work-from-home/work-from-anywhere (WFH/WFA) world.

Enterprises now face two trends reshaping infrastructure needs and requirements. Cloud repatriation, the return of workloads from public clouds to enterprise-controlled resources, is growing as organizations assess underlying issues of cost, security, availability, and in-house skills. Edge computing continues to evolve around a basic principle: move compute power closer to where the data is for faster responsiveness.

Cloud Repatriation: Bringing Workloads Home

While “Cloud First” was a good mission slogan, many companies that have reviewed their public cloud usage have seen IT costs rise, performance drop, and/or compliance issues arise. CIOs and IT leaders are examining public cloud investments to see whether they have delivered lower costs and a solid return on investment, especially when evaluating the surge purchases of cloud services made in response to the COVID-19 pandemic.

In 2019, IDC predicted that up to 50 percent of public cloud workloads would move to on-premises infrastructure or private clouds for security, performance, and cost reasons, while a 451 Research report found that 20 percent of surveyed companies cited cost as the driver for moving workloads from public clouds to private ones. Business applications moved to a public cloud environment may not perform at scale, especially when the economics of storage requirements come into play.

Dropbox is a prominent example of early cloud repatriation. The company started on Amazon Web Services and in 2015 shifted to its own network of data centers, relocating nearly 600 petabytes of customer data. Today the company has precise control of its IT costs as it continues to grow, taking advantage of the latest technologies to cost-effectively increase storage density at a pace it controls.

Repatriation can deliver substantial benefits, providing an IT-managed infrastructure stack that scales with more predictable costs. Infrastructure can be optimized for performance based on application and user needs. In addition, workloads may need to move back to private clouds to meet security and compliance requirements.

High-end colocation provides a “clean slate” approach to repatriated workloads. Existing data center operations can continue without the potential for disruption while colocation provides the ability to scale physical infrastructure on an as-needed basis, especially if hyperscaling is anticipated.

Edge Computing: Different Definitions, Different Requirements

Today’s IT environment includes more tools than ever before, with edge computing the latest option for improving performance. Essential compute functions requiring rapid processing are performed closer to the end user, typically where results are time-critical, where milliseconds of delay affect user experience or processing, or where large volumes of data contain so much non-essential information that it makes more sense to filter locally and forward only what is needed than to consume bandwidth shipping it all.

Whatever the scenario, edge computing today is a tailored solution for a specific requirement. Edge doesn’t replace existing cloud and data center resources but off-loads them, rapidly processing data close to the source while filtered and analyzed information flows back to the cloud and data center for storage, backup, and analysis in aggregate. Updates and management of edge resources still take place through IT and the data center; the data center, clouds, and edge complement one another.
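To make this filter-and-forward pattern concrete, here is a minimal sketch in Python of an edge node reducing a raw sensor window to a compact summary before anything crosses the WAN. The readings, threshold, and summary fields are hypothetical illustrations, not any vendor’s API.

```python
import json
import statistics

def summarize_window(readings, threshold=75.0):
    """Reduce a window of raw readings to a compact summary,
    keeping only the anomalies that exceed the threshold."""
    return {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "max": max(readings),
        "anomalies": [r for r in readings if r > threshold],
    }

# Simulated window of 1,000 raw readings arriving at the edge node.
window = [70.0 + (i % 10) * 0.1 for i in range(1000)]
window[500] = 82.4  # one genuine anomaly buried in the stream

summary = summarize_window(window)
raw_bytes = len(json.dumps(window))
summary_bytes = len(json.dumps(summary))
print(f"raw: {raw_bytes} bytes -> summary: {summary_bytes} bytes")
# A real edge node would forward `summary` upstream and archive or
# discard the raw window, trading local compute for WAN bandwidth.
```

Only the summary leaves the edge; the thousand raw readings never consume upstream bandwidth, which is the trade described above.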

There’s no single model for how edge computing should be implemented. Cell phone operators may implement edge computing simply by placing a couple of racks of servers in a controlled environment shelter at the base of a tower. Verizon has teamed with Microsoft Azure to place edge computing services directly at the customer site to enable low-latency, high-bandwidth applications involving computer vision, augmented and virtual reality, and machine learning.

The Microsoft Project Natick module takes edge computing to the bottom of the ocean. (Image: Microsoft)

For greater compute requirements in remote locations, Microsoft has deployed shipping containers full of densely packed racks and demonstrated sealed undersea data centers designed to be “lights out, hands off” resources capable of operating for five years or more. Such deployments place computing resources in close physical proximity to users to provide low latency and high responsiveness for workloads in such areas as mining, oil and gas production, and military operations.

An edge computing solution doesn’t necessarily have to locate servers physically at the customer location; it can instead take advantage of high-speed broadband and short distances between data inflow and compute capacity. Lumen, formerly CenturyLink, has invested several hundred million dollars to build over 60 edge compute locations worldwide to provide low-latency computing resources for its customers, with fiber providing the connectivity between customers and the closest edge center. The carrier believes the “metro edge compute” design is ideal for supporting smart manufacturing, point-of-sale transactions, video analytics, retail robotics, and various IoT use cases.
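One way to picture how a client lands on the “closest edge center” is a simple latency probe, sketched below in Python. The hostnames are entirely hypothetical; in practice, carriers steer traffic with network-level mechanisms such as anycast or DNS rather than client-side code.

```python
import socket
import time

# Hypothetical metro edge sites; a real deployment would use the
# carrier's published endpoints.
EDGE_SITES = {
    "chicago":  ("edge-chi.example.net", 443),
    "dallas":   ("edge-dal.example.net", 443),
    "new-york": ("edge-nyc.example.net", 443),
}

def connect_latency(host, port, timeout=1.0):
    """Time a TCP connect as a rough proxy for round-trip latency."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.perf_counter() - start
    except OSError:
        return float("inf")  # unreachable sites sort last

def nearest_edge(sites):
    """Pick the site with the lowest measured connect time."""
    return min(sites, key=lambda name: connect_latency(*sites[name]))

print("routing workload to:", nearest_edge(EDGE_SITES))
```

The shorter the fiber path between the customer and the winning site, the lower that measured latency, which is the premise of the metro edge design.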

Enterprises are utilizing the resources of high-end colocation facilities to create their own edge solutions, leveraging low-latency fiber connectivity through multiple carriers and on-premises meet-me facilities for gigabit-speed Ethernet access to carriers and cloud providers. Combined with bespoke server and storage solutions designed by the enterprise, a colocation facility can deliver an edge computing solution to a metro area or specific geographic region, with the ability to be physically replicated in other areas as needed.

Download the full report, Hybrid Cloud, courtesy of NTT, to learn more about how workloads are continuing to shift between data center, cloud, and colocation. In our next article, we’ll look at the benefits and limits of data center, cloud, and colocated solutions. Catch up on the last article here.


