Enterprise and Hyperscale Requirements Have Converged
Chris Bair, Senior Vice President of Sales and Leasing at Stream Data Centers, explains how enterprise and hyperscale data center requirements have grown increasingly similar over the past two decades.
Today, enterprise and hyperscale data center requirements are quite similar. But that hasn’t always been the case. What’s behind the convergence? Evolution, of course.
Come back with me a moment to 1999. For those of us of a certain age, it might not feel so long ago. But consider that global internet traffic volume was about 0.075 exabytes and the cloud as we know it was just a twinkle in Marc Benioff’s eye. My hair was still blond, and enterprise data centers were still mostly tethered to corporate headquarters.
Due to network cost and performance constraints, enterprise data centers were typically on-premises, built for maximum redundancy, and, from an investment perspective, represented 15-20 year financial commitments. These complex, monolithic internal data centers protected mission-critical applications that were often ‘live’ in only one location (the corporate data center), with warm/cold disaster recovery options available at an offsite location.
A few things have changed in the decades since then.
Today, global internet traffic volume is about 1,935 exabytes – nearly 26,000 times the 1999 figure. The public cloud alone is a $305 billion juggernaut, and 92% of enterprises run at least some workloads in the cloud. My hair is now mostly grey, and enterprise data centers look a whole lot like hyperscale data centers. Network and application capabilities have evolved, and data center deployments have too.
Driven by efficiency, resiliency, and value, enterprises have effectively started to deploy how and where hyperscalers do. They are no longer tethered to particular data center markets or particular architectures; efficient, resilient, and cost-effective designs are now available to enterprise and hyperscale customers alike. As a result, both can deploy in the markets that offer low-cost power, lower geographic risk, government incentives, and other benefits.
Enterprises are Now Deploying How and Where Hyperscalers Do
Changes in technology and a mandate to efficiently and flexibly deploy IT infrastructure have changed the location and often the number of new enterprise deployments, and the design of those deployments. For example:
- Network is ubiquitous, reliable, and cheap, which frees enterprise users from having to deploy applications in the physical data centers on their headquarters campus. It used to be that the firewall was literally a fire-rated wall separating physical sections of an enterprise data center. Today, distributed workforces can securely access their company’s SaaS and enterprise systems from around the world.
- The majority of enterprise applications support multi-site replication, so enterprise users can now ‘act like a hyperscaler’ by deploying applications across geographically disparate data centers at a lower total cost per MW than the previous generation’s Tier IV-style data center. The end result is better resiliency, lower TCO, and little or no downtime or data loss in the event of the loss of a production data center (the sketch after this list illustrates the math).
- Enterprise users are sophisticated buyers. The same characteristics sought by hyperscale users, such as tax efficiency, reasonable labor costs, and renewable, inexpensive power, also drive enterprises’ site selection decisions. It’s why Ashburn, Dallas, and Phoenix are among the fastest-growing data center markets, with deployments by enterprise users that are nowhere close to their respective corporate headquarters.
- Enterprises and hyperscalers alike are incorporating design changes – especially for new ways of driving energy efficiency – to reduce environmental impact and operating costs while embracing renewable energy options.
- In general, both enterprise and hyperscale users have increased application availability by accepting that unnecessary data center complexity rarely helps. If the data center were an off-road vehicle, many enterprises have come to prefer the Toyota Land Cruiser over the Range Rover: the two vehicles do the same thing, but one costs less, lasts longer, and is easier to maintain and operate, while the other has a lot of bells and whistles.
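To make the replication point above concrete, here is a minimal sketch of the availability math behind multi-site deployment. The figures are illustrative assumptions, not numbers from Stream or any particular facility, and the model assumes failures at the two sites are independent, which geographic separation is meant to approximate:

```python
# Illustrative availability figures only (not Stream's numbers); assumes
# failures at the two sites are statistically independent, which
# geographic separation is meant to approximate.
single_site = 0.9999  # a solid standalone facility ("four nines")

# Active-active across two sites: the application is down only when
# both sites happen to be down at the same time.
two_site = 1 - (1 - single_site) ** 2

print(f"One site:  {single_site:.6%} available")   # 99.990000%
print(f"Two sites: {two_site:.8%} available")      # 99.99999900%
```

Two merely good sites beat one heavily hardened one, which is the economic logic behind trading a single Tier IV-style build for replicated deployments.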
These changes have not come at the expense of resiliency or security. Even with reduced complexity and lower cost, designs like Stream’s meet IEEE standards for delivering 99.9999% (‘six nines’) uptime. As networks and applications have improved, the industry has learned to build with less complexity and more resiliency than ever. Enterprise users can get to market quickly, be hyper-reliable, and be efficient. They gain the flexibility that comes with not having to make a 15-20 year investment, and they can drive efficiency without hurting resiliency.
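For context on what ‘six nines’ means in practice, a quick back-of-the-envelope converts an availability percentage into expected downtime per year (a sketch of the arithmetic, not a measurement of any particular facility):

```python
# Back-of-the-envelope: expected downtime per year at each availability tier.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.6 million seconds

for nines in (3, 4, 5, 6):
    availability = 1 - 10 ** -nines
    downtime_s = (1 - availability) * SECONDS_PER_YEAR
    print(f"{nines} nines ({availability:.4%}): {downtime_s:,.1f} s/year")

# 3 nines: ~31,557.6 s/year (about 8.8 hours)
# 4 nines: ~3,155.8 s/year (about 53 minutes)
# 5 nines: ~315.6 s/year (about 5.3 minutes)
# 6 nines: ~31.6 s/year (about half a minute)
```

Each additional nine cuts expected downtime by a factor of ten, so six nines works out to roughly half a minute of downtime per year.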
Colo Providers Deliver Cloud-Like Flexibility
Enterprises are migrating many workloads to the cloud, and not only for TCO savings. Cost is a factor, of course, but the most important cloud driver is the unparalleled flexibility enterprises gain by moving to usage-based services. That’s why, for the workloads they’re not moving to the cloud, enterprises are engaging with colocation providers to emulate the cloud’s usage-based flexibility. Enterprises no longer have to double down on internal data center assets, where infrastructure upgrades (or new construction) represent a 15-20 year commitment.
Come back with me again to 1999. That was also the year Rob Kennedy and Paul Moser, then executives at Stream Realty Partners, founded Stream Data Centers to build upon the success they’d had identifying and purchasing second-generation data center assets. In the decades since, Stream has developed a total of 24 data centers. True to our roots as real estate professionals, we have proactively invested in the markets we knew our customers – enterprises and hyperscalers – would need capacity in, including Dallas, Austin, Silicon Valley, San Antonio, Houston, Minneapolis, Chicago, Phoenix, and Northern Virginia.
Providers like Stream no longer have to serve enterprises and hyperscalers separately; serving each group has positioned us well to serve both as their requirements converge. Stream was founded to serve the large data center needs of large organizations, and today 90% of our capacity is leased to the Fortune 500. Two decades ago that meant the world’s largest banks and healthcare providers – and it still does. Today it also includes the world’s largest and most sophisticated technology companies.
Chris Bair is Senior Vice President of Sales and Leasing at Stream Data Centers.