Data Center Design for Flexibility: A Framework for Success Today and in the Future
In this edition of Voices of the Industry, DataBank’s Eric Swartz, senior director of infrastructure engineering, examines how continued demand for computing power is changing data center design today and what data centers will look like in the near future.
One thing is clear: The demand for computing power will continue to increase as companies in all industries look to meet their customers’ expectations through more innovative application of digital technology. This has real implications, not just for the technology in use, but for the physical aspects of the data centers where applications and workloads are hosted.
The entire industry is quickly changing to keep pace. Many vendors are already rethinking past approaches and accepted norms to implement more intelligent aspects of data center design and use. Examples of these shifting considerations include factors such as density, cooling, energy management, sustainability, and even the ways data centers are operated.
Let’s take a closer look at each one of these topics and examine how they’re changing in response to new trends and shaping the evolution of data center design, now and in the future.
Increasing Density and Power Demands
When executives are asked what they want to know about their data centers, chances are good that they focus on whether applications are available, how well they perform, and what they cost. However, they often don’t ask about another important factor—density—which has a direct impact on cost, performance, and availability.
Density is the amount of power a data center can deliver within a cabinet to run the equipment inside it (servers and networking gear). The greater the density, the more compute resources are available and the faster applications and workloads can run.
The drive for better performance—combined with the falling price of high-power compute—has led many IT departments to move to denser server systems. Whether these are 1U servers, blade servers, or even hyperconverged systems, it’s now normal to see rack densities of 10kW or more, up from the traditional 3-5kW per rack.
On top of this, data volumes and the demand for Big Data analysis have grown dramatically. Just think of all the modern services performed on the vast amounts of data users currently store: contextual searches of their photos, scrubbing of real-time videos for offensive content, analysis of shopping patterns, and targeted marketing, to name a few.
These services and many others all run extremely compute-intensive artificial intelligence, which demands power-hungry hardware. This has led to specialized HPC (High Performance Computing, i.e., supercomputing) hardware that can drive power consumption at the rack level to 20kW, 50kW, or even 100kW and above.
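To put these densities in perspective, here is a quick back-of-the-envelope sketch in Python. The 2MW hall size is an illustrative assumption (it matches the figure used in the cooling discussion below); the point is simply that the same block of power supports far fewer cabinets as per-rack density climbs.

```python
# Illustrative density math (hypothetical figures): how many cabinets a
# fixed block of power supports as per-rack density climbs.
hall_power_kw = 2_000  # assume a 2 MW data hall

for rack_density_kw in (5, 10, 20, 50, 100):
    racks = hall_power_kw // rack_density_kw
    print(f"{rack_density_kw:>4} kW/rack -> {racks:>4} cabinets in a 2 MW hall")
```

At traditional 5kW densities that hall holds 400 cabinets; at 100kW HPC densities, just 20 cabinets consume the same power, which is why floor loading, cooling, and layout assumptions all have to change with it.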
Universal Data Hall Design: The Need for Flexibility
Rising power demands lead to larger, heavier equipment and cabinets, a trend that is already changing data hall construction. This is especially true with hyperscale cloud providers, although we’re starting to see enterprise-level companies move in this direction, too. In the not-so-distant past, the standard requirement was that floors had to support 2,000-pound racks. That quickly changed to 3,000 pounds, and now even 5,000 pounds, as many hyperscalers look to future-proof their hosting and compute environments. These weights would have been unthinkable just a short time ago.
All of this has a connection to flooring options in the data center. For years, the majority of data center customers wanted raised floors because they made it easy to configure and control the air flow to each cabinet. Yet, raised floors may not be able to support the increased weight, causing many companies to pivot to concrete slab flooring. One way to address this is through a concept we’re calling universal data hall design.
In a universal data hall design, the base design is a slab floor with perimeter cooling, well suited to the large, full-hall deployments typical of hyperscalers. If there is demand for even higher density, technologies like rear door heat exchangers or direct chip cooling infrastructure can be piped in with minimal effort. In the other direction, if there is still a need for more traditional enterprise deployments, the data center owner can easily add raised floor to accommodate multiple customer requests of varying cabinet sizes, layouts, and other options. This universal approach provides the flexibility needed to accommodate pioneering customers while still supporting late adopters during this transitional period.
Cooling
Packing a lot of power into less space can lead to cooling issues. Most data center providers can deliver the power needed for high density, but they still must remove the heat it generates. It doesn’t do any good to feed 2MW of power into a data hall if the supporting infrastructure can only cool 1MW. The same is true if there isn’t enough airflow to properly cool the equipment within individual racks.
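The airflow side of that equation can be sanity-checked with the standard sensible-heat rule of thumb, CFM ≈ 3.16 × watts / ΔT(°F). The rack wattages and the 20°F air temperature rise below are assumptions chosen for illustration:

```python
# Back-of-the-envelope airflow check using the sensible-heat rule of thumb
# CFM ≈ 3.16 × watts / ΔT(°F). Rack loads and the 20°F rise are assumptions.
def required_cfm(it_load_watts: float, delta_t_f: float = 20.0) -> float:
    """Airflow (cubic feet per minute) needed to carry away a given IT load."""
    return 3.16 * it_load_watts / delta_t_f

for rack_kw in (5, 10, 20):
    print(f"{rack_kw} kW rack needs ~{required_cfm(rack_kw * 1000):,.0f} CFM")
```

A 20kW rack needs roughly four times the airflow of a traditional 5kW rack, which is exactly the kind of demand perimeter air systems struggle to deliver.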
The entire industry is likely to evolve away from perimeter cooling and will adopt technologies like rear door heat exchangers, direct chip cooling, and immersion cooling as density continues to climb.
As the entire industry continues to look for more energy-efficient alternatives, it should invest in equipment that offers more competitive energy performance and proven reliability. Some manufacturers now offer advanced chiller equipment that combines improved compressor designs and variable-speed controls with free air economization to significantly reduce average annual electricity consumption. For example, the latest air-cooled chilling systems can now enhance uptime and reliability as well as energy performance, without the need for traditional water-based evaporative systems.
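A minimal sketch of the economization idea, assuming a hypothetical 50°F changeover threshold and a made-up day of hourly temperatures: every hour cold enough to cool with outside air is an hour the compressors can idle.

```python
# Sketch of how free air economization cuts compressor runtime: count the
# hours cold enough to cool without running the chiller compressors.
# The 50°F changeover threshold and sample temperatures are assumptions.
ECONOMIZER_THRESHOLD_F = 50.0

def economizer_hours(hourly_temps_f: list[float]) -> int:
    """Hours in the sample cold enough for compressor-free cooling."""
    return sum(1 for t in hourly_temps_f if t <= ECONOMIZER_THRESHOLD_F)

sample_day = [38, 36, 35, 34, 33, 35, 39, 44, 50, 55, 58, 60,
              61, 60, 57, 53, 49, 45, 43, 41, 40, 39, 38, 37]
print(f"{economizer_hours(sample_day)} of 24 hours qualify for free cooling")
```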
The industry could also benefit from new ways of thinking, especially about the expectation that data centers must be cold (usually 68-71°F). While some customers still profess a traditional preference for colder temperatures (“I want to feel cold”), modern IT equipment can easily run eight to 10 degrees hotter than it typically does today without any significant sacrifice in reliability. This alone could deliver significant savings in chiller power consumption.
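How much that buys depends on the equipment, but the scale of the opportunity can be sketched with a rule-of-thumb coefficient. The roughly 1.7% of chiller energy saved per degree Fahrenheit used below is an assumed illustrative figure, not a measured one:

```python
# Rough chiller-savings estimate for a warmer setpoint. The ~1.7% per °F
# efficiency gain is an assumed rule-of-thumb figure, not a measured value.
SAVINGS_PER_DEG_F = 0.017

def chiller_savings_pct(degrees_raised: float) -> float:
    """Estimated percent reduction in chiller energy for a setpoint raise."""
    return min(degrees_raised * SAVINGS_PER_DEG_F * 100, 100.0)

for raise_f in (4, 8, 10):
    print(f"Raise setpoint {raise_f}°F -> "
          f"~{chiller_savings_pct(raise_f):.0f}% less chiller energy")
```

Under that assumption, the eight-to-10-degree raise described above translates to roughly 14-17% less chiller energy, with no capital investment at all.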
Onsite Power Generation
Relying exclusively on utility power means a data center’s power costs will rise as overall demand rises. Being able to offset those costs at peak times with onsite power generation is a way to reduce total costs while improving overall efficiency.
At DataBank, we accomplished this in one of our Atlanta data centers. We partnered with a local utility (Georgia Power) to deploy a 1.5MW microgrid to support Georgia Tech’s High Performance Computing Center (HPCC), which is housed in the Atlanta data center. The microgrid runs in parallel to Georgia Power’s grid as an additional power source. It senses power demand and can provide power to the facility whenever needed.
The installation includes fuel cells, battery storage, diesel generators, and a natural gas generator, but it is also adaptive to new and additional distributed energy sources. It will be able to accommodate microturbines, solar panels, and electric vehicle chargers in the future.
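In spirit, the control problem looks like a priority-ordered dispatch: serve demand from the utility up to a target, then bring onsite sources online to cover the rest. The sketch below is hypothetical (the capacities, utility limit, and ordering are all assumptions, and a real microgrid controller also manages synchronization, ramp rates, and protection):

```python
# Minimal peak-shaving sketch, loosely modeled on the parallel-microgrid
# idea above. All names, capacities, and the threshold are hypothetical.
MICROGRID_KW = {"fuel_cells": 500, "battery": 400, "gas_generator": 600}
UTILITY_LIMIT_KW = 1_000  # assumed peak-demand target for utility draw

def dispatch(demand_kw: float) -> dict[str, float]:
    """Split demand: utility up to its limit, then onsite sources in order."""
    shortfall = max(demand_kw - UTILITY_LIMIT_KW, 0.0)
    plan = {"utility": min(demand_kw, UTILITY_LIMIT_KW)}
    for source, capacity in MICROGRID_KW.items():
        kw = min(shortfall, capacity)
        plan[source] = kw
        shortfall -= kw
    return plan

# e.g. 1.8 MW demand: 1 MW utility, 500 kW fuel cells, 300 kW battery
print(dispatch(1_800.0))
```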
Sustainable Energy Sources
There’s a bit of a catch-22 in the data center industry today. Customers want more power, yet they want it to come from the greenest energy sources possible. “Give me more power, but make it clean,” they seem to say. Many are even requesting detailed information about the data center’s energy consumption and overall carbon footprint.
As a result, there is a movement to transition away from power sources such as coal and natural gas toward electricity generated from renewable energy sources such as wind and solar. A growing number of data centers are already implementing various renewable energy strategies, and we believe more will continue to develop their own plans in the near future. For instance, DataBank’s parent, DigitalBridge, has directed all its portfolio companies to be carbon neutral by 2030.
In delivering on this mission, we have nine data centers that are powered by 100% renewable sources. For example, our two Indianapolis data centers moved 100% of their combined electricity usage into a voluntary program that directs Indianapolis Power and Light to purchase renewable energy from wind farms and other Midwestern facilities.
The Future of Data Centers: Design for Greater Flexibility
So many factors are changing our perceptions of data center design and use, especially when it comes to power consumption, energy sources, and customers’ ever-changing requirements. Understanding these trends and their implications can help data center operators anticipate future needs and make more informed decisions to design more flexible, and more successful, data centers today and in the future.
Eric Swartz, DataBank’s senior director of infrastructure engineering, has 19 years of experience in data center operations and infrastructure. Contact DataBank to learn more about their data center design philosophy.