As data demand grows, the traditional way to ensure data center reliability has been to plan for more electrical power than is needed, an approach called overprovisioning. But new thinking on providing power sustainably now focuses on better visibility into energy demands and on how power is distributed within a data center. It's an approach that can also give data center operators a competitive advantage.
2020 has seen many of us adapt and embrace new working practices and lifestyle changes. The shift from cash to cash-less transactions has accelerated, with financial institutions reporting a 250 percent increase in the first quarter of 2020 alone, and one credit card brand reporting a 69 percent increase in contactless payments in the United States since January 2020. A majority of small U.S.-based businesses report using more data-driven analytics this year, and the pandemic has fast-tracked digital transformation across a broad spectrum of companies.
As a result, data centers are being tasked with delivering more data and analytics, more often, with one estimate putting the increase at nearly 50 percent. The vast majority of data center operators are planning increased capacity, whether in the form of expanded capabilities at existing facilities or the addition of new ones.
For data centers to handle the bigger workloads with faster, better, more frequent insight, and more security and reliability, they need more electrical power...at least according to traditional thinking. But do they? Here are three reasons why better insight into how energy is consumed in the data center might be, well, better and enable data center operators to more effectively manage their power consumption and costs.
Overprovisioning was never the right solution
A long-held belief was that to support the design load for each system within a data center, it was necessary to duplicate the asset (generator or other system component), an approach known as "overprovisioning".
This results in equipment that is either lightly loaded and under-utilized (in the case of redundant UPS (Uninterruptible Power Supply) systems) or used so rarely that reliability becomes an issue when it is called upon (in the case of switches and generators). For example: with an 800kW load, operators may have two 1MW UPS systems feeding it, each sized to carry the full load plus a 25 percent overload margin.
The result is two UPS systems (both connected) each loaded at just 400kW, or 40 percent of rated capacity, which is inefficient from both a financial and a capacity point of view.
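The arithmetic behind that example can be sketched as follows; this is a minimal illustration of 2N-redundant utilization using the figures from the article (800kW load, two 1MW UPS modules), not a sizing tool.

```python
# Sketch: per-module utilization in a 2N (fully redundant) UPS design,
# using the article's illustrative figures: 800 kW load, two 1 MW modules.

def n_plus_n_utilization(load_kw: float, module_kw: float, modules: int = 2) -> float:
    """Return per-module utilization when the load is shared
    across all modules in a 2N configuration."""
    per_module_load = load_kw / modules   # load splits across both modules
    return per_module_load / module_kw    # fraction of rated capacity in use

util = n_plus_n_utilization(load_kw=800, module_kw=1000)
print(f"Each UPS runs at {util:.0%} of capacity")  # 40% of capacity
```

At 40 percent load, each UPS sits well below the efficiency sweet spot of most double-conversion designs, which is the inefficiency the article describes.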
“In the past, overprovisioning was a ‘necessary evil’ in order to meet rigid reliability requirements. But insights into the power usage and health of an asset can make overprovisioning a thing of the past and provide the efficiencies and sustainability today’s data centers require,” explains Brian Johnson, ABB Global Data Center Leader.
The industry has been working to understand the dynamics of provisioning and power use through a benchmark developed by The Green Grid called Power Usage Effectiveness, or “PUE”: a "miles per gallon"-style calculation that measures how efficiently power is used.
Data centers can reduce PUE by raising operating temperatures (less cooling), using more efficient servers (with 80 PLUS certified power supplies), and not leaving servers running idle. The same thinking applies to power system topology: thanks to cloud computing, designs no longer need as much on-site power redundancy.
Like a handicap in golf, a lower PUE number (which is a ratio) is better, and some data center operators have achieved a PUE of 1.2. Getting there isn’t easy, though, nor is moving the needle across that final stretch, as the devil is always in the detail of where, how, and when the power gets used. PUE is a goal wrapped in a challenge, which is why energy remains a data center’s biggest variable cost.
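The ratio itself is simple to state; a minimal sketch, with hypothetical kWh figures:

```python
# Sketch of the PUE ratio described above: total facility energy divided
# by the energy delivered to IT equipment. Figures are hypothetical.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: 1.0 is the theoretical ideal
    (every watt reaches IT equipment); lower is better."""
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,200 kWh to deliver 1,000 kWh to IT gear:
print(f"PUE = {pue(1200, 1000):.2f}")  # PUE = 1.20
```

The hard part, as the article notes, is not computing the ratio but attributing the non-IT kWh to specific systems so they can be reduced.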
Careful energy monitoring provides insight into systems that haven’t been optimized, such as cooling power usage, idle servers, inefficient fan motors, improper drive settings and overheating transformers. This insight supports better decisions about running the data center’s assets.
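In its simplest form, that kind of monitoring is threshold checking over asset readings. The sketch below uses hypothetical asset names and limits drawn from the examples above; it is not an ABB API.

```python
# Minimal sketch of threshold-based monitoring over asset readings.
# Asset names, readings and limits are hypothetical illustrations.

READINGS = {
    "transformer_A_temp_C": 95.0,       # overheating transformer
    "cooling_fan_3_power_kw": 4.2,
    "server_rack_12_cpu_util": 0.03,    # near-idle server still drawing power
}

LIMITS = {
    "transformer_A_temp_C": ("above", 90.0),
    "cooling_fan_3_power_kw": ("above", 5.0),
    "server_rack_12_cpu_util": ("below", 0.05),
}

def flag_unoptimized(readings: dict, limits: dict) -> list:
    """Return names of assets whose readings breach their configured limit."""
    flagged = []
    for name, (direction, limit) in limits.items():
        value = readings[name]
        if (direction == "above" and value > limit) or \
           (direction == "below" and value < limit):
            flagged.append(name)
    return flagged

print(flag_unoptimized(READINGS, LIMITS))
# ['transformer_A_temp_C', 'server_rack_12_cpu_util']
```

Real deployments layer trending and health analytics on top of simple limits, but the principle of surfacing the unoptimized asset is the same.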
The peril and promise of the cloud
One solution to the PUE challenge has been to shift back-up systems from hardware to software and move duplicate backup systems or components to the cloud. In a cloud environment, you can have two physical data centers, each a mirror of the other. In each data center, the electrical equipment can be “less” redundant than it would need to be if we only had one data center. This approach therefore uses less costly and less complicated equipment.
Virtual compute such as this, together with more recent innovations like composable infrastructure, provides capacity for immediate activation while requiring less redundant, less overprovisioned power schemes.
But with the ever-increasing appetite for data and compute, the coming wave of 5G, 4K delivery networks and immense information-driven tasks like controlling heating or lighting for smart cities, this won't be enough. The question continues to be not how data centers can arrange for greater power availability, but how they can gain the visibility into those power needs, and the insights, to forecast how, when, and where power will be required.
One way to do this is with smart components like ABB’s Tmax XT circuit breakers, which bring power measurement and analytics closer to the mission-critical loads.
Utility connections are one key
As data centers have grown in power draw, securing higher-voltage connections by physically locating facilities closer to power sources has emerged as a tactic; the initial reasoning was that higher-voltage connections are less prone to outages and provide power at a lower cost. Because the risks of outages can be significant (as is widely documented in the media), the ability to avoid such interruptions makes this approach both highly desirable and, as a result, competitively difficult to secure.
The scarcity of those coveted connection spots, combined with the inherent inefficiencies of overprovisioning and the risks of over-reliance on virtual twins, demands other ways for data centers to "level up" and meet their growing electrical needs.
An effective way to do this is with ABB's digital switchgear, which uses sensors instead of traditional analog devices and allows for easy customization, space savings, and greater safety. The control and protection communications enabled by the intelligent electronic devices ("IEDs") are based on IEC 61850, a non-proprietary peer-to-peer digital communications standard, making it a useful and upgradable solution.
Opportunities to push the furniture around
IT products and services are typically updated regularly, broadly on three-to-five-year upgrade cycles. Each of these milestones is a useful prompt for data center operators not just to explore novel innovations but to revisit past decisions on products, services and use-case experience.
“These events are a good opportunity to push around the power furniture, too,” says Johnson. “The key is to have the visibility into the systems so you can identify those opportunities.”
Further, delivering updated visibility and management of data center systems also brings sustainability benefits. Put simply, a better-managed system uses resources better (such as matching power supply to actual needs), improves operational uptime and power availability, and lowers overall consumption.
Ongoing management of systems enables more proactive identification of potential component failures, which can then be fixed before they cause delays or downtime for unexpected repairs.
This operational effectiveness is no small benefit to data center operators. Its guiding philosophy, sometimes called "failing small", is to achieve deep component visibility all the way down to the lowest, least complex locations in the system. Failures at this level don't necessarily risk pulling down the entire system, but they can cause frequent and/or unexpected operational degradation. Achieving this has proven challenging, however, because the smallest locations, or nodes, are the least likely to contain “smart” technology.
ABB Ability™ Energy and Asset Manager enables the collection of relevant information from ABB devices installed in low- and medium-voltage power distribution systems, and combines it with data on environmental parameters (temperature, water, gas). These devices can be connected via easy plug-and-play functionality, sharing data peer-to-peer using the latest communication protocols. This not only future-proofs the data center but also provides built-in visibility into power metrics, ensuring ease of installation and immediate functionality throughout the power distribution system.
“Operators will say 'I’d rather lose a server than a rack, a rack than a row, a row than a hall, and a hall than the data center.' We have the smart oversight tools that provide the necessary visibility across the entire data center system, from device to server, and that means operators can ensure they meet the growing needs of their users,” Johnson adds.
Data centers have grown from an ‘IT support system’ to an on-demand scalable service, a truly mission critical industry that enables economies to keep working and families to stay connected. By adopting a focused approach, and investing in the most effective technologies, data center operators can make the most of the opportunities the new ‘decade of data’ will bring and make every watt count.