When data center technologies collide

Why you need a DCIM system with coordinated controls today, instead of waiting for a software-defined data center (SDDC)

“Software-defined” is the new buzzword in the data center industry. It started with software-defined networking, branched out to software-defined storage, merged with server virtualization, and voilà, the data center has been virtualized. People are embracing a concept that promises to streamline application deployment and to automatically provision and re-provision resources as IT load fluctuates.

The software-defined data center meets the reality-defined facility

But hold on a moment. I’ve been in a lot of data centers, and not one of them has given any hint that it might be software-defined. I’ve seen plenty of servers humming away, performing their invisible tasks, perhaps living a virtual, software-defined existence. The same goes for networks and storage: how they segment and organize packets or disk sectors to allocate resources more flexibly is largely a software problem, ripe for a software-defined solution.

But what about the physical infrastructure? For instance, the chillers and CRAHs that keep those servers cool, or the switchgear, transformers, UPSs, and PDUs that keep everything powered. I have yet to see a software-defined air economizer, and I don’t hold out much hope of ever encountering one.

The interface between the virtual, software-defined world and the real, physical, error-prone world deserves careful thought whenever you plan to deploy any level of data center management. In particular, any system that promises to streamline and automate the time-consuming, error-prone processes IT departments deal with today is a prime candidate for the law of unintended consequences.

Creating a software-defined data center that deals only with the “brains” of the data center (i.e., the IT infrastructure), without incorporating the underlying physical systems, reminds me of creating an operating system with no notion of the computer it runs on. That has been done successfully many times, but it starts by abstracting the underlying hardware into a well-defined hardware abstraction layer (HAL). The HAL provides all the mechanisms the operating system needs to run and, if properly implemented, to run extremely well.

How can DCIM help?

What software-defined data centers (SDDCs) need to be successful is a well-defined data center “facilities HAL,” or data center infrastructure abstraction (DCIA). The DCIA would offer a set of services that inform the SDDC about the status of the data center physical infrastructure, plus mechanisms to alter it. This DCIA fits squarely into the scope of a data center infrastructure management (DCIM) system, such as ABB Ability™ Data Center Automation.
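
To make the idea concrete, here is a minimal sketch in Python of what a DCIA contract might look like. The interface, method names, and data fields are illustrative assumptions for this article, not the API of any shipping DCIM product.

    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass
    class ZoneStatus:
        """Hypothetical summary of physical conditions in one data center zone."""
        zone_id: str
        inlet_temp_f: float        # current average server inlet temperature
        on_utility_power: bool     # False if running on UPS or backup power
        available_power_kw: float  # headroom on the feeding circuits

    class DataCenterInfrastructureAbstraction(ABC):
        """Illustrative DCIA: services a DCIM system could expose to an SDDC."""

        @abstractmethod
        def get_zone_status(self, zone_id: str) -> ZoneStatus:
            """Report current environmental and power conditions for a zone."""

        @abstractmethod
        def request_cooling_setpoint(self, zone_id: str, setpoint_f: float) -> bool:
            """Ask the facility to adjust cooling; True if the request is accepted."""

The second method is what makes this an abstraction rather than a mere monitoring feed: it gives the SDDC a mechanism to alter the physical infrastructure, not just observe it.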

Without a DCIM system, the SDDC is forced to make assumptions about the current condition of the underlying infrastructure: specifically, that everything is operating smoothly and that changes will not have any adverse effect. This can be loosely translated as “assume I am running in a Tier 4 data center with perfect 72°F cooling throughout and unlimited access to low-cost energy.”

The real world differs somewhat from this, at least for most of us. Temperatures are not always even or consistent. Humans sometimes make mistakes and plug things into the wrong socket. Equipment fails or requires maintenance.

Let’s look at some examples of how a DCIM system can help the SDDC make better choices about where to provision workloads in a typical data center.

• The SDDC decides to automatically provision additional compute for a web-based application whose usage is spiking. What it doesn’t know is that the servers it has chosen are in the middle of a data center hot zone, made hotter by current high outdoor temperatures. A DCIM system can provide real-time summary information about the environmental conditions in any area of the data center, and with that information the SDDC can choose different servers to provision.

• Similarly, the DCIM system can give the SDDC information about power loading on the circuits, or tell it whether the system is presently running on UPS or backup power (see the sketch after this list).
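
A provisioning routine built on the hypothetical DCIA above might apply exactly these checks before placing workload. The thresholds and field names below are assumptions carried over from the earlier sketch, not values any DCIM product prescribes.

    def pick_provisioning_zones(dcia, candidate_zones, max_inlet_temp_f=80.0):
        """Return zones safe to provision into, per the DCIM's real-time view.

        Skips zones that are running hot or currently on UPS/backup power.
        dcia is any object implementing the DCIA sketch shown earlier.
        """
        safe = []
        for zone_id in candidate_zones:
            status = dcia.get_zone_status(zone_id)
            if status.inlet_temp_f > max_inlet_temp_f:
                continue  # mid-hot-zone: adding load here would make it worse
            if not status.on_utility_power:
                continue  # riding through an outage on UPS; avoid new load
            safe.append(zone_id)
        return safe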

With a facilities abstraction layer provided by the DCIM system, the SDDC can now improve the reliability of the IT infrastructure by actively managing the IT load. Based on an overall “degree of reliability” metric calculated by the DCIM system, the SDDC can decide to automatically re-provision applications away from at-risk servers, or at-risk data centers, in the event of a major equipment failure or even an approaching weather event.
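
The decision rule can be as simple as comparing that metric against a policy threshold. In this sketch, the 0-to-1 reliability score and its source are assumptions; the point is only the shape of the logic.

    def plan_migrations(reliability_by_zone, workloads_by_zone, min_score=0.9):
        """Flag workloads to move off zones whose DCIM reliability score is low.

        reliability_by_zone: assumed 0.0-1.0 "degree of reliability" per zone,
        as a DCIM system might compute from equipment health and weather feeds.
        """
        to_migrate = []
        for zone, score in reliability_by_zone.items():
            if score < min_score:
                to_migrate.extend(workloads_by_zone.get(zone, []))
        return to_migrate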

It would even be possible to use this approach to drive cost-optimization protocols, where the DCIM system calculates a “cost of operations” metric in real time, based on the local set of servers, real-time energy pricing, and real-time cooling cost (which can vary with environmental and other physical conditions). The SDDC can then make informed decisions about when and where to allocate compute to maximize cost savings.
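
As a deliberately simple illustration of such a metric, the function below folds cooling overhead into a single PUE multiplier; a real DCIM model would be considerably richer, and every number here is a placeholder.

    def cost_of_operations_per_hour(it_load_kw, energy_price_per_kwh, pue):
        """Hourly operating cost for a set of servers (illustrative formula).

        pue: power usage effectiveness for the zone, folding cooling and other
        facility overhead into one multiplier that varies with conditions.
        """
        total_kw = it_load_kw * pue  # IT load plus cooling/facility overhead
        return total_kw * energy_price_per_kwh

    # Example: 40 kW of IT load at $0.12/kWh with a PUE of 1.5 -> $7.20/hour
    hourly_cost = cost_of_operations_per_hour(40.0, 0.12, 1.5)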

The SDDC and DCIM are two trends in computing that seem destined to intertwine. Innovative companies looking to deploy SDDC would do well to examine how their software-defined world and their real-world facilities intersect. A good marriage between the two will yield benefits beyond ease of deployment. Higher reliability and lower operating costs are also achievable.
