In 2020, the world generated a staggering 40 zettabytes (ZB) of data – the equivalent of 1.7 MB per second for every person on earth. While the spike in data generation could be attributed to the pandemic, which had most of us working and studying remotely and relying on the internet for entertainment and keeping in touch, it’s part of a wider global trend in data consumption that shows no sign of abating.
By 2025, annual data creation is expected to exceed 180 ZB, with the storage of that data growing at a compound annual growth rate (CAGR) of almost 20% over the 2020–2025 period.
To keep up with this exponential growth, data centers need to be able to expand quickly. To do so effectively, there are three key considerations that operators should be feeding into their future plans. Let’s start with sustainability.
Growing in a sustainable way
Meeting growing global demand for data while cutting carbon emissions may feel like it’s pulling data center operators in conflicting directions, but it’s a must if the data center sector is to grow in a sustainable way.
Sustainability is top of the agenda at the moment (given COP26), but it’s not something that is going away anytime soon. With societal and regulatory pressure mounting on businesses to reduce carbon emissions, data centers must look not only to meet their current local regulations but to exceed them. This is the way to future-proof facilities against new regulatory changes and tighter restrictions further down the line.
Data centers can reduce their energy consumption by focusing on highly energy-efficient solutions, limiting or reducing diesel genset usage, and monitoring and controlling energy use more effectively.
In terms of energy efficiency, there’s plenty to be done on the power front. More than 50% of the power required to run a server is used by its central processing unit (CPU). Most CPUs have power management features that optimise consumption by dynamically switching between performance states based on utilisation. By ratcheting down processor voltage and frequency outside of peak-performance tasks, the CPU minimises energy waste.
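For a concrete (if simplified) look at this in practice, the short Python sketch below reads the Linux cpufreq sysfs interface to show which scaling governor is active and how far below its maximum each core is currently clocked. It assumes a Linux host that exposes cpufreq under /sys; paths and attribute names can vary by kernel and platform.

```python
# Minimal sketch: inspect Linux cpufreq (DVFS) settings via sysfs.
# Assumes a Linux host with the cpufreq subsystem exposed under /sys.
from pathlib import Path


def read_cpufreq(cpu: int, attr: str) -> str:
    """Read a single cpufreq attribute for one logical CPU."""
    path = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/{attr}")
    return path.read_text().strip()


if __name__ == "__main__":
    for cpu in range(2):  # first two logical CPUs, as an example
        governor = read_cpufreq(cpu, "scaling_governor")     # e.g. "powersave" or "ondemand"
        cur_khz = int(read_cpufreq(cpu, "scaling_cur_freq"))  # current frequency, kHz
        max_khz = int(read_cpufreq(cpu, "cpuinfo_max_freq"))  # hardware maximum, kHz
        print(f"cpu{cpu}: governor={governor}, "
              f"{cur_khz / 1e6:.2f} GHz of {max_khz / 1e6:.2f} GHz")
```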
Power distribution should also be considered. Virtually all IT equipment is designed to work with input voltages from 100 to 240 V AC (in accordance with global standards), and as a general rule, the higher the voltage, the more efficiently the unit runs. By operating a UPS with a 240/415 V, three-phase, four-wire output, servers can be fed directly, achieving an incremental 2% reduction in facility energy use.
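As a quick illustration of why 240 V and 415 V appear together, the snippet below shows the standard three-phase relationship: line-to-line voltage is the line-to-neutral voltage multiplied by √3. It is arithmetic only and is not tied to any specific UPS model.

```python
# Quick check of the 240/415 V relationship: in a three-phase, four-wire
# system the line-to-line voltage is the line-to-neutral voltage times sqrt(3).
import math

line_to_neutral = 240.0                        # V, phase-to-neutral
line_to_line = line_to_neutral * math.sqrt(3)  # ~415.7 V, phase-to-phase

print(f"Line-to-line voltage: {line_to_line:.1f} V")

# Illustrative only: the ~2% facility saving cited above comes from removing a
# transformation stage in the distribution chain, not from this arithmetic.
```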
If the budget is available, data centers should also consider the benefits of plugging into the Smart Grid, which enables two-way energy and information flows to create an automated, distributed power delivery network. Operators could also install green power sources within their facilities in the future, such as hydrogen fuel cells, which would significantly reduce energy use and emissions.
In addition to the above, data centers could explore more efficient cooling systems to save energy, using ideas such as air segregation, non-evaporative cooling, raising the temperature in the data hall, or the installation of rear-door heat exchangers. The use of low harmonic drives also provides energy savings in cooling with minimal impact on power quality across the network.
Other ways to reduce energy use include fitting battery energy storage systems, consolidating drives and minimising idle IT equipment through distributed computing. Virtualisation programmes can also improve hardware utilisation, enabling a reduction in the number of power-consuming servers and storage devices and improving server usage by around 40%.
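As a rough illustration of the consolidation effect, the sketch below estimates the power saving from moving lightly loaded physical servers onto a smaller number of virtualisation hosts. Every figure in it (server count, consolidation ratio, power draws) is an assumption chosen for the example, not a measured benchmark or the 40% figure above.

```python
# Back-of-envelope consolidation estimate. All figures are illustrative
# assumptions: 100 lightly loaded physical servers consolidated onto
# virtualisation hosts at a 5:1 ratio.
PHYSICAL_SERVERS = 100
CONSOLIDATION_RATIO = 5      # VMs per host (assumed)
AVG_SERVER_POWER_W = 350     # assumed average draw per physical server
AVG_HOST_POWER_W = 550       # assumed draw per (busier) virtualisation host

hosts_needed = -(-PHYSICAL_SERVERS // CONSOLIDATION_RATIO)  # ceiling division
before_kw = PHYSICAL_SERVERS * AVG_SERVER_POWER_W / 1000
after_kw = hosts_needed * AVG_HOST_POWER_W / 1000

print(f"Hosts after consolidation: {hosts_needed}")
print(f"Estimated IT load: {before_kw:.1f} kW -> {after_kw:.1f} kW "
      f"({(1 - after_kw / before_kw) * 100:.0f}% lower)")
```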
It’s also worth noting that a data center’s green credentials don’t begin and end at the front door. An operator’s supply chain should also be reviewed to see whether it’s possible to specify more sustainable products and services from third parties and suppliers in the future.
Scalability: Building capacity, one step at a time
It’s fair to say that in the past, data center providers have favoured a ‘building for tomorrow’ approach, constructing vast data centers from the ground up or adding large-scale extensions to existing locations. However, this requires considerable upfront cost, and if the space built is not leased straight away, there is a delay in revenue generation as well as the running and maintenance costs of empty server rooms.
Therefore, another trend we will see a lot more of in the future is scalability: building data centers in smaller blocks and opening one while work starts on the next. This approach reduces upfront investment and minimises the delay in generating revenue, while allowing providers to secure tenants earlier – important in such a competitive, fast-paced environment.
Scalability allows data centers to grow sustainably with future demand, and it can also simplify the specification process. For example, some scalable designs use modular, prefabricated solutions built offsite as an eHouse or on a skid. These are also pretested at the factory to save site work and commissioning time.
Modular builds incorporate standard blocks of power, repeated throughout, to allow for easy future expansion. The standardisation of design improves operational reliability but it’s important to note that designs must still be flexible to adapt to different site requirements. Switchgear, uninterruptible power supplies (UPS), power distribution units (PDU) and remote power panels (RPP) are all examples of scalable equipment.
Get scalability right and future expansions will be time- and cost-efficient. In fact, our research suggests that, compared to traditional stick-built data center construction projects, using prefabricated solutions can generate a 30% improvement in speed to deployment, and the use of pre-designed solutions can improve deployment by 20%. Use prefabricated and pre-designed solutions together and operators could see a 50% total improvement against traditional data center builds.
It’s worth noting that the recommendations above are for the current data center landscape. In the future, we will see more capacity required for machine learning (ML), artificial intelligence (AI) and high-performance computing (HPC), which require higher-density nodes. In this case, denser power zones will be needed, and perhaps even provision for liquid or alternative cooling equipment to be fitted to reduce energy use.
Digitalisation: A brighter future
Digitalisation is arguably the biggest of all the future data center trends. As an overarching solution, it can contribute positively to both sustainability and scalability as well as the effective and efficient running of tomorrow’s data centers.
As a minimum requirement, data center operators should be installing digitally ready or digitally enabled equipment in the here and now, as this will give them the foundations for their future digitalisation journey – even if they are not ready to embark on it in the short term.
One of the key aspects of digitalisation is ensuring systems are interoperable so that they share a common language. Open control protocols such as BACnet and IEC 61850 allow equipment from different vendors to communicate easily and work together to improve a data center’s overall performance.
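The conceptual sketch below shows one way this interoperability is often structured in software: a common data model with a thin adapter per protocol. The adapter classes here are hypothetical placeholders – a real system would put an actual BACnet or IEC 61850 client library behind the same interface.

```python
# Conceptual sketch of protocol interoperability behind a common data model.
# The adapter classes and their read() methods are hypothetical placeholders.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Point:
    name: str
    value: float
    unit: str


class ProtocolAdapter(Protocol):
    def read(self, point_id: str) -> Point: ...


class BacnetAdapter:
    """Stand-in for a BACnet client (e.g. reading an object's presentValue)."""
    def read(self, point_id: str) -> Point:
        return Point(point_id, 22.5, "degC")   # dummy value for illustration


class Iec61850Adapter:
    """Stand-in for an IEC 61850 client reading a logical-node attribute."""
    def read(self, point_id: str) -> Point:
        return Point(point_id, 398.7, "kW")    # dummy value for illustration


def poll(adapters: dict[str, ProtocolAdapter], points: dict[str, str]) -> list[Point]:
    """Read every configured point through its protocol adapter."""
    return [adapters[proto].read(pid) for proto, pid in points.items()]


if __name__ == "__main__":
    readings = poll(
        {"bacnet": BacnetAdapter(), "iec61850": Iec61850Adapter()},
        {"bacnet": "AHU-1/supply-temp", "iec61850": "Feeder-2/TotW"},
    )
    for p in readings:
        print(f"{p.name}: {p.value} {p.unit}")
```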
Getting digitalisation-ready also enables providers to take advantage of future advances in technology, such as remote services like augmented reality and predictive maintenance features that allow issues to be identified (and resolved) more quickly. This in turn saves money compared with “break-fix” or calendar-based maintenance. By putting the emphasis on preventative maintenance, digitalised data centers can focus their technicians’ time on maintaining critical equipment.
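As a simplified illustration of the kind of rule a predictive, condition-based maintenance system might apply, the sketch below flags a UPS battery string whose internal resistance is trending above its baseline. The threshold and sample readings are assumptions chosen for the example; real systems use richer models and vendor-specific telemetry.

```python
# Minimal sketch of a condition-based maintenance check: flag a UPS battery
# string whose internal resistance is drifting upwards. Threshold and data
# are assumptions for illustration only.
from statistics import mean


def needs_inspection(readings_mohm: list[float], baseline_mohm: float,
                     limit: float = 1.25) -> bool:
    """Flag the asset when recent resistance exceeds 125% of its baseline."""
    recent = mean(readings_mohm[-3:])          # average of the last three samples
    return recent > baseline_mohm * limit


history = [4.1, 4.2, 4.3, 4.8, 5.3, 5.6]       # monthly readings, milliohms
print(needs_inspection(history, baseline_mohm=4.0))   # True -> schedule a visit
```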
Digitalisation is also the basis of the energy monitoring and controls needed to improve energy efficiency. After all, you can’t manage what you can’t measure. Digitalisation provides insight into where energy is being used, allowing data center operators to optimise usage, avoid waste and establish a more sustainable operation.
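A basic example of such a measurement is Power Usage Effectiveness (PUE): total facility energy divided by IT equipment energy. The snippet below computes it from two placeholder metered readings.

```python
# Simple PUE calculation from metered energy readings. The numbers are
# placeholders; PUE = total facility energy / IT equipment energy.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh


monthly_total_kwh = 1_450_000   # assumed utility meter reading
monthly_it_kwh = 1_000_000      # assumed sum of PDU/RPP branch meters

print(f"PUE: {pue(monthly_total_kwh, monthly_it_kwh):.2f}")   # -> PUE: 1.45
```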
Digitalised equipment also simplifies scalable data center designs, as it reduces the number of connections and wiring required for an installation by 90%, making it easier to expand and scale up switchgear.
The world needs big solutions to meet the global demand for data. If data center providers are to keep up, they need to adapt and expand quickly, using the future trends outlined above to make their offering agile, efficient and ready for what’s coming next.