
At the 2025 Data Centre World event (12th & 13th March, London), speakers addressed the critical importance of cooling in data centre developments. One of their key points was that, rather than being a secondary consideration after factors such as location and power access, cooling should be included as a primary factor when planning, designing and delivering data centres.
The growth of AI data centres in the UK (see our report: The AI boom and the energy equation) is creating an urgent need to think about how these buildings can be delivered quickly and effectively. What’s also becoming clear is that, without the right type of cooling, the growth of AI data centres could be seriously impacted.
Chairing a session on AI’s impact on data centre design, Mark Acton, head of technical due diligence for the Data Centre Alliance, noted: “We’re at the thin end of a wedge. We don’t know what’s coming for the sector. It’s a revolution not just in how we use AI, but how we house it.”

The main reason for the growing impact of cooling systems in this sector is the rise of AI and its use of GPUs (graphics processing units). These small but powerful chips process data at lightning speed and, as a result, operate at high temperatures.
To make the most of their investment in this innovative technology, developers must increase rack density – the more chips, the higher the potential productivity of the data centre.
As a result, the industry is seeing new approaches to cooling, including the increasingly popular liquid cooling. There are three main categories of liquid cooling:
* Direct to chip – liquid coolant is circulated directly over heat-generating chips.
* Rear-door liquid cooling – the rear doors of the rack cabinets are replaced by a liquid heat exchanger that reduces the temperature of the air circulating close to the chips before it is reintroduced into the server room.
* Immersion cooling – a cutting-edge approach that sees servers submerged in a dielectric fluid for direct cooling.
Coolant Distribution Units (CDUs) are the heart of the liquid cooling approach, precisely managing the flow and volume of liquid coolant.
Each approach has particular benefits. For example, liquid-to-air heat exchangers capture about 70% of the heat from the chips and eject it directly into the data centre server room. Speaking at Data Centre World, Dominik Dziarczykowski, market development director for high density and liquid cooling at Vertiv, noted: “It’s a good solution for rack power density and cheaper than a full-liquid approach. However, you need CRAC units to remove heat from the server room.”
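To make that sizing implication concrete, here is a minimal sketch in Python (the 50 kW rack figure is an assumption for the example, and it assumes the heat not captured by the liquid loop is exhausted into the room air by the server fans):

```python
# Illustrative heat split for the liquid-to-air arrangement described above.
# Assumptions: a 50 kW rack, ~70% of chip heat captured by the liquid loop
# and rejected into the room via the heat exchanger, and the remainder
# exhausted directly to the room air by the server fans.

RACK_POWER_KW = 50.0            # assumed rack density for the example
LIQUID_CAPTURE_FRACTION = 0.70  # ~70% of chip heat captured by the liquid loop

heat_via_exchanger_kw = RACK_POWER_KW * LIQUID_CAPTURE_FRACTION
heat_direct_to_air_kw = RACK_POWER_KW - heat_via_exchanger_kw
room_load_for_crac_kw = heat_via_exchanger_kw + heat_direct_to_air_kw

print(f"Heat rejected to the room via the exchanger: {heat_via_exchanger_kw:.1f} kW")
print(f"Heat exhausted directly to the room air:     {heat_direct_to_air_kw:.1f} kW")
print(f"Total room load on the CRAC units:           {room_load_for_crac_kw:.1f} kW")
```

Under these assumptions, the heat exchanger eases the load at the rack, but the CRAC plant still has to remove the full heat load from the room – which is the trade-off Dziarczykowski describes.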
One of the most significant points to bear in mind is that cooling systems must be considered from an early stage in a modern data centre project. This ensures that the right technology is selected to suit the immediate and future needs of both the building and the IT equipment it houses.

As Dziarczykowski said: “There is no liquid cooling standard on the market, so you need to think about the products that you are specifying and ask what technology will be right for your needs.”
He also highlighted some of the key cooling design issues to consider, particularly in AI-focused data centres using GPUs. These not only run at high temperatures but are also prone to spikes in energy use and heat output.
Coolant distribution units must therefore be able to respond quickly. For example, Vertiv has taken the approach of having several CDUs work together as a single unit to provide a faster and more accurate response to changes in cooling demand.
“You need precise temperature control and automated adjustment to keep up with temperature spikes,” said Dziarczykowski. He added: “Redundancy in the system is key. This applies to filters, pumps, sensors and variable pipework layouts.”
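As a simplified illustration of that kind of automated adjustment (this is not Vertiv’s control logic; the setpoint, gain and speed limits are assumptions for the example), the Python sketch below shows a basic proportional controller nudging a CDU pump speed to hold the coolant supply temperature near a setpoint as the load spikes:

```python
# Minimal sketch of automated pump-speed adjustment in a CDU: a proportional
# controller raises pump speed when the coolant supply runs warm and lowers
# it when the coolant runs cool. All values are assumptions for illustration.

SETPOINT_C = 30.0                # target coolant supply temperature (assumed)
GAIN = 0.08                      # fraction of pump speed added per degree C of error
MIN_SPEED, MAX_SPEED = 0.2, 1.0  # pump speed as a fraction of maximum

def adjust_pump_speed(current_speed: float, supply_temp_c: float) -> float:
    """Nudge pump speed towards holding the supply temperature at the setpoint."""
    error = supply_temp_c - SETPOINT_C
    new_speed = current_speed + GAIN * error
    return max(MIN_SPEED, min(MAX_SPEED, new_speed))

# Example: a spike in GPU load pushes the supply temperature up and the
# controller responds by increasing coolant flow.
speed = 0.5
for temp in [30.0, 31.5, 33.0, 31.0, 30.2]:
    speed = adjust_pump_speed(speed, temp)
    print(f"supply {temp:.1f} C -> pump speed {speed:.2f}")
```

Real CDU controls are far more sophisticated than this, and the redundancy Dziarczykowski mentions means duplicated filters, pumps and sensors so that a single failure does not interrupt flow.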
Another factor that’s critical for liquid cooling is water flow. The general rule of thumb is 1.5 litres per minute of flow per kW, so the system needs powerful pumps.
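As a quick illustration of what that rule of thumb means in practice, the short Python sketch below converts a few assumed AI rack densities into the coolant flow the pumps would need to sustain (the rack figures are assumptions chosen only to show the scale):

```python
# Rule of thumb quoted above: roughly 1.5 litres per minute of coolant flow
# for every kW of heat. The rack densities are assumptions for illustration.

FLOW_L_PER_MIN_PER_KW = 1.5

def required_flow_l_per_min(heat_load_kw: float) -> float:
    """Coolant flow needed for a given heat load, using the 1.5 l/min per kW rule."""
    return heat_load_kw * FLOW_L_PER_MIN_PER_KW

for rack_kw in (50, 100, 120):   # assumed rack densities
    print(f"{rack_kw} kW rack -> ~{required_flow_l_per_min(rack_kw):.0f} l/min of coolant")
```

At these densities, even a handful of racks calls for hundreds of litres of coolant per minute, which is why the system needs powerful pumps.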
Water quality should also be considered, with a focus on flushing, regular testing, topping up and avoiding air in the system. Dziarczykowski also pointed out that piping is a critical element of liquid cooling, and the material used must be of extremely high quality, such as pharmaceutical or food grade.

This point is a useful illustration of how cooling in modern data centres is so integral to the design and operation of the building that it’s blurring the lines between FM and IT.
With water flowing around the building and into the IT server rooms, allocating clear lines of responsibility between FM teams and IT is a must. As Dziarczykowski put it: “Everyone is a plumber and fluid networks are no longer exclusive to the FM team.”
AI data centres are not the only part of the sector seeing change. As Dziarczykowski pointed out: “Rack density is growing in hyperscale and enterprise data centres.”
Developers and operators are making the most of the space available in data centres of all sizes. This trend means wider and heavier racks, along with additional power to each rack, which requires more busbars and connections. The result is more equipment generating more heat.
Although liquid cooling will dominate the GPU-dependent AI sector, air cooling will still be a necessity for many existing and future data centres. In fact, a hybrid approach is common. For example, the new Kao Data project in Harlow will use liquid cooling alongside traditional air-cooled servers to optimise the benefits of both.
The AI revolution is impacting the very buildings that house the data centres that drive it. Even experienced data centre developers and operators are now facing decisions about new cooling technologies, with little room for error. One thing is clear in this highly technical area: working with experts is going to be critical for successful project delivery.
Useful links for further insights into liquid cooling technology: