Air v Liquid – Part 4 EcoSystems (From our blog 30/12/2019)

Part 4


Following on from the previous articles (and over a year late!), we're now going to look at the relative costs of providing an ecosystem for IT equipment: essentially the rationale for data centres.


I think it's important to recognise that the delivery of IT systems used to be very different from the way we do it today, and that history has a bearing on data centre ecosystem architectures.

Back in the day, businesses used a central mainframe and dumb terminals. The mainframes were heavy bits of kit (I can remember installations where floors were strengthened to take the weight), so rooms in buildings were set aside specifically for IT equipment, cooling solutions were installed and, Bob's your uncle, you had a computer room.

These rooms were normally over-provisioned to allow for expansion; I've personally been asked to build a new room that had to cover the existing kit plus 100% expansion.

Well, that's all very well, but 100% of what exactly? Floor space, power density, network capacity, cooling capacity? Normally everything was doubled up, just to cover ourselves, but it was never going to be enough. Why? Because IT was getting smaller, more equipment was needed, power densities rose and more network was needed. These rooms soon became unfit for purpose for a variety of reasons: insufficient cooling capacity, not enough power and, in some cases, not enough space.

So IT managers faced a dilemma: without visibility of future IT needs, it became impossible to provide expansion space without either spending a great deal of capital on future proofing (with the risk of getting it completely wrong) or failing to meet the business requirements.

I've seen row upon row of racks, all empty, because the business decided to use blade servers, which of course have a higher power density than standard servers. There wasn't sufficient power available, so power was taken from other racks, rendering them useless. This in turn leads to hot spots, because you're concentrating your IT (a blade chassis is about 7.5 kW) into an area that was designed for a standard 2 kW rack.
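To put rough numbers on that hot-spot effect, here is a minimal sketch of the arithmetic. The 2 kW and 7.5 kW loads come from the paragraph above; the rack footprint is an illustrative assumption, not a measured figure.

```python
# Rough hot-spot arithmetic: concentrating blade chassis into space
# designed for standard racks. The per-rack footprint is an assumed,
# illustrative value.

DESIGN_LOAD_KW = 2.0      # per-rack load the room's cooling was designed for
BLADE_CHASSIS_KW = 7.5    # approximate load of one blade chassis
RACK_FOOTPRINT_M2 = 1.2   # assumed footprint per rack, including aisle share

design_density = DESIGN_LOAD_KW / RACK_FOOTPRINT_M2
blade_density = BLADE_CHASSIS_KW / RACK_FOOTPRINT_M2

print(f"Design heat density : {design_density:.1f} kW/m2")
print(f"Blade heat density  : {blade_density:.1f} kW/m2")
print(f"Overload factor     : {blade_density / design_density:.1f}x")

# One blade chassis also consumes the power budget of several 'standard'
# racks, which is why neighbouring racks end up stripped of feeds and empty.
racks_starved = BLADE_CHASSIS_KW / DESIGN_LOAD_KW
print(f"Standard-rack power budgets consumed: {racks_starved:.1f}")
```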

Today, businesses have options other than keeping their IT on premise: they can use colocation facilities or cloud services. They will still need a room on premise to provide network access to those colocation/cloud services, and they may keep some on-site compute (those services that can't go into the cloud for reasons such as latency or data transfer rates).

All we've done, though, is transfer the problem of the ecosystem to someone else. Now it's the colocation provider that has to think about capacity in terms of space, power and cooling, and they are always behind the curve, reactive rather than proactive: they respond to customers' requirements in a building that was designed in the past, with the past's interpretation of power, space and cooling requirements. That leads to the same problems, i.e. a lack of power, problems with cooling and the risk of empty racks.

It's understandable though: if you are a colocation or hosting provider, you don't have a crystal ball to see into the future, so you either have to deal with what you know or take a gamble on what the future looks like.

The future, to them, is very much like the past, insofar as if 99% of systems are designed for air cooling then an air cooling infrastructure is what they will build.

Hence the market is dominated by air-cooled systems, and so we should build for air.

Building for air means a raised floor (perhaps), CRAC/H's, pipework, and chillers or external units, in whatever flavour you desire. You have to provide an infrastructure for what the market needs, and at the present time that is air.

But it doesn't have to be that way...

The data centre of the "future" is very much like the data centre of today, given that we are building them today (as discussed with my friend and colleague Mark Acton). However, what would the data centre of the future look like if we did adopt some of the more outlandish suggestions coming out of academia and some design consultancies, and what if we decided to adopt more liquid cooled options?

In November I attended the DCD London event, where not one but two immersed liquid cooling solutions were on show, both using the single immersion technique. This is where the server is immersed in a bath of an engineered dielectric (electrically non-conductive) liquid; the heat generated by the servers is carried by the liquid to the top of the bath and transferred via a heat exchanger to an external water circuit, which in turn is connected to an external dry cooler where the heat is vented into the atmosphere. Compared with an air cooled solution, some of the capital plant items, namely the floor (baths don't need a raised floor) and the CRAC/Hs, become moot, so the capex and opex costs will be lower.

But we can go one step further and earn revenue, potentially reducing our costs even further. How? Simple: the heat rejected by the system is warmer, and it is in a medium where it can be captured far more readily than from air, so it can be directed to provide, or offset, energy use elsewhere, such as hot water or heating locally (within the building), or passed to a low temperature district heating system for use over a wider area. There are some commercial aspects that need to be ironed out with this approach, such as contractual agreements, cost and service levels.
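As a hedged sketch of why the liquid medium matters, the sensible-heat relation Q = ṁ·c·ΔT tells you how much water flow is needed to carry a given IT load away at a useful temperature. The IT load, capture fraction and loop temperatures below are illustrative assumptions, not figures from any particular product.

```python
# Minimal sketch of waste-heat capture from an immersed system.
# All input figures are illustrative assumptions.

CP_WATER = 4.186                  # kJ/(kg*K), specific heat of water
it_load_kw = 100.0                # assumed IT load rejected into the bath
capture_fraction = 0.9            # assumed share of heat reaching the water loop
supply_c, return_c = 45.0, 55.0   # assumed water loop temperatures (deg C)

heat_to_water_kw = it_load_kw * capture_fraction
delta_t = return_c - supply_c

# Q [kW] = m_dot [kg/s] * cp [kJ/(kg*K)] * delta_T [K]
flow_kg_s = heat_to_water_kw / (CP_WATER * delta_t)

print(f"Recoverable heat : {heat_to_water_kw:.0f} kW at ~{return_c:.0f} C")
print(f"Water flow needed: {flow_kg_s:.2f} kg/s "
      f"(~{flow_kg_s * 3600 / 1000:.1f} m3/h)")
```

Water at 50 C or so is directly usable for local heating or a low temperature district heating network, whereas exhaust air at similar temperatures is far harder to collect and move.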

This approach, where waste heat is used to offset energy requirements elsewhere, is a fundamental aspect of Green Data Centres, and from our research it appears that liquid immersed systems can contribute. We're not the only ones thinking this.

The whole concept of data centres as engaged players in the energy transition towards the decarbonisation of society is within the remit of the EU-funded CATALYST project: http://project-catalyst.eu/

So, in terms of the capital and operating costs of air v liquid, where do we stand?

There are in effect three types of cooling for data centres. The first uses a chilled (or cold) water loop: this basically transfers the heat from the air cycle to liquid in the CRAC unit, which is then pumped to a chiller where the retained heat is dissipated into the atmosphere.

The second is evaporative cooling. Wikipedia provides a good description of how evaporative cooling works:

"An evaporative cooler (also swamp cooler, swamp box, desert cooler and wet air cooler) is a device that cools air through the evaporation of water. Evaporative cooling differs from typical air conditioning systems, which use vapor-compression or absorption refrigeration cycles. Evaporative cooling uses the fact that water will absorb a relatively large amount of heat in order to evaporate (that is, it has a large enthalpy of vaporization). The temperature of dry air can be dropped significantly through the phase transition of liquid water to water vapor (evaporation). This can cool air using much less energy than refrigeration. In extremely dry climates, evaporative cooling of air has the added benefit of conditioning the air with more moisture for the comfort of building occupants.

The cooling potential for evaporative cooling is dependent on the wet-bulb depression, the difference between dry-bulb temperature and wet-bulb temperature (see relative humidity). In arid climates, evaporative cooling can reduce energy consumption and total equipment for conditioning as an alternative to compressor-based cooling. In climates not considered arid, indirect evaporative cooling can still take advantage of the evaporative cooling process without increasing humidity. Passive evaporative cooling strategies can offer the same benefits of mechanical evaporative cooling systems without the complexity of equipment and ductwork."
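To make the wet-bulb depression idea concrete, a direct evaporative cooler's supply-air temperature is commonly approximated as the dry-bulb temperature minus an effectiveness factor times the wet-bulb depression. The temperatures and effectiveness in this sketch are illustrative assumptions, not figures from any particular site.

```python
# Sketch of the wet-bulb depression approximation for a direct
# evaporative cooler: T_out ~= T_db - effectiveness * (T_db - T_wb).
# Input values are illustrative assumptions.

def evaporative_outlet_c(dry_bulb_c: float, wet_bulb_c: float,
                         effectiveness: float = 0.85) -> float:
    """Approximate supply-air temperature from a direct evaporative cooler."""
    depression = dry_bulb_c - wet_bulb_c
    return dry_bulb_c - effectiveness * depression

# A warm, fairly dry day (assumed): 30 C dry bulb, 18 C wet bulb.
print(f"Dry climate : {evaporative_outlet_c(30.0, 18.0):.1f} C supply air")
# A humid day (assumed): small depression, so little cooling is available.
print(f"Humid day   : {evaporative_outlet_c(30.0, 27.0):.1f} C supply air")
```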

Some social media and search engine hyperscalers use this type of cooling technology.

The third is emerging liquid technologies, which include "liquid to chip", cold plate and immersive cooling.

Liquid to chip and cold plate in effect extend the chilled water loops into the rack and, in the case of liquid to chip, into the server itself.

Immersed technologies however are a very different kettle of fish.

This is where a server is actually immersed in a dielectric (non-conductive) fluid, either in single mode (a direct bath) or dual mode (the server is encased in a blade-type enclosure filled with the dielectric fluid and installed into a chassis with the liquid cooling loops).

The heat is transferred to the fluid and then, via a heat exchanger, to water, and from there to a dry cooler or to some other use; these are the waste heat reuse scenarios often discussed, such as heating office areas, residential heating, swimming pools and greenhouses.

An air cooled data centre needs the following:

Raised Floor (not always)

CRAC/H's

Chiller (or dry cooler, or another method of rejecting heat)

Power train (HV/LV boards, PDU's)

UPS

Batteries


In an immersed liquid data centre, you reduce or remove some of these elements, as follows (the two lists are summarised side by side in the sketch after this one):

Raised Floor (we don't need to pump air under the floor, but you might still want to run power and network cables under it, though we're seeing a lot of overhead cable routes now, so maybe not!)

CRAC/H's are not required

Chillers are not required, although if you don't have an easily available user for your waste heat you might want to include a dry cooler for summer running

Power train - most immersed units are already equipped with full 2N power and only need a standard connection.

UPS would still be required, but as you're only going to need it for power and not cooling, you can downsize it.

Batteries - again, you can reduce the number of batteries needed.
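Putting the two lists side by side, here is a compact restatement of the plant comparison above; "reduced or optional" means smaller or situational rather than removed.

```python
# Side-by-side summary of the plant lists above for air-cooled vs
# immersed-liquid builds. This simply restates the comparison in the
# article; no cost figures are implied.

PLANT = {
    # item:          (air-cooled,  immersed liquid)
    "Raised floor":  ("perhaps",   "optional (cabling only)"),
    "CRAC/H units":  ("yes",       "no"),
    "Chillers":      ("yes",       "no (dry cooler for summer if no heat user)"),
    "Power train":   ("yes",       "yes (units often ship with 2N internally)"),
    "UPS":           ("yes",       "yes, but smaller (power only, no cooling load)"),
    "Batteries":     ("yes",       "yes, but fewer"),
}

for item, (air, immersed) in PLANT.items():
    print(f"{item:<14} air: {air:<8} | immersed: {immersed}")
```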


All in all, we think that moving to a fully immersed solution could save around 50% of a standard data centre's build costs. Couple that with reduced operating costs and your data centre is already saving a lot of money; factor in the CATALYST project and you may even begin to make money from selling that waste heat and providing grid services.
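One hedged way to frame the operating-cost side is via PUE (total facility energy divided by IT energy). The PUE values, IT load and tariff in the sketch below are illustrative assumptions for the sake of the arithmetic, not measured data from any facility.

```python
# Illustrative PUE comparison between an air-cooled and an immersed build.
# PUE = total facility energy / IT energy. All inputs are assumptions.

HOURS_PER_YEAR = 8760
it_load_kw = 500.0          # assumed IT load
tariff_per_kwh = 0.12       # assumed electricity price (GBP/kWh)

pue_air = 1.6               # assumed air-cooled facility
pue_immersed = 1.1          # assumed immersed facility with dry cooler only

def annual_energy_cost(pue: float) -> float:
    """Annual electricity cost for the whole facility at a given PUE."""
    total_kw = it_load_kw * pue
    return total_kw * HOURS_PER_YEAR * tariff_per_kwh

cost_air = annual_energy_cost(pue_air)
cost_immersed = annual_energy_cost(pue_immersed)

print(f"Air-cooled : GBP {cost_air:,.0f} per year")
print(f"Immersed   : GBP {cost_immersed:,.0f} per year")
print(f"Saving     : GBP {cost_air - cost_immersed:,.0f} per year "
      f"(before any revenue from heat reuse)")
```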

We genuinely believe that in the future ALL data centres will use immersed technologies and be integrated with smart grids, and that the CATALYST project will do EXACTLY what it says on the tin!

That's the Air v Liquid skirmish put to bed. We've been a strong supporter of the technology since 2010, when we saw the first immersed demo unit from ICEOTOPE, and since then we've been following and writing about this technology in a number of articles, one of which was an update to the original article. I recall Martin from Asperitas telling me that I would need to update it sooner rather than in 2021, and I think he's right, so look out for that update to an update!!

In the next instalment we'll cover the companies that can provide immersed compute technology and places where you can visit and see the tech for yourselves (by appointment of course, and subject to Covid-19 restrictions).