Following on from the previous two articles, I'm now going to look at cooling versus heat rejection in the data centre environment.
I stated, with some authority, that "cooling" per se is a misnomer; to date I've had no comments refuting that assertion, so I'll continue.
Cooling, by the strict technical definition, is "the transfer of thermal energy via thermal radiation, heat conduction or convection".
Heat rejection is a component part of these processes.
So, when we cool something we effectively remove heat, and that's precisely what we do in a data centre.
The question, and the topic of these articles, is…
Air v Liquid
If we were to look at the data centre ecosystems on almost every continent on this planet, we would find that in 99.99% of cases the medium for heat transfer is air: good old air.
And the principal reason for this is that most computer equipment (servers, storage and networking equipment) is designed around the use of air as a cooling medium.
Let's look at air.
So, the main concept here is the "air cycle". For the purposes of this article, we're going to start the cycle at the exit of the CRAC/H unit, but we could easily start anywhere in the cycle.
Air is pushed by a fan into a floor void at positive pressure. The air escapes into the room through prudently placed perforated floor tiles (in front of the rack; please refer to the EUCOC for further guidance!), though it could easily escape through a whole host of gaps, holes and other routes, which is why best practice recommends stopping up all potential sources of leakage. From the tile, air is forced upwards and, hopefully, into the main inlet of the server. The air then passes over the heat-producing components, is exhausted through the rear of the server and moves upwards (hot air tends to rise; recall your physics lessons from school). The air rises to ceiling level and may, if coerced, find its way back to the top of the CRAC/H unit. What happens inside the CRAC/H unit is a mystery to me!
Nah, it's not, just jesting. The air is passed over a cooling coil and its heat is transferred to the coil. Inside the coil is a liquid, which is pumped to the outside unit, where the heat disappears into the ether by one of a host of different methods: dry coolers, evaporative coolers, cooling towers, or a chiller. What is key is that the liquid transfers its heat to the outside and thus becomes cooler; this liquid is then returned to the internal unit, so the air at the bottom of the coil is considerably cooler than the air at the top. Thus the heat generated by the IT equipment is removed and cooler air is supplied.
[NOTE: Some systems will differ in the approach and method of heat rejection but the principle is the same]
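The whole air cycle above boils down to a simple sensible-heat relationship: heat removed = mass flow of air × specific heat × temperature rise. As a rough, illustrative sketch (the constants and figures below are my own approximations, not from any particular facility):

```python
# Sensible heat removed by an airstream: Q = m_dot * cp * delta_t
# Approximate constants for air at roughly sea-level conditions:
CP_AIR = 1.005   # kJ/(kg*K), specific heat capacity of air
RHO_AIR = 1.2    # kg/m^3, density of air

def airflow_for_load(load_kw: float, delta_t: float) -> float:
    """Volumetric airflow (m^3/s) needed to remove load_kw of IT heat
    at a given air temperature rise (delta T) across the equipment."""
    mass_flow = load_kw / (CP_AIR * delta_t)  # kg/s of air
    return mass_flow / RHO_AIR                # convert to m^3/s

# Example: a hypothetical 10 kW rack with a 15 degC delta T
flow = airflow_for_load(10, 15)
print(f"{flow:.2f} m^3/s")  # roughly 0.55 m^3/s of air
```

The point of the sketch is the inverse relationship: halve the delta T and you need twice the airflow (and fan energy) to move the same heat, which is why the delta T discussed below matters so much.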
The air temperatures will differ depending on the desired room temperature but, if you recall, the average room temperatures reported by my students are between 18-21℃.
Let's introduce the concept of supply and return temperatures, and the control of them. Supply is the temperature of the air leaving the bottom of the CRAC/H and being pushed into the room; return is the temperature of the air as it enters the top of the CRAC/H. So when my students speak of a range of 18-21℃, this may be controlled either by a supply temperature of 18-21℃ or by a return temperature of between 30-35℃ which equates to a supply temperature of 18-21℃. The key is something called the delta T: the difference between the cold air and the hot air.
The optimum delta T is 15℃, so a 34℃ return will provide a 19℃ supply.
Understanding this key concept is important. Many facilities unfortunately have AC systems with no user controls, the control points being factory-set to a standard range. Some facilities operate on return temperatures, and sometimes these are set quite low, which in turn means the delta T forces the supply temperature lower. In some cases (anecdotally, from colleagues) supply temperatures have been found at which meat could be stored, around 5-8℃, causing a considerable amount of energy to be consumed and potentially causing problems for IT kit at the lower end of the operational range (5℃).
Got it? Good. Now to liquid…
There are four main liquid cooled solutions in use today, listed as follows:
1. Rear door cooling - the cooling loop for the CRAC/Hs is extended to the rack, where it meets heat exchangers in the door frames; the hot air from the servers is thus cooled immediately before it leaves the rack footprint. Normally the room itself is not cooled.
2. Cold plate - the heat-producing components have copper piping to remove the heat at source; this is then treated similarly to rear door cooling and the heat taken away using conventional methods. Again, the room is not cooled.
3. Immersion (1) - server motherboards are actually immersed in baths filled with a non-conductive fluid, with a heat exchanger situated near the bath to remove the heat; natural convection moves the heated liquid to the heat exchanger. Power and connectivity are provided by a common bar. In this and the following scenario, the rooms are not cooled.
4. Immersion (2) - individual motherboards are encased in cartridges which contain the non-conductive fluid; these slot into a rack with a cooling loop integrated within, and valves and other connections provide power and connectivity to the board.
The key thing here is that the liquid in 2, 3 and 4 above is hotter than the air that leaves the rear of a server, at around 50℃, and is therefore potentially more useful: it can be used for other processes, such as heating office areas, or transferred to adjacent buildings for heating swimming pools or greenhouses.
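To see why a 50℃ liquid loop is attractive for re-use, the same sensible-heat arithmetic applies, only with water instead of air. A hedged illustration (the flow rate and re-use return temperature are made-up example figures):

```python
CP_WATER = 4.186  # kJ/(kg*K), specific heat capacity of water

def recoverable_heat_kw(flow_kg_s: float,
                        loop_temp_c: float,
                        reuse_return_c: float) -> float:
    """Heat (kW) a re-use load can extract from a liquid loop that arrives
    at loop_temp_c and is sent back at reuse_return_c."""
    return flow_kg_s * CP_WATER * (loop_temp_c - reuse_return_c)

# Example: 1 kg/s (about 1 litre/s) of 50 degC loop water, returned at
# 40 degC after heating an office or a greenhouse:
print(f"{recoverable_heat_kw(1.0, 50, 40):.1f} kW")  # ~41.9 kW
```

Because water carries far more heat per kilogram than air, a modest flow at 50℃ delivers heat at a temperature directly usable for space heating, which is much harder to achieve with ~35℃ exhaust air.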
There is a current EU-funded project looking into heat re-use from data centres (both air and liquid) called Catalyst; more information is on their website https://project-catalyst.eu
In the next article, I'm going to look at the relative costs of these ecosystems: air cooled, non-immersed liquid and immersed liquid.