What’s new in the EUCOC 2023 14th Edition?

What’s new in the EU Code of Conduct for Data Centres (Energy Efficiency) 14th (2023) Edition

The 14th Edition of the EU Code of Conduct for Data Centres (Energy Efficiency) has now been published.

As many data centre sector observers and operators will know, the EU Taxonomy regulations cite the EU Code of Conduct for Data Centres (Energy Efficiency) (EUCOC) as a series of best practices that data centres should follow; alternative methods include CLC/TR EN50600-99-1.

In late 2022, the TIC Council was tasked with redrafting the EUCOC to provide definitive guidance on its implementation, in a format consistent with the terminology used for certification against an ISO standard. Carbon3IT Ltd MD John Booth assisted with this task, compiling revisions to four sections of the EUCOC. The entire redraft was completed in December 2022 but has yet to be published. However, we understand that the redraft will be published as an Appendix to the EUCOC Best Practice Guidelines, 14th Edition (2023).

In the meantime, there have been some changes to the EUCOC itself as part of the annual update and revision cycle, and these are listed below.

Note: Carbon3IT Ltd will provide further updates and links to the EU Taxonomy EUCOC redraft documents as they are published.

Note: Any editorial changes will be listed with the original in normal typeface and the amendment in italics.

3.2.3 Service Charging Models

Original Text - Co-location and Managed Service providers should employ charging models and tariffs that encourage the use of best practice and improve energy efficiency. Enterprise operators should ensure that the true cost of data centre services are understood and reported

2023 Edit - Co-location and Managed Service providers should employ charging models and tariffs that encourage the use of best practice and improve energy efficiency. Enterprise operators should ensure that the true cost of data centre services are fully understood and properly reported

3.2.8 Sustainable energy usage – Value changed from 1 to 3.

4.1.2 New IT hardware – Restricted (legacy) operating temperature and humidity range

Original Text - If no equipment can be procured which meets the operating temperature and humidity range of Practice 4.1.3 (ASHRAE Class A2), then equipment supporting (at a minimum), the restricted (legacy) range of 15°C to 32°C inlet temperature and humidity from –12°C DP and 8% rh to 27°C DP and 80% rh may be procured. This range is defined as the ASHRAE Allowable range for Class A1 class equipment. Class A1 equipment is typically defined as Enterprise class servers (including mainframes) and storage products such as tape devices and libraries. To support the restrictive range of operation equipment should be installed in a separate area of the data floor in order to facilitate the segregation of equipment requiring tighter environmental controls as described in Practices 5.1.11, 5.1.4 and 5.1.5. In unusual cases where older technology equipment must be procured due to compatibility and application validation requirements (an example being air traffic control systems), these systems should be considered as subset of this Practice and installed so as not to restrict the operation of other equipment described above. A summary of ASHRAE environmental guidelines can be found at: https://www.ashrae.org/File%20Library/Technical%20Resources/Publication%20Errata%20and%20Updates/90577_errata.pdf

2023 Edit - Equipment should be purchased that allows for operation within ASHRAE Class A2. If no equipment can be procured which meets the operating temperature and humidity range of Practice 4.1.3 (ASHRAE Class A2), then equipment supporting, at a minimum, ASHRAE Class A1 may be procured.

To support the restrictive range of operation equipment should be installed in a separate area of the data floor in order to facilitate the segregation of equipment requiring tighter environmental controls as described in Practices 5.1.11, 5.3.4 and 5.3.5.

In unusual cases where older technology equipment must be procured due to compatibility and application validation requirements (an example being air traffic control systems), these systems should be considered as subset of this Practice and installed so as not to restrict the operation of other equipment described above.

A summary of ASHRAE environmental guidelines can be found at: https://www.ashrae.org/file%20library/technical%20resources/bookstore/supplemental%20files/referencecard_2021thermalguidelines.pdf
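
For readers mapping these ranges onto their monitoring, a minimal sketch of the kind of envelope check this Practice implies is shown below. It encodes only the dry-bulb and relative humidity limits quoted above for Class A1 (15°C to 32°C) and Class A2 (10°C to 35°C); the dew point limits are deliberately omitted for brevity, and the dictionary and function names are ours, not part of the EUCOC.

```python
# Illustrative only: dry-bulb and relative humidity limits as quoted in
# Practices 4.1.2 (Class A1, restricted/legacy) and 4.1.3 (Class A2).
# The dew point limits (–12°C DP to 27°C DP) are ignored here for brevity.
ALLOWABLE = {
    "A1": {"temp_c": (15.0, 32.0), "rh_pct": (8.0, 80.0)},
    "A2": {"temp_c": (10.0, 35.0), "rh_pct": (8.0, 80.0)},
}

def within_class(inlet_c: float, rh_pct: float, ashrae_class: str) -> bool:
    """True if a measured inlet condition sits inside the named allowable range."""
    limits = ALLOWABLE[ashrae_class]
    t_lo, t_hi = limits["temp_c"]
    rh_lo, rh_hi = limits["rh_pct"]
    return t_lo <= inlet_c <= t_hi and rh_lo <= rh_pct <= rh_hi

# A 33°C inlet is acceptable for Class A2 hardware but outside the legacy A1
# envelope, which is why 4.1.2 asks for legacy kit to be segregated
# (Practices 5.1.11, 5.3.4 and 5.3.5).
print(within_class(33.0, 45.0, "A2"), within_class(33.0, 45.0, "A1"))  # True False
```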

4.1.3 New IT Hardware – Expected operating temperature and humidity range

Original Text – Include the operating temperature and humidity ranges at the air intake of new equipment as high priority decision factors in the tender process. Equipment should be able to withstand and be within warranty for the full range of 10°C to 35°C inlet temperature and humidity –12°C DP and 8% rh to 27°C DP and 80% rh. This is defined by the ASHRAE Class A2 allowable temperature and humidity range. Vendors are required to publish (not make available on request) any restriction to the operating hours within this range for any model or range which restricts warranty to less than continuous operation within the allowable range. To address equipment types which cannot be procured to meet this specification exclusions and mitigation measures are provided in Practices 4.1.2 for new IT equipment, 5.1.11 for existing data centres and 5.1.4 for new build data centres. Directly liquid cooled IT devices are addressed in Practice 4.1.14. A summary of ASHRAE environmental guidelines can be found at: https://www.ashrae.org/File%20Library/Technical%20Resources/Publication%20Errata%20and%20Updates/90577_errata.pdf

2023 Edit - Include the operating temperature and humidity ranges at the air intake of new equipment as high priority decision factors in the tender process.

Equipment should be able to operate and be within warranty for the full ASHRAE Class A2 allowable temperature and humidity range.

Vendors are required to publish (not simply make available on request) any restriction to the operating hours within this range for any model or range which restricts warranty to less than continuous operation within the allowable range.

To address equipment types which cannot be procured to meet this specification exclusions and mitigation measures are provided in Practices 4.1.2 for new IT equipment, 5.1.11 for existing data centres and 5.3.4 for new build data centres. Directly liquid cooled IT devices are addressed in Practice 4.1.14.

A summary of ASHRAE environmental guidelines can be found at: https://www.ashrae.org/file%20library/technical%20resources/bookstore/supplemental%20files/referencecard_2021thermalguidelines.pdf
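
In a tender, the requirement above reduces to a simple screening rule: the declared warranty range must cover the full Class A2 range, and the vendor must not restrict continuous operation within it. The sketch below illustrates that rule; the candidate records and field names are entirely hypothetical.

```python
# Hypothetical tender-screening sketch for Practice 4.1.3. The candidate
# records and field names are illustrative, not taken from any real datasheet.
A2_TEMP_C = (10.0, 35.0)  # full ASHRAE Class A2 allowable dry-bulb range

def meets_4_1_3(candidate: dict) -> bool:
    """Pass if the warranted range covers all of Class A2 and the vendor
    declares no restriction on continuous operation within that range."""
    lo, hi = candidate["warranty_temp_c"]
    covers_a2 = lo <= A2_TEMP_C[0] and hi >= A2_TEMP_C[1]
    return covers_a2 and not candidate["operating_hours_restricted"]

candidates = [
    {"model": "Server X", "warranty_temp_c": (5.0, 40.0), "operating_hours_restricted": False},
    {"model": "Server Y", "warranty_temp_c": (10.0, 30.0), "operating_hours_restricted": False},
    {"model": "Server Z", "warranty_temp_c": (10.0, 35.0), "operating_hours_restricted": True},
]
for c in candidates:
    print(c["model"], "passes 4.1.3" if meets_4_1_3(c) else "fails 4.1.3")
```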

4.1.4 New IT Hardware – Extended operating temperature and humidity range

Original Text – Include the operating temperature and humidity ranges at the air intake of new equipment as high priority decision factors in the tender process.

Consider equipment which operates under a wider range of intake temperature and humidity such as that defined in ASHRAE Class A4 (broadly equivalent to ETSI EN 300 019–1-3 Class 3.1).

This extended range allows operators to eliminate the capital cost of providing mechanical cooling capability in some hotter climate regions.

Note: Many vendors provide equipment whose intake temperature and humidity ranges exceed the minimum sets represented by the described classes in one or more parameters. Operators should request the actual supported range from their vendor(s) and determine whether this presents an opportunity for additional energy or cost savings through extending the operating temperature or humidity range in all or part of their data centre.

A summary of ASHRAE environmental guidelines can be found at: https://www.ashrae.org/File%20Library/Technical%20Resources/Publication%20Errata%20and%20Updates/90577_errata.pdf

2023 Edit - Include the operating temperature and humidity ranges at the air intake of new equipment as high priority decision factors in the tender process.

Consider equipment which operates under a wider range of intake temperature and humidity such as that defined in ASHRAE Class A4 (broadly equivalent to ETSI EN 300 019–1-3 Class 3.1).

This extended range allows operators to eliminate the capital cost of providing mechanical cooling capability in some hotter climate regions.

Note: Many vendors provide equipment whose intake temperature and humidity ranges exceed the minimum sets represented by the described classes in one or more parameters. Operators should request the actual supported range from their vendor(s) and determine whether this presents an opportunity for additional energy or cost savings through extending the operating temperature or humidity range in all or part of their data centre.

A summary of ASHRAE environmental guidelines can be found at: https://www.ashrae.org/file%20library/technical%20resources/bookstore/supplemental%20files/referencecard_2021thermalguidelines.pdf
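
The capital-cost argument in this Practice can be tested with a rough feasibility check: compare the vendor's actual supported inlet limit against the site's design extreme ambient temperature plus an assumed temperature rise across the air path. The sketch below is illustrative only; the 3°C approach figure and the example temperatures are our assumptions, and a real decision needs a full climate and psychrometric analysis.

```python
# Rough feasibility sketch for Practice 4.1.4. All numbers are hypothetical;
# a real decision needs a full climate and psychrometric analysis.
def can_drop_mechanical_cooling(vendor_max_inlet_c: float,
                                site_design_extreme_c: float,
                                approach_delta_c: float = 3.0) -> bool:
    """True if the worst-case supply air (design extreme ambient plus the
    temperature rise across the air path) stays below the vendor-supported
    inlet limit, suggesting year-round economisation may be viable."""
    worst_case_supply_c = site_design_extreme_c + approach_delta_c
    return worst_case_supply_c <= vendor_max_inlet_c

# Example: a vendor-declared 45°C inlet limit against a 38°C site design extreme.
print(can_drop_mechanical_cooling(vendor_max_inlet_c=45.0, site_design_extreme_c=38.0))  # True
```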

4.1.6 IT equipment power usage against inlet temperature

Original Text - When selecting new IT equipment require the vendor to supply at minimum the total system power for a range of temperatures covering the full allowable inlet temperature range for the equipment at 100% load on a specified recognised benchmark such as Linpack, SERT (http://www.spec.org/sert/) or SPECPower http://www.spec.org/power_ssj2008/). Data should be provided for 5°C or smaller steps of inlet temperature. As a minimum comply with the EU Eco Design Directive and Lot 9 amendments to EU Commission regulation for servers and online storage systems. Considered optional but recommended; Total system power covering the full allowable inlet temperature range under 0% and 50% load on the selected benchmark. These sets of data shown easily in a single table and single chart will allow a data centre operator to select equipment to meet their chosen operating temperature range without significant increase in power consumption. This Practice is intended to improve the thermal performance of IT equipment by allowing operators to avoid devices with compromised cooling designs and creating a market pressure toward devices which operate equally well at increased intake temperature. Consider referencing and using the current U.S. EPA ENERGY STAR specifications for Servers. Consider referencing and using the current U.S. EPA ENERGY STAR specifications for Data Center Storage.

2023 Edit - When selecting new IT equipment require the vendor to supply at minimum the total system power for a range of temperatures covering the full allowable inlet temperature range for the equipment at 100% load on a specified recognised benchmark such as Linpack, SERT (http://www.spec.org/sert/) or SPECPower http://www.spec.org/power_ssj2008/).

Data should be provided for 5°C or smaller steps of inlet temperature.

As a minimum comply with the EU Eco Design Directive and Lot 9 amendments to EU Commission regulation for servers and online storage systems.

It is also recommended that vendors supply:

  • Total system power covering the full allowable inlet temperature range under 0% and 50% load on the selected benchmark.

These sets of data, shown easily in a single table and single chart, will allow a data centre operator to select equipment to meet their chosen operating temperature range without significant increase in power consumption.

This Practice is intended to improve the thermal performance of IT equipment by allowing operators to avoid devices with compromised cooling designs and creating a market pressure toward devices which operate equally well at increased intake temperature.

Consider referencing and using the current U.S. EPA ENERGY STAR specifications for Servers.

Consider referencing and using the current U.S. EPA ENERGY STAR specifications for Data Center Storage.
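
To show what the "single table" the Practice asks for might look like, here is a small sketch using entirely hypothetical vendor figures for total system power in 5°C steps at 100%, 50% and 0% load.

```python
# Hypothetical vendor data for Practice 4.1.6: total system power (W) on the
# chosen benchmark, in 5°C inlet temperature steps, at 100% / 50% / 0% load.
vendor_power_w = {
    # inlet °C: (100% load, 50% load, 0% load)
    15: (520, 310, 140),
    20: (525, 312, 141),
    25: (540, 318, 143),
    30: (575, 335, 150),
    35: (640, 370, 165),
}

print(f"{'Inlet °C':>9} {'100%':>6} {'50%':>6} {'0%':>6}")
for inlet_c, (full, half, idle) in sorted(vendor_power_w.items()):
    print(f"{inlet_c:>9} {full:>6} {half:>6} {idle:>6}")

# The figure an operator cares about: how much extra power the device draws at
# the intended upper set point compared with the coolest point in the range.
baseline = vendor_power_w[15][0]
rise_pct = 100 * (vendor_power_w[35][0] - baseline) / baseline
print(f"100% load power rise across 15-35°C: {rise_pct:.1f}%")
```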

4.1.11 Energy & temperature reporting hardware

Original Text - Select equipment with power and inlet temperature reporting capabilities, preferably reporting energy used as a counter in addition to power as a gauge. Where applicable, industry standard reporting approaches should be used such as IPMI, DMTF Redfish and SMASH. To assist in the implementation of temperature and energy monitoring across a broad range of data centres all devices with an IP interface should support one of;

SNMP polling of inlet temperature and power draw. Note that event-based SNMP traps and SNMP configuration are not required

IPMI polling of inlet temperature and power draw (subject to inlet temperature being included as per IPMI or Redfish)

An interface protocol which the operators’ existing monitoring platform is able to retrieve inlet temperature and power draw data from without the purchase of additional licenses from the equipment vendor

The intent of this Practice is to provide energy and environmental monitoring of the data centre through normal equipment churn.

2023 Edit - Select equipment with power and inlet temperature reporting capabilities, preferably reporting energy used as a counter in addition to power as a gauge. Where applicable, industry standard reporting approaches should be used such as IPMI, DMTF Redfish and SMASH.

To assist in the implementation of temperature and energy monitoring across a broad range of data centres all devices with an IP interface should support one of the following:

  • SNMP polling of inlet temperature and power draw. Note that event-based SNMP traps and SNMP configuration are not required
  • IPMI polling of inlet temperature and power draw (subject to inlet temperature being included as per IPMI or Redfish)
  • An interface protocol which the operators’ existing monitoring platform is able to retrieve inlet temperature and power draw data from without the purchase of additional licenses from the equipment vendor

The intent of this Practice is to provide energy and environmental monitoring of the data centre through normal equipment churn.
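
As an illustration of the polling options listed above, the sketch below reads inlet temperature and power draw from the standard DMTF Redfish Chassis Thermal and Power resources. The BMC address, credentials, chassis path and sensor naming are placeholders; resource layouts vary by vendor, so treat this as a starting point rather than a drop-in monitor.

```python
# Minimal Redfish polling sketch for Practice 4.1.11. The BMC address,
# credentials, chassis path and sensor naming are placeholders; resource
# layouts vary by vendor, so inspect /redfish/v1/Chassis on your own hardware.
import requests

BMC = "https://10.0.0.42"               # hypothetical BMC address
AUTH = ("monitor", "example-password")  # hypothetical read-only account

def poll_inlet_and_power(chassis_id: str = "1"):
    """Return (inlet temperature in °C, power draw in W) from the standard
    DMTF Redfish Thermal and Power resources of one chassis."""
    # verify=False only because many BMCs ship self-signed certificates;
    # use proper certificate validation in production.
    thermal = requests.get(f"{BMC}/redfish/v1/Chassis/{chassis_id}/Thermal",
                           auth=AUTH, verify=False, timeout=10).json()
    power = requests.get(f"{BMC}/redfish/v1/Chassis/{chassis_id}/Power",
                         auth=AUTH, verify=False, timeout=10).json()

    inlet_c = next((t.get("ReadingCelsius") for t in thermal.get("Temperatures", [])
                    if "inlet" in t.get("Name", "").lower()), None)
    watts = next((p.get("PowerConsumedWatts") for p in power.get("PowerControl", [])), None)
    return inlet_c, watts

if __name__ == "__main__":
    print(poll_inlet_and_power())
```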

4.1.13 When forced to use free standing equipment, select equipment suitable for the data centre airflow direction.

Original Text – If no alternative is available select equipment which is free standing or supplied in custom cabinets so that the air flow direction of the enclosures match the airflow design in that area of the data centre. This is commonly front to rear or front to top.

Specifically the equipment should match the hot / cold aisle layout or containment scheme implemented in the facility. Equipment with non-standard air flow can compromise the air flow management of the data centre and restrict the ability to raise temperature set points. It is possible to mitigate this compromise by segregating such equipment according to Practices 5.1.11, 5.1.4 and 5.1.5 Note: Try to avoid free standing equipment as it usually does not allow a well organised airflow through the data centre especially if the major part of the room is equipped with well organised IT equipment mounted in cabinets.

2023 Edit - If no alternative is available then select equipment which is free standing or supplied in custom cabinets so that the air flow direction of the enclosures match the airflow design in that area of the data centre. This is commonly front to rear or front to top.

Specifically, the equipment should match the hot / cold aisle layout or containment scheme implemented in the facility.

Equipment with non-standard air flow can compromise the air flow management of the data centre and restrict the ability to raise temperature set points. It is possible to mitigate this compromise by segregating such equipment according to Practices 5.1.11, 5.3.4 and 5.3.5

Note: Try to avoid free standing equipment as it usually does not allow a well organised airflow through the data centre especially if the major part of the room is equipped with well organised IT equipment mounted in cabinets.

*4.2.9 Network Energy Use - New Best Practice*

When purchasing new cloud services or assessing a cloud strategy, assess the impact on network equipment usage and the potential increase or decrease in energy consumption, with the aim being to inform purchasing decisions.

The minimum scope should include elements inside the data centre only.

The ambition is to include overall energy consumption and energy efficiency including that related to multiple site operation and the network energy use between those sites.

Moved from Section 10 (Practices to become minimum expected).
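
As a rough illustration of the minimum (inside-the-data-centre) scope, the sketch below estimates the change in annual network equipment energy before and after a hypothetical cloud migration. Every device count and power figure is invented; a real assessment should use the operator's own inventory and measured draws.

```python
# Back-of-envelope sketch for Practice 4.2.9, scoped to network equipment
# inside the data centre only. Every figure is hypothetical; use your own
# device inventory and measured (not nameplate) power draws.
HOURS_PER_YEAR = 8760

def annual_kwh(devices: dict) -> float:
    """devices maps device type -> (count, average power draw in watts)."""
    return sum(count * watts for count, watts in devices.values()) * HOURS_PER_YEAR / 1000

before = {"ToR switches": (40, 150.0), "spine switches": (4, 400.0), "WAN routers": (2, 600.0)}
# Assumed state after a cloud migration: fewer ToR switches, a heavier WAN edge.
after = {"ToR switches": (30, 150.0), "spine switches": (4, 400.0), "WAN routers": (2, 900.0)}

delta = annual_kwh(after) - annual_kwh(before)
print(f"Estimated change in in-DC network energy: {delta:+.0f} kWh/year")
```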

4.3.7 Control of system energy use

Original Text - Consider resource management systems capable of analysing and optimising where, when and how IT workloads are executed and their consequent energy use. This may include technologies that allow remote deployment or delayed execution of jobs or the movement of jobs within the infrastructure to enable shutdown of components, entire systems or sub-systems. The desired outcome is to provide the ability to limit localised heat output or constrain system power draw to a fixed limit, at a data centre, row, cabinet or sub-DC level.

2023 Edit - Consider resource management systems capable of analysing and optimising where, when and how IT workloads are executed and their consequent energy use. This may include technologies that allow remote deployment or delayed execution of jobs or the movement of jobs within the infrastructure to enable shutdown of components, entire systems or sub-systems. The desired outcome is to provide the ability to limit localised heat output or constrain system power draw to a fixed limit, at a data centre, row or cabinet level.
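
A toy example of the kind of constraint the Practice describes, a fixed cabinet-level power cap that defers jobs rather than exceeding it, is sketched below. The cap, job names and power estimates are hypothetical; real implementations would sit inside a workload manager and use metered data.

```python
# Toy sketch of the constraint Practice 4.3.7 describes: defer jobs rather than
# exceed a fixed cabinet-level power cap. The cap, job names and per-job power
# estimates are hypothetical; a production system would sit inside a workload
# manager and use metered rather than estimated figures.
CABINET_POWER_CAP_W = 6000

def schedule(jobs, current_draw_w):
    """Run jobs whose estimated draw fits under the cap; defer the rest."""
    deferred = []
    for name, estimated_w in jobs:
        if current_draw_w + estimated_w <= CABINET_POWER_CAP_W:
            current_draw_w += estimated_w
            print(f"run   {name} (cabinet now {current_draw_w:.0f} W)")
        else:
            deferred.append((name, estimated_w))
            print(f"defer {name} (would exceed the {CABINET_POWER_CAP_W} W cap)")
    return deferred  # retried later, or moved to a cabinet/site with headroom

schedule([("batch-render", 800.0), ("ml-training", 1500.0), ("nightly-backup", 400.0)],
         current_draw_w=4800.0)
```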

5.1.6 Provide adequate free area on cabinet doors

Value changed from 3 to 4

5.2.7 Effective regular maintenance of cooling plant

Original Text - Implement effective regular maintenance of the cooling system in order to conserve or achieve a “like new condition” is essential to maintain the designed cooling efficiency of the data centre. Examples include: belt tension, condenser coil fouling (water or air side), evaporator fouling etc. This includes regular filter changes to maintain air quality and reduce friction losses along with the routine monitoring of air quality and a regular technical cleaning regime (including under-floor areas if applicable).

2023 Edit - Implement effective regular maintenance of the cooling system in order to conserve or achieve a “like new condition” is essential to maintain the designed cooling efficiency of the data centre.

Examples include the following: belt tension, condenser coil fouling (water or air side), evaporator fouling etc.

This includes regular filter changes to maintain air quality and reduce friction losses along with the routine monitoring of air quality and a regular technical cleaning regime (including under-floor areas if applicable).

5.3.1 Review, and if possible, raise target IT equipment intake air temperature.

Original Text - Data Centres should be designed and operated at their highest efficiency to deliver intake air to the IT equipment within the temperature range of 10°C to 35°C (50°F to 95°F). The current, relevant standard is the ASHRAE Class A2 allowable range for Data Centres. Operations in this range enable energy savings by reducing or eliminating overcooling. Note: Some data centres may contain equipment with legacy environmental ranges as defined in 4.1.2, the maximum temperature for these facilities will be restricted by this equipment until segregation can be achieved as described in Practices 5.1.11, 5.1.4 and 5.1.5. Note: Additional Best Practices for airflow management as defined in section 5.1 may need to be implemented at the same time to ensure successful operations. Note: Some, particularly older, IT equipment may exhibit significant increases in fan power consumption as intake temperature is increased. Validate that your IT equipment will not consume more energy than is saved in the cooling system. A summary of ASHRAE environmental guidelines can be found at: https://www.ashrae.org/File%20Library/Technical%20Resources/Publication%20Errata%20and%20Updates/90577_errata.pdf

2023 Edit - Data Centres should be designed and operated at their highest efficiency to deliver intake air to the IT equipment within the ASHRAE Class A2 allowable range for Data Centres. Operations in this range enable energy savings by reducing or eliminating overcooling.

Note: Some data centres may contain equipment with legacy environmental ranges as defined in 4.1.2, the maximum temperature for these facilities will be restricted by this equipment until segregation can be achieved as described in Practices 5.1.11, 5.3.4 and 5.3.5.

Note: Additional Best Practices for airflow management as defined in section 5.1 may need to be implemented at the same time to ensure successful operations.

Note: Some, particularly older, IT equipment may exhibit significant increases in fan power consumption as intake temperature is increased. Validate that your IT equipment will not consume more energy than is saved in the cooling system.

A summary of ASHRAE environmental guidelines can be found at: https://www.ashrae.org/file%20library/technical%20resources/bookstore/supplemental%20files/referencecard_2021thermalguidelines.pdf
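
The final note above is worth turning into arithmetic before any set point change: the cooling energy saved must exceed the extra IT fan energy. A minimal worked check, using invented figures, is shown below.

```python
# Worked sanity check for the final Note of Practice 5.3.1. The figures are
# invented; in practice both terms come from measurement or from the vendor
# fan-power data requested under Practice 4.1.6.
def net_annual_saving_kwh(cooling_power_saved_kw: float,
                          it_fan_power_increase_kw: float,
                          hours: float = 8760) -> float:
    """Positive result: raising the set point saves energy overall."""
    return (cooling_power_saved_kw - it_fan_power_increase_kw) * hours

# Say a higher intake set point trims 12 kW from the cooling plant, but older
# servers ramp their fans by a combined 15 kW: the change is a net loss.
print(net_annual_saving_kwh(cooling_power_saved_kw=12.0,
                            it_fan_power_increase_kw=15.0))  # -26280.0 kWh/year
```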

5.3.2 Review and widen the working humidity range

Original Text - Reduce the lower humidity set point(s) of the data centre within the ASHRAE Class A2 range (–12°C DP and 8% rh to 27°C DP and 80% rh) to reduce the demand for humidification. Review and if practical increase the upper humidity set point(s) of the data floor within the current A2 humidity range of (–12°C DP and 8% rh to 27°C DP and 80% rh) to decrease the dehumidification loads within the facility. The relevant standard is the ASHRAE Class A2 allowable range for Data Centers. Note: Some data centres may contain equipment with legacy environmental ranges as defined in 4.1.2, the humidity range for these facilities will be restricted by this equipment until segregation can be achieved as described in Practices 5.1.11, 5.1.4 and 5.1.5. Controlling humidity within a wider range of humidity ratio or relative humidity can reduce humidification and dehumidification loads and therefore energy consumption. A summary of ASHRAE environmental guidelines can be found at: https://www.ashrae.org/File%20Library/Technical%20Resources/Publication%20Errata%20and%20Updates/90577_errata.pdf

2023 Edit - Reduce the lower humidity set point(s) of the data centre within the ASHRAE Class A2 range to reduce the demand for humidification.

Review and if practical increase the upper humidity set point(s) of the data floor within the current A2 humidity range to decrease the dehumidification loads within the facility.

The relevant standard is the ASHRAE Class A2 allowable range for Data Centers.

Note: Some data centres may contain equipment with legacy environmental ranges as defined in 4.1.2, the humidity range for these facilities will be restricted by this equipment until segregation can be achieved as described in Practices 5.1.11, 5.3.4 and 5.3.5.

Controlling humidity within a wider range of humidity ratio or relative humidity can reduce humidification and dehumidification loads and therefore energy consumption.

A summary of ASHRAE environmental guidelines can be found at: https://www.ashrae.org/file%20library/technical%20resources/bookstore/supplemental%20files/referencecard_2021thermalguidelines.pdf
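
For operators checking measured conditions against these limits, the sketch below classifies a temperature and relative humidity reading against the Class A2 humidity bounds quoted in the original text (–12°C DP and 8% rh to 27°C DP and 80% rh), using the Magnus approximation for dew point. It is a simplified illustration, not a control algorithm.

```python
# Simplified illustration for Practice 5.3.2: classify a measured condition
# against the Class A2 humidity limits quoted above (–12°C DP and 8% rh to
# 27°C DP and 80% rh). Dew point uses the Magnus approximation, which is
# adequate for a control-range check of this kind.
import math

def dew_point_c(temp_c: float, rh_pct: float, b: float = 17.62, c: float = 243.12) -> float:
    gamma = math.log(rh_pct / 100.0) + (b * temp_c) / (c + temp_c)
    return (c * gamma) / (b - gamma)

def humidity_action(temp_c: float, rh_pct: float) -> str:
    dp = dew_point_c(temp_c, rh_pct)
    if rh_pct < 8.0 or dp < -12.0:
        return "humidify"
    if rh_pct > 80.0 or dp > 27.0:
        return "dehumidify"
    return "no action (within the A2 humidity range)"

print(humidity_action(24.0, 50.0))  # no action (within the A2 humidity range)
print(humidity_action(24.0, 6.0))   # humidify
```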

 

 

5.4.2.4 Variable speed drives for compressors, pumps and fans

Value changed from 2 to 3

8.2.1 Locate the Data Centre where waste heat can be reused

Value changed from 3 to 4

8.2.2 Locate the Data Centre in an area of low ambient temperature

Value changed from 3 to 4

 



