Proposed ANSI/TIA-942-A standard rewrites the book on everything from copper and fiber cabling to cabinets.
May 1, 2011
There is a new revision of the ANSI/TIA-942-A data centre standard under development by the TIA TR-42.1 subcommittee. ANSI/TIA-942-A is harmonized with the recently published ANSI/TIA-568-C series of standards, including the topology and the environmental classifications. The new revision also incorporates addenda ANSI/TIA-942-1 and ANSI/TIA-942-2.
The major changes in the TIA-942-A standard are:
• Removal of the 100 metre length limitation for optical fiber horizontal cabling and replacement with distances that are based on individual application requirements.
• Category 3 and Category 5e cabling are no longer recognized for horizontal cabling of data centres. The minimum category of performance for balanced twisted-pair cabling is Category 6, with Category 6A recommended.
• OM1 and OM2 optical fiber cable are no longer recognized. The recognized multimode optical fiber cable for horizontal and backbone cabling is now OM3 and OM4 (850-nm laser-optimized 50/125 µm) multimode fiber cable.
• The recognized optical fiber connectors are LC for one or two fibers and MPO for more than two optical fibers.
The new revision of ANSI/TIA-942-A includes an expanded topology to accommodate large data centres such as facilities located on multiple floors or in multiple rooms.
These large data centres can include a Main Distribution Area (MDA), an Intermediate Distribution Area (IDA) and a Horizontal Distribution Area (HDA) and may include an optional Zone Distribution Area (ZDA).
ANSI/TIA-942-A also includes a reduced data centre topology that consolidates the MDA and HDA in a single MDA, possibly as small as a single cabinet or rack.
The most notable addition to the ANSI/TIA-942-A data centre standard is a new section on energy efficient design.
Data centres consume large amounts of energy, most of which is converted to heat. Serious consideration needs to be given to thermal management and cooling efficiencies.
It is estimated that cooling accounts for up to 30% of a data centre’s total energy load. What is more, the cooling process itself is not very efficient, largely due to an oversupply of cold air to the data centre by computer room air conditioner (CRAC) units attempting to compensate for inefficiencies in the enclosure cooling process. A substantial amount of the cold air from the CRACs never makes it to the enclosures, and some of the hot air exhausted at the rear of the enclosure by server fans leaks back to the front of the servers instead of returning to the CRAC return.
A recent study conducted on 19 large computer rooms found that, on average, the amount of cold air supplied to a data centre room is 2.6 times the amount of cold air actually consumed by the IT load. Furthermore, the study found that even though the rooms are oversupplied with cold air by almost a factor of 3, an average of 10% of enclosures still experience air intake temperatures exceeding ASHRAE maximum reliability guidelines.
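To see what that oversupply ratio implies, a quick back-of-the-envelope calculation (the arithmetic below is illustrative and not part of the cited study) shows that when 2.6 units of cold air are supplied for every unit the IT load consumes, roughly 62% of the supplied air bypasses the equipment entirely:

```python
# Illustrative arithmetic based on the study's 2.6x oversupply figure.
# supply_ratio = cold air delivered per unit actually consumed by the IT load.
supply_ratio = 2.6

# Fraction of supplied cold air that never passes through the IT equipment
# before returning to the CRAC (bypass airflow).
bypass_fraction = (supply_ratio - 1) / supply_ratio

print(f"Bypass airflow: {bypass_fraction:.0%}")  # prints "Bypass airflow: 62%"
```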
In order to address these concerns, TIA-942-A recommends various techniques that can be used to improve cooling efficiency for enclosures and enclosure systems, for example:
• cabinets with isolated air-supply
• cabinets with isolated air-return
• cabinets with in-cabinet cooling systems
• hot-aisle containment or cold-aisle containment systems
• brushes or grommets in cable openings of the enclosure or enclosure system to minimize loss of air
• blanking panels in unused rack unit positions of equipment cabinets to avoid mixing of hot and cold air
• filling unused cabinet/rack positions in equipment rows with a cabinet/rack, or otherwise sealing them, to prevent mixing of air between hot and cold aisles
It is recommended to use a cooling system that can vary the volume of air as needed. Equipment airflow should match the airflow design of the enclosures and the computer room space in which the equipment is placed. Equipment with non-standard airflow may require specially designed enclosures to avoid disrupting proper airflow.
It is also recommended that cabinets and racks be provided with power strips that permit monitoring of power levels, to ensure that power levels in enclosures do not exceed designed power and cooling capacities. In addition, SNMP-enabled smart power strips, along with the appropriate management software, can provide the capability to record and analyze industry efficiency metrics such as PUE and DCiE. These metrics are part of the Green Data Center initiative and are just as important as measuring network performance.
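The metrics themselves are simple ratios: PUE is total facility power divided by IT equipment power, and DCiE is its reciprocal expressed as a percentage. A minimal sketch of the calculation, using invented sample power readings purely for illustration:

```python
# Sketch of the PUE/DCiE efficiency metrics.
# The power readings (kW) are invented for illustration only.
total_facility_power = 1200.0  # everything: IT load, cooling, lighting, losses
it_equipment_power = 750.0     # servers, storage, network gear

# PUE (Power Usage Effectiveness): lower is better; the ideal is 1.0.
pue = total_facility_power / it_equipment_power

# DCiE (Data Center infrastructure Efficiency) is 1/PUE, as a percentage.
dcie = it_equipment_power / total_facility_power * 100

print(f"PUE = {pue:.2f}, DCiE = {dcie:.1f}%")  # prints "PUE = 1.60, DCiE = 62.5%"
```

Logging these ratios over time from smart power strip readings is what makes the trend analysis described above possible.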
The revised ANSI/TIA-942-A standard recognizes that there is no single thermal management architecture that is most energy efficient for all installations. Critical factors unique to the customer, application and environment should be carefully evaluated to ensure optimum performance and efficiency.
It is anticipated that the document will be approved for publication later this year, depending on the outcome of the balloting process.
Paul Kish is Director, Systems and Standards at Belden. The information presented is the author’s view and is not official TIA correspondence.