
Delving into TIA-942

This is the first standard that specifically addresses data centre infrastructure. More than half deals with facility requirements.


July 1, 2006  



The TIA’s Telecommunications Infrastructure Standard for Data Centres is a comprehensive document that covers all aspects of the design and provisioning of a data centre, including facilities design, network design, and cabling design.

Over half of the document, which is designated TIA-942, deals with the facilities design elements. For this month’s article I wanted to focus on some of the more important considerations in the design process.

Physical environment: A data centre contains a huge concentration of electronic equipment that is crucial to running the applications that handle the core business and operational data of an organization.

The physical environment must be strictly controlled to assure the integrity and functionality of its hosted computer environment.

Air conditioning is used to keep the room around 17 degrees Celsius (63 degrees Fahrenheit). This is crucial since electronic equipment in a confined space tends to malfunction if not adequately cooled.

Backup power is provided via one or more uninterruptible power supplies and/or diesel generators. To prevent single points of failure, the electrical systems are typically fully duplicated, and critical servers are connected to both the “A-side” and “B-side” power feeds.
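As a rough illustration of that redundancy idea only, the short Python sketch below models a dual-corded server fed from independent A-side and B-side paths and confirms it stays powered when either single feed fails. The names and structure are hypothetical, not part of TIA-942.

```python
# Illustrative sketch: dual-corded server on independent A/B power paths.
# All names and values are assumptions for the example, not TIA-942 content.

from dataclasses import dataclass

@dataclass
class PowerFeed:
    name: str
    available: bool = True

@dataclass
class Server:
    name: str
    feeds: tuple  # two PowerFeed objects for a dual-corded server

    def is_powered(self) -> bool:
        # The server stays up as long as at least one feed is live.
        return any(feed.available for feed in self.feeds)

a_side = PowerFeed("UPS-A")
b_side = PowerFeed("UPS-B")
db_server = Server("db01", (a_side, b_side))

a_side.available = False          # simulate loss of the entire A-side path
print(db_server.is_powered())     # True -- no single point of failure
```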

Physical security also plays a large role in data centres. Physical access is usually restricted to selected personnel, and video surveillance and permanent security guards are almost always present if the data centre contains sensitive information.

One of the things that impressed me the most is the thought and planning that goes into the design of the data centre facility.

The steps in the design process include estimating the space, power, cooling, security, floor loading, grounding, electrical protection and other facility requirements at full capacity, and anticipating future telecommunications, power and cooling trends over the lifetime of the data centre.
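A back-of-the-envelope version of such a full-capacity estimate might look like the Python sketch below. The per-cabinet figures and the growth allowance are assumptions chosen purely for illustration, not values from the standard.

```python
# Illustrative full-capacity estimate for a computer room.
# All per-cabinet figures are example assumptions, not TIA-942 values.

cabinets          = 120     # planned cabinet count at full build-out
watts_per_cabinet = 4000    # assumed average IT load per cabinet (W)
sqft_per_cabinet  = 25      # assumed footprint incl. aisle share (sq ft)
growth_factor     = 1.25    # headroom for future equipment trends

it_load_watts   = cabinets * watts_per_cabinet * growth_factor
floor_area_sqft = cabinets * sqft_per_cabinet
power_density   = it_load_watts / floor_area_sqft

print(f"Design IT load: {it_load_watts / 1000:.0f} kW")
print(f"Computer room:  {floor_area_sqft} sq ft")
print(f"Power density:  {power_density:.0f} W/sq ft")
```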

Cooling capacity: Cooling capacity is becoming increasingly important because of the high concentration of powerful servers and switching equipment.

Research conducted by Jennifer Mitchell-Jackson at the University of California at Berkeley in 2000 showed that data centre computer rooms use an average of 50 watts per square foot or less.

Some recent discussions I have had with consultants involved in the design of modern data centres indicated that this number could be as high as 300 watts per square foot. This may necessitate some novel techniques for air handling and cooling.
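To put those densities in perspective, the short calculation below converts a watts-per-square-foot figure into a total heat load and an approximate cooling requirement in tons of refrigeration (one ton is about 3.517 kW). The 5,000 sq ft room size is an assumed example input.

```python
# Convert power density to total heat load and approximate cooling tonnage.
# The room area and the two densities are example inputs only.

TON_OF_COOLING_W = 3517          # 1 ton of refrigeration ~= 3.517 kW

room_area_sqft = 5000
for density_w_per_sqft in (50, 300):
    heat_load_w = density_w_per_sqft * room_area_sqft
    tons = heat_load_w / TON_OF_COOLING_W
    print(f"{density_w_per_sqft:>3} W/sq ft -> {heat_load_w / 1000:.0f} kW "
          f"(~{tons:.0f} tons of cooling)")
```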

Equipment is placed in cabinets and racks with “cold” air intake at the front of the cabinet or rack and “hot” air exhaust out the back, creating an alternating pattern of hot and cold aisles.

For high heat loads, forced airflow is required to provide adequate cooling for all the equipment in the cabinet. A forced airflow system uses properly placed vents in combination with cooling fan systems.
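How much airflow is enough can be estimated from the standard sensible-heat rule of thumb, Q (BTU/hr) = 1.08 × CFM × ΔT (°F). The sketch below applies it to an assumed cabinet load and an assumed cold-aisle-to-hot-aisle temperature rise; both inputs are examples, not figures from the standard.

```python
# Airflow needed to carry away a given heat load (sensible heat rule of thumb):
#   Q [BTU/hr] = 1.08 * CFM * dT[degF]   =>   CFM = (W * 3.412) / (1.08 * dT)
# The cabinet load and temperature rise are assumed example values.

def required_cfm(load_watts: float, delta_t_f: float) -> float:
    btu_per_hr = load_watts * 3.412          # watts -> BTU/hr
    return btu_per_hr / (1.08 * delta_t_f)   # standard-air sensible heat factor

cabinet_load_w = 4000   # assumed heat load for one cabinet (W)
delta_t_f      = 20     # assumed air temperature rise across the equipment (degF)

print(f"~{required_cfm(cabinet_load_w, delta_t_f):.0f} CFM per cabinet")
```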

Pathway design and provisioning: Another important consideration is pathway design and sizing to accommodate the combination of high density of equipment and the larger diameter cables to support 10 Gb/s transmission.

For example, vertical cable management on the side of a rack that is 12 inches wide and 9.5 inches deep can accommodate about 1,400 Category 5e cables, about 1,000 Category 6 cables or 700 Category 6A cables.

These pathways need to be sized to accommodate the total number of switch ports and patch panel connections.
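The cable counts quoted above follow from simple cross-sectional fill arithmetic. The sketch below reproduces the idea using assumed outside diameters for each cable category and an assumed 40% fill ratio, so the resulting counts are approximations and will differ from the standard’s exact figures.

```python
# Rough pathway fill calculation for a 12 in x 9.5 in vertical cable manager.
# The outside diameters and 40% fill ratio are assumptions for illustration;
# actual values depend on the specific cable and the applicable fill rules.

import math

PATHWAY_W_IN, PATHWAY_D_IN = 12.0, 9.5
FILL_RATIO = 0.40                    # assumed usable fraction of the cross-section

cable_od_in = {"Category 5e": 0.20, "Category 6": 0.24, "Category 6A": 0.29}

usable_area = PATHWAY_W_IN * PATHWAY_D_IN * FILL_RATIO
for category, od in cable_od_in.items():
    cable_area = math.pi * (od / 2) ** 2
    print(f"{category}: ~{int(usable_area // cable_area)} cables")
```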

For underfloor distribution, it is recommended that wire basket cable trays for telecommunications cabling be placed under the hot aisles, whereas power cabling is usually placed under the cold aisles.

This maintains a natural separation between power and telecommunications cables. Alternatively, an overhead layered cable tray system may be used, where the bottom layer holds copper cables, the middle layer holds fibre cables and the top layer holds power cables.

These trays are attached to the signal reference grid, which provides a common ground point for all equipment, racks and cabinets.

The signal reference grid consists of a copper conductor grid on 0.6 to 3 m (2 to 10 ft) centres that covers the entire computer room space.
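As a small illustration of what that grid involves, the sketch below estimates how many conductors and how much copper a room of a given size needs at a chosen spacing. The room dimensions are assumed, and the 0.6 m spacing is simply the tightest value in the 0.6 to 3 m range cited above.

```python
# Estimate the conductor count and copper length for a signal reference grid
# laid out on a regular spacing. Room dimensions are assumed example inputs.

import math

room_length_m, room_width_m = 30.0, 20.0
spacing_m = 0.6                      # tightest spacing in the 0.6-3 m range

# Conductors run in both directions; each direction needs (span/spacing) + 1 runs.
runs_lengthwise = math.floor(room_width_m / spacing_m) + 1
runs_widthwise  = math.floor(room_length_m / spacing_m) + 1

total_copper_m = runs_lengthwise * room_length_m + runs_widthwise * room_width_m
print(f"{runs_lengthwise + runs_widthwise} conductors, "
      f"~{total_copper_m:.0f} m of copper")
```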

The TIA-942 is the first standard that specifically addresses data centre infrastructure. More than half of the content deals with facility requirements.

It provides a flexible and manageable structured cabling system that builds on existing standards where applicable, and it also offers guidelines on a wide range of subjects that are invaluable to anyone designing or managing a data centre.

Paul Kish is Director, Systems & Standards at Belden CDT. He is a key contributor in the development of cabling standards with TIA and has served as Chair of TR 41.8 and Vice Chair of TR 42 Engineering Committee.

Disclaimer: The information presented is the author’s view and is not official TIA correspondence.