Connections

Big change afoot for the data centre

Pending standards from a TIA working group address a range of facilities, cabling, and network-architecture issues.

November 1, 2002  


The Telecommunications Industry Association (TIA) TR-42.1.1 working group is developing a data center infrastructure standard that will address facilities, cabling, and network architectures for data centers. This article primarily addresses data center pathways and spaces.


Data center spaces defined in the current working documents are shown in Figure 1 (see below) and are described below:

Entrance Room (ER) – includes carrier equipment and carrier demarcation.

Main Cross-Connect (MC) – central point of distribution for the data center structured cabling system.

Intermediate Cross-Connect (IC) – distribution point for horizontal cabling to end equipment areas. A typical data center will have several intermediate cross-connects.

Zone Outlet (ZO) – optional interconnect point for horizontal cabling between the intermediate cross-connect (IC) or main cross-connect (MC) and the end equipment area.

End Equipment Areas (EAs) – computer room spaces allocated for end equipment – including computer systems and communications equipment. These areas do not include the TRs, ERs, MC, or ICs.

Telecommunications Room (TR) – distribution point for cabling to areas outside the computer room space including support offices, operations center, support equipment areas. There may be more than one TR.

Support Areas – spaces outside the computer room for personnel and equipment that support the data center, including operation center, support personnel offices, electrical rooms, mechanical rooms, security rooms, storage rooms, staging rooms, and loading docks.

In smaller data centers, the ER, IC, MC, and possibly the TR may be merged into a single space.


Backbone cabling shall use the hierarchical star topology where each IC is cabled directly to the MC. Backbone cabling includes all cabling between the MC, ERs, ICs, and TRs. There is only one MC.

Additionally, there is only one tier of ICs – all ICs are directly cabled to the MC. However, direct cabling between ICs and from secondary ERs to the ICs is permitted.
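The hierarchical star rules above lend themselves to a simple automated check of a cabling plan. The following Python sketch is illustrative only – the space names, link representation, and function are hypothetical, not part of the draft standard:

```python
# Hypothetical sketch: checking a backbone cabling plan against the
# hierarchical star rules described above (one MC, one tier of ICs,
# IC-to-IC and secondary-ER-to-IC links permitted).

ALLOWED = {
    ("MC", "IC"), ("MC", "ER"), ("MC", "TR"),
    ("IC", "IC"),   # direct cabling between ICs is permitted
    ("ER", "IC"),   # secondary ERs may cable directly to ICs
}

def check_backbone(spaces, links):
    """spaces: dict of name -> type ('MC', 'IC', 'ER', 'TR').
    links: set of (name, name) backbone cable runs.
    Returns a list of rule violations (empty if the plan conforms)."""
    errors = []
    mcs = [n for n, t in spaces.items() if t == "MC"]
    if len(mcs) != 1:
        return ["exactly one MC is required"]
    mc = mcs[0]
    for a, b in links:
        kinds = (spaces[a], spaces[b])
        if kinds not in ALLOWED and kinds[::-1] not in ALLOWED:
            errors.append(f"link {a}-{b} ({kinds[0]}-{kinds[1]}) not permitted")
    # single tier of ICs: every IC must be cabled directly to the MC
    for n, t in spaces.items():
        if t == "IC" and (mc, n) not in links and (n, mc) not in links:
            errors.append(f"IC {n} has no direct cable to the MC")
    return errors
```

A plan with one MC star-cabled to every IC, ER, and TR passes; a plan that chains an IC behind another IC without a direct run to the MC is flagged.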

Horizontal cabling includes all cabling from the MC and ICs to the end equipment areas.

The media types being considered for both backbone and horizontal cabling are:

100-ohm twisted-pair cable, preferably Category 6,

75-ohm (734- and 735-type) coaxial cable for T-3 circuits, preferably 734-type except in small data centers,

50/125-µm multimode optical fiber cable, preferably 850-nm laser-optimized,

Single-mode optical fiber cable.


The entrance room includes carrier equipment, racks for demarcation to carriers, and racks for cabling to the computer room. The room may need plywood if the local carriers plan to install wall-mounted protector blocks for copper-pair entrance cables.

Meet with your local carriers to determine their power and space requirements for the equipment they plan to place in this room.

The ER will need to be located so that maximum cable lengths for circuits are not exceeded. The ER may be adjacent to the MC, or combined with it, to avoid circuit-distance problems. However, the ER may need to be outside the computer room to keep carrier technicians from entering the computer room space.

Additional ERs may be required in large data centers to permit circuits to be installed in locations that are too far from the primary ER. Secondary ERs may either have dedicated carrier entrance facilities or be subsidiary to the primary ER.

To simplify circuit management and cross-connects, all carriers should be requested to hand off circuits in the demarcation or meet-me area in the ER, rather than in their own racks. The demarcation area should have separate racks for each type of circuit.

Carriers should be requested to hand off voice and low-speed circuits on punch down blocks in the low-speed circuit demarcation area. This area will also have punch down blocks for twisted-pair cabling to the computer room.

You can use high-pair count Category 3 cable for low-speed circuit cabling from the ER. Terminations on the carrier punch down blocks are 1- or 2-pair terminations depending on the circuit. Terminations on the punch down blocks for cabling to the computer room are 4-pair terminations. The termination sequence depends on the type of circuit.

Request that carriers hand off T-1 circuits on patch panels (preferably DS-1 DSX panels) with RJ48X jacks in the DS-1 demarcation area. Patch panels for T-1 cabling to the computer room will also be located here. You should use 2-pair ABAM cabling or 4-pair Category 5 or better UTP cabling from the ER to the computer room. There should be no more than one T-1 circuit on each 4-pair (or higher pair count) cable.

Request that carriers hand off T-3 circuits on DS-3 DSX panels in the T-3 demarcation area. Patch panels for 75-ohm coaxial cabling to the computer room will also be located in the T-3 demarcation area. Use 734-type coaxial cable for T-3 cabling; each circuit will require two cables. Be certain that all connectors are 75-ohm BNC connectors, not the 50-ohm connectors used for Ethernet or the 93-ohm connectors used for IBM 3270 terminals.

Request that carriers hand off optical fiber circuits (SONET, MAN, Gigabit Ethernet) on fiber patch panels in the fiber demarcation area. Preferably, all the carriers will hand off circuits using the same type of fiber connector. Patch panels for fiber cabling to the MC will also be located here; you will typically only need single-mode fiber.
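The per-circuit cabling rules above (4-pair terminations for low-speed circuits, one T-1 per 4-pair cable, two 734-type coax runs per T-3) can be turned into a rough ER-to-computer-room cable tally. This Python sketch is a planning aid only; the function name, the default Category 3 pair count, and the assumption of duplex fiber per circuit are mine, not the draft standard's:

```python
import math

def er_cabling(low_speed=0, t1=0, t3=0, fiber=0, cat3_pairs=100):
    """Rough ER-to-computer-room cable counts under the hand-off rules above.
    cat3_pairs: pair count of the high-pair-count Category 3 cable used for
    low-speed circuits (100-pair is an assumed value, not a requirement)."""
    return {
        # each low-speed circuit gets a 4-pair termination on the
        # computer-room side, consuming 4 pairs of the Category 3 cable
        "cat3_cables": math.ceil(low_speed * 4 / cat3_pairs),
        # no more than one T-1 circuit per 4-pair cable
        "t1_cables": t1,
        # each T-3 circuit requires two 734-type coaxial cables
        "t3_coax_cables": t3 * 2,
        # assuming duplex (two-strand) fiber per optical circuit
        "fiber_strands": fiber * 2,
    }
```

For example, 50 low-speed circuits, 10 T-1s, 4 T-3s, and 6 fiber circuits would call for two 100-pair Category 3 cables, ten 4-pair cables, eight coax runs, and twelve fiber strands.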


The main cross-connect is the central point of distribution for the data center structured cabling system. As with the ER, it should be centrally located to avoid exceeding circuit-distance restrictions. The MC may also support horizontal cabling to nearby end equipment areas.

The MC should have separate racks for fiber, twisted-pair, and coax distribution. The core switches and routers are also typically located in the MC.


The intermediate cross-connect is the distribution point for horizontal cabling to end equipment. Distribution LAN switches, SAN switches, KVM switches, and console servers are typically located in the IC.

The number of ICs depends on the density of cabling and the size of the data center. The capacity of the cable tray system also creates practical limits on the size of the IC. The current proposal is a maximum of 2,000 twisted-pair modular jacks or BNC connectors per IC.
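Under the proposed cap, a minimum IC count falls out of simple arithmetic. The sketch below assumes the 2,000-termination figure from the current proposal; the function itself is illustrative, and real IC counts will also be driven by cable-tray capacity and floor layout:

```python
import math

# Proposed cap from the current TR-42.1.1 working documents:
# at most 2,000 twisted-pair modular jacks or BNC connectors per IC.
MAX_TERMINATIONS_PER_IC = 2000

def ics_needed(tp_jacks, bnc_connectors):
    """Minimum number of ICs under the proposed per-IC termination cap.
    Cable-tray capacity and equipment-row layout may require more."""
    total = tp_jacks + bnc_connectors
    return math.ceil(total / MAX_TERMINATIONS_PER_IC)
```

A floor plan with 4,500 twisted-pair jacks and 600 BNC connectors, for instance, would need at least three ICs.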

Arrange patch bays and equipment to minimize patch cable lengths. The IC should have separate patching bays for twisted-pair, coax, and fiber distribution.


Telecommunications cabling is either distributed in overhead cable trays or under the raised floor.

With either raised floors or overhead cable trays, it’s essential to coordinate cabinet and rack locations with the electrical and mechanical engineers. Lighting should be placed between rows, not above the cabinets, racks, or cable trays. Additionally, sprinkler heads will need at least 18 inches of clearance. Airflow from cooling equipment should be parallel to the equipment rows. Pathways for electrical cabling, ducts, pipes, and telecommunications cabling will need to be coordinated to ensure that they do not conflict.

Raised floors allow higher power density (watts/sq ft), better cooling control, and more flexibility in location of cooling equipment. They also have a better appearance than overhead cabling. Additionally, most floor-standing computer systems, such as mainframes and minicomputers, are designed for cabling from below. When cables are installed under the raised floor, cables should be placed in wire basket trays or other cable tray systems that do not impede airflow.

Overhead cable trays are less expensive than raised floor systems. Cable trays are typically suspended from the ceiling. They may be installed in several layers to provide adequate capacity.

This article is a very abbreviated version of the information that will be included in the upcoming data center infrastructure standard being developed by the TIA TR-42.1.1 working group. The goal is to complete the standard by December 2004.

Jonathan Jew is the President of J&M Consultants, Inc. – a telecommunications consulting firm that specializes in technology room planning and relocations. Over the last 18 years, he has been the primary designer for 30 major data centers in the United States, Asia, Europe, and Australia. Jonathan is a key contributor to the TIA TR-42.1.1 working group.