Five thousand fiber-optic ports in one cabinet: how can anyone keep track of everything? Data center cabling is reaching new dimensions, and this world can no longer be managed by hand with a laptop and spreadsheets.
On the road to 400 Gigabit Ethernet, data centers are making the most of every millimeter. In the computer room, computing power is being consolidated and spine-leaf architectures are gaining ground. Racks need to accommodate more channels, fibers and ports than ever before. Cables with 1,782 or 3,456 fibers are becoming established in data centers, and the market is now demanding distributors in the ultra-high-density class. This means that, ideally, more than 5,000 ports can be packed into a single rack. In most cases this potential is not fully exploited, yet the questions remain: How should data center operators manage the mass of cables, fibers and ports? How can they cope with the increasing complexity of networks with fewer and fewer staff?
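To put these densities in perspective, here is a quick back-of-the-envelope calculation. The 42U rack height and the duplex termination of the trunk cable are illustrative assumptions, not figures from the article:

```python
# Illustrative sketch: what "more than 5,000 ports per rack" implies
# for port density in a standard-height cabinet (42U assumed here).

def ports_per_u(total_ports: int, rack_units: int) -> float:
    """Average number of ports each rack unit must accommodate."""
    return total_ports / rack_units

# A 3,456-fiber trunk terminated as duplex channels yields 1,728 ports.
duplex_ports = 3456 // 2

density = ports_per_u(5000, 42)
print(f"{duplex_ports} duplex ports from a 3,456-fiber cable")
print(f"about {density:.0f} ports per rack unit at 5,000 ports in 42U")
```

Roughly 119 ports per rack unit: a density at which labels and spreadsheets stop being a workable documentation method.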
What does experience say?
Five everyday experiences underline the urgency of these questions:
- Traditionally, laptops, spreadsheets and labels have been cost-effective tools. This is how the physical network layer, documentation, service and moves, adds and changes (MACs) are organized in data centers. Experience shows that errors creep in when things are done by hand: people forget or misunderstand something, and the data becomes obsolete within a few days. The worst consequences are outages, patching errors, malfunctions on customer premises, failed audits and breaches of compliance. It is no longer economically justifiable to manage networks and connectivity by hand.
- If someone overstretches a fiber while managing cables in a packed cabinet, attenuation can increase. If dust settles on the contact surfaces, that too has consequences. Individually, each case means only a small loss of performance. Added together, however, they can increase the latency of 5G cells, traffic control centers, factories, Internet services and AI applications.
- How can such performance losses be precisely measured and located among thousands of fibers in a single rack? Packed to the brim with equipment, far away from the operator, no technician on site: this is the situation at edge data centers.
- How can connectivity be managed and controlled out at the edge? A study by the Uptime Institute suggests that data center operators could prevent about 75% of known outages. To do so, they would have to improve management, tools and processes and eliminate the typical self-inflicted errors in cable management and network documentation.
- In aviation, such negligence would have catastrophic consequences. In data centers it has so far been tolerated. Even at the top quality level – in Tier IV data centers – an availability of 99.995% is considered standard. This corresponds to a downtime tolerance of roughly 26 minutes a year.
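The relationship between an availability percentage and the tolerated yearly downtime is simple to check. The Tier III figure of 99.982% below is added for comparison and is not from the article:

```python
# Converting an availability percentage into tolerated downtime per year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes (non-leap year)

def downtime_minutes(availability_pct: float) -> float:
    """Yearly downtime tolerated at a given availability percentage."""
    return (1 - availability_pct / 100) * MINUTES_PER_YEAR

print(f"Tier IV  (99.995%): {downtime_minutes(99.995):.1f} min/year")  # ~26.3
print(f"Tier III (99.982%): {downtime_minutes(99.982):.1f} min/year")  # ~94.6
```

Even at the highest tier, the tolerance window is under half an hour per year, which is why individual patching errors matter.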
Need for a paradigm shift
It is time for a paradigm shift in the management of the physical network layer. People talk about zero-defect production when data centers are to meet the security needs of the digital decade. The market already offers much of what is needed for this: Data Center Infrastructure Management (DCIM) and Automated Infrastructure Management (AIM) solutions. DCIM is available in every shape, size and degree of complexity. DCIM systems can be used to manage buildings, access, electricity, UPS, temperature, fire protection, resources, assets, cabling and much more. There are all-encompassing and modular offerings, stand-alone and SaaS products; even artificial intelligence is available. Yet none offers unconditional transparency. Very few people would first think of the network connector. But this small, non-virtualizable part can decide the availability of an entire data center or a global network.
There is even a dependency here: DCIM only works if the right patch cords are inserted in the right ports. What happens if critical information does not immediately appear on the operator’s DCIM dashboard? There is certainly at least one redundant LAN connection, but that does not change the fact that availability starts with the connector. There can be no high security in the data center without 100% end-to-end control of the passive network layer, whether in an enterprise, colocation or edge data center.
Further criteria to consider in an evaluation:
- Expertise: The vendor is well versed in the technology and provides mature AIM products, know-how and a roadmap.
- Standards: The hardware fits into every rack. It supports future network generations.
- Customizing: The AIM can be expanded to include functions such as MAC planning and asset management.
- Openness: Software and products from partners, IoT devices, etc. can be connected as required.
- Integration: The AIM is easily scalable and capable of upward and downward integration, e.g. into a higher-level DCIM.
- Security: There are protection and security solutions, firewalls, authentication, etc. for all levels.
- Operation: The user interface can be operated intuitively. The software is simple and localizable.
- Training: Webinars, explanatory films and short learning units quickly familiarize those responsible with the system.
- Compliance: The infrastructure management must meet compliance, SLA, QMS, data protection and documentation requirements.
Investing step by step
The bottom-up approach is also of interest to the CIO, as the company can invest gradually. The CIO will first include the essential AIM components in the budget and then, following a professional roadmap, work towards a comprehensive DCIM that remains manageable rather than overloaded and complex. The project starts in the meet-me room or in cross-connect cabinets; further racks are gradually connected to the smart AIM system. Each step makes the data center more secure, more efficient and more competitive. Future cabling dimensions, security requirements and operational challenges demand zero-defect production on layer 1. Smart technology for connectors, ports and monitoring relieves operators of inefficient, expensive manual labor in high-density networks. With a bottom-up approach, data center operators can align connectivity management with the DCIM future.