Data centers around the world are constantly changing. Evolving needs to process more data, save energy, and reduce costs grow more pressing every day. The massive proliferation of IoT devices is driving demand for faster data processing between powerful servers and complex switches, both locally, inside the intra-connected data center (InDC), and geographically, between interconnected data centers (ExDC). All of this serves to keep data-transmission signaling delays short for applications such as virtual gaming, smart cities, and real-time alerts.
Small- to mid-sized data centers traditionally use multimode fibers and VCSEL-based transceivers to keep system costs down. Larger data centers require techniques such as WDM, PAM-4, parallel-optics (multi-fiber) transmission, BiDi, and other more sophisticated electronics and signal processing to deliver ever more bandwidth. The architecture of switch-to-server connections is shifting from slower, more congested three-tier topologies to faster spine-leaf links, but often at the expense of many more fibers and cross-connections between active equipment.
The trend in InDC links for small- to mid-sized data centers is toward more connections over multimode fiber. However, fiber modal dispersion, or differential mode delay (DMD), can limit both transmission distance and bandwidth. As multimode links grow longer and connections become more complex, DMD and latency further degrade quality of service, and channel insertion loss becomes a bigger concern for assuring acceptable signal bandwidth integrity and bit-error-rate (BER) performance. On a global scale, large-scale and hyperscale data centers are being built to consolidate data processing power in more localized areas. Architects of InDC links are considering single-mode fiber and connectivity in the installed cable plant to avoid bandwidth obsolescence and the costly need to pull out legacy multimode cables only to replace them with single-mode cables in the future.
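The way modal dispersion erodes multimode bandwidth with distance can be illustrated with a simple first-order model, effective bandwidth ≈ EMB / length. The sketch below uses the standard effective modal bandwidth (EMB) figures for OM3 and OM4 fiber at 850 nm; the model ignores chromatic dispersion and is for rough intuition only, not link qualification.

```python
# First-order sketch: multimode effective bandwidth falls with link length.
# EMB values are the standard effective modal bandwidths at 850 nm (MHz*km).
# BW = EMB / length ignores chromatic dispersion; it is an approximation.

EMB_MHZ_KM = {"OM3": 2000, "OM4": 4700}

def effective_bandwidth_mhz(fiber_type: str, length_m: float) -> float:
    """Approximate effective modal bandwidth (MHz) for a given link length."""
    return EMB_MHZ_KM[fiber_type] / (length_m / 1000.0)

for fiber in ("OM3", "OM4"):
    for length_m in (100, 300, 550):
        bw = effective_bandwidth_mhz(fiber, length_m)
        print(f"{fiber} at {length_m} m: ~{bw:,.0f} MHz effective modal bandwidth")
```

The point of the sketch is the scaling: doubling the link length halves the usable modal bandwidth, which is why longer multimode runs push designers toward single-mode fiber.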
Traditional single-mode links that use power-hungry, edge-emitting laser transmitters with ample coupled power and channel insertion-loss budgets will continue to serve long-haul and ExDC links. However, power-conserving (relaxed-specification) single-mode laser transceivers are being considered as sustainable solutions for InDC links, which usually do not need to run 2 km or more. As a result, these shorter single-mode links have reduced signal power budgets and channel insertion losses compared with longer-haul single-mode transceivers. The small single-mode fiber core also reduces the optical power coupled from transmitter to receiver compared with multimode fiber. In exchange, the electronics in these relaxed-specification transceivers can be designed with lower laser drive currents and case operating temperatures, offering greater energy savings and overall power efficiency. Data center network designers and managers face many concerns. The architect, owner, or operator must consider how future data rates and technology changes will affect both the currently installed network infrastructure and new builds planned down the road. With increases in next-generation transmission speeds, bandwidth, and overall communications capability come concerns over the associated constraints. Advanced technologies, such as WDM, PAM-4, coherent transmission, and digital signal processing, require better signal-to-noise ratio (SNR) within lower optical power budgets. System predictability, rapid deployment, dynamic topology generation, and lower power consumption are also key considerations.
One important consideration is to allow for single-mode optical loss budgets that meet the maximum channel insertion losses defined by the standards when installing optical networks whose transmission speeds are migrating from 10 Gbps to 40, 100, 200, and 400 Gbps and beyond. Optical loss budgets and channel insertion losses are driven by standards from the Institute of Electrical and Electronics Engineers (IEEE) and the Telecommunications Industry Association (TIA), which often require lower losses for the installed passive optical network components depending on the types of active components employed.
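A channel insertion-loss check is simple arithmetic: fiber attenuation plus the per-component losses of every mated connector pair and splice in the channel, compared against the standard's maximum. The sketch below uses illustrative planning values only; the per-component losses and the 3.0 dB limit are assumptions for the example, not figures quoted from any IEEE or TIA document, so substitute the numbers that govern your actual application.

```python
# Hedged sketch of a channel insertion-loss check for a single-mode link.
# All dB values below are illustrative planning assumptions, not values
# taken from a specific IEEE/TIA standard.

FIBER_DB_PER_KM = 0.4    # assumed single-mode attenuation near 1310 nm
CONNECTOR_PAIR_DB = 0.5  # assumed loss per mated connector pair
SPLICE_DB = 0.1          # assumed loss per fusion splice

def channel_insertion_loss(length_km: float, connector_pairs: int, splices: int) -> float:
    """Total channel insertion loss in dB for the assumed component losses."""
    return (length_km * FIBER_DB_PER_KM
            + connector_pairs * CONNECTOR_PAIR_DB
            + splices * SPLICE_DB)

# Hypothetical 500 m InDC link with 4 connector pairs and 2 splices,
# checked against an assumed 3.0 dB maximum channel insertion loss.
loss_db = channel_insertion_loss(0.5, 4, 2)
print(f"Computed channel loss: {loss_db:.2f} dB")
print("Within budget" if loss_db <= 3.0 else "Over budget")
```

Note how the connector pairs, not the fiber itself, dominate the budget on a short run; this is why cross-connect-heavy spine-leaf topologies put pressure on per-connector loss.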
As mentioned earlier, single-mode links have traditionally been used for long-haul networks and ExDC applications. The standards, however, also allow solutions for shorter-distance single-mode InDC links. These solutions can give the data center designer cost-effective flexibility and connectivity options, exploiting the tremendous bandwidth of single-mode fiber while avoiding installed fiber cabling obsolescence. In addition, the performance specifications of the transceiver optics and electronics can be "relaxed" compared with single-mode transceivers designed for long-range (LR) and extended-range (ER) applications, thereby reducing heat generation and electrical power demands.
Adding passive devices, such as splitters, taps, switches, and connectors, into the link can create the need for additional optical power from the transmitter at the other end of the link. This assures the minimum received optical power needed to maintain the required system signal-to-noise ratio. A higher signal-to-noise ratio significantly improves overall system performance while maintaining the desired transmission speed. Signaling formats and techniques employed in today's optics, such as WDM, PAM-4, and QAM, together with forward error correction (FEC) and digital signal processing, make it possible to boost data rates while ensuring adequate eye-opening performance, but often at the cost of more complicated optics, packaging, and electronics, and more stringent SNR requirements in the system design.
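The effect of inserting a passive device on link margin can be sketched in dB arithmetic: received power is launch power minus the sum of all passive losses, and the margin is whatever remains above the receiver sensitivity. All figures below (launch power, sensitivity, and the ~3.5 dB tap loss) are hypothetical planning numbers chosen for illustration, not transceiver datasheet values.

```python
# Sketch of a link power-budget check when a passive device is added.
# All dBm/dB figures are hypothetical planning numbers for illustration.

TX_POWER_DBM = -1.0        # assumed transmitter launch power
RX_SENSITIVITY_DBM = -9.0  # assumed receiver sensitivity at the target BER

def link_margin_db(passive_losses_db: list) -> float:
    """Margin (dB) remaining above receiver sensitivity after all losses."""
    received_dbm = TX_POWER_DBM - sum(passive_losses_db)
    return received_dbm - RX_SENSITIVITY_DBM

# Base link: fiber attenuation plus two mated connector pairs.
base_losses = [0.3, 0.5, 0.5]
# Same link with a hypothetical ~3.5 dB monitoring tap inserted.
with_tap = base_losses + [3.5]

print(f"Base margin:     {link_margin_db(base_losses):.1f} dB")
print(f"Margin with tap: {link_margin_db(with_tap):.1f} dB")
```

A single tap consumes more budget than the rest of the channel combined in this example, which is exactly why added passive devices translate directly into demands for more transmitter power or lower-loss connectivity.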
To address these demanding requirements, the market has begun to introduce ultralow-loss single-mode systems, such as the Infinium Quantum, with a significantly improved 0.75 dB end-to-end channel connection loss. These systems are redefining performance in the data center: total system loss is nearly 70% lower than in standard systems, pushing past previous limits and providing solutions for AI, hyperscale, cloud, supercomputing, and other high-bandwidth environments.
Installing a low-loss, pre-terminated optical fiber solution increases optical signal power for a given end-to-end fiber run, providing more headroom and an associated increase in SNR. Improved link SNR and BER performance means fewer Ethernet packet retransmissions, which in turn reduces latency and power consumption. It also offers the flexibility to install additional links (supporting higher-speed systems, additional passive components, and cross-connections) or to eliminate the fusion splicing otherwise needed to achieve lower link losses. The cost savings and reduced downtime make it possible to meet the demands of high-speed migration from 40G to 100G and, eventually, 400G networks and beyond. The ability to use lower-power laser devices reduces not only optical transceiver power consumption but also the cooling and conditioning needed to mitigate excessive electronics heat generation, avoiding bulky heat sinks while improving laser component reliability.
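The headroom argument can be made concrete with the dB arithmetic behind it. The 0.75 dB end-to-end connection loss comes from the text above; the 2.5 dB "standard" baseline (consistent with the ~70% improvement cited) and the 0.35 dB per added mated pair are assumptions for illustration only.

```python
# Sketch of the headroom freed by an ultralow-loss connectivity system.
# The 0.75 dB end-to-end figure is from the article; the 2.5 dB baseline
# and the 0.35 dB per-pair loss are illustrative assumptions.

STANDARD_CONNECTION_LOSS_DB = 2.5   # assumed conventional end-to-end loss
ULTRALOW_CONNECTION_LOSS_DB = 0.75  # ultralow-loss system, per the article
PAIR_LOSS_DB = 0.35                 # assumed loss per added low-loss pair

freed_db = STANDARD_CONNECTION_LOSS_DB - ULTRALOW_CONNECTION_LOSS_DB
extra_pairs = int(freed_db // PAIR_LOSS_DB)

print(f"Headroom freed: {freed_db:.2f} dB "
      f"({freed_db / STANDARD_CONNECTION_LOSS_DB:.0%} reduction)")
print(f"Roughly {extra_pairs} extra mated pairs could be supported")
```

Under these assumptions, the freed budget could be spent on several additional cross-connections, on longer fiber runs, or simply banked as SNR margin for higher-speed transceivers.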