Just a few years ago, most data centers were running 1 Gb/s and 10 Gb/s, which seemed incredibly fast. Today, 10 Gb/s is the standard, and 40 Gb/s and 100 Gb/s are not just on the horizon but coming fast. What is driving these changes? Perhaps the most pressing driver is the expectation to access business and entertainment services that require high data rates from anywhere, at any time.

And there is a lot of data. Internet use, tweets, security, IoT, hyper-convergence, software-defined infrastructure — these are just some of the industry trends driving increases in data. Estimates put data generation at 35 zettabytes (35 trillion gigabytes) per year by 2020.

At the hyper-data center level we’re now seeing more than 100,000 servers per data center, so data center managers are looking at every layer and all elements to see what can be done to lower operational costs. These decisions will trickle down and influence the traditional enterprise data center, too.

Currently, duplex multimode fiber powered by vertical cavity surface emitting lasers (VCSELs) is ubiquitous in the data center because it offers a low-cost, reliable solution for 10 Gb/s, the workhorse data rate of the past decade. Consequently, the physical layout of data centers has been dictated by the IEEE 10GBASE-SR standard. Because this specification calls for maximum link distances of 300m over OM3 or 400m over OM4, it has even influenced the size of the buildings.

But this landscape is changing. Data centers are quickly migrating toward 40 Gb/s and 100 Gb/s, and this seismic shift is forcing data center managers to think hard about their migration strategy, because these data rates go beyond what has been possible with duplex multimode fiber.

Until recently, migrating to a higher data rate required tradeoffs between transceiver costs and fiber costs. There were several options:

  • Duplex single-mode fiber running at higher serial data rates: Single-mode fiber costs less than multimode fiber, but the transceiver modules cost more because single-mode fiber requires more precise alignment and cannot be used with lower cost VCSELs.

  • Wave division multiplexing (WDM) over duplex single-mode fiber: WDM multiplexes multiple wavelengths onto a single pair of fibers. This solution uses the fewest fibers but the modules cost more because they must mix together different wavelengths — different colors of light — onto one strand of fiber. When there are very long cable spans and the fiber cost becomes significant, this can be cost-effective. But for shorter reaches, like those found in data centers, the module cost dominates and the solution is expensive.

  • Parallel multimode fiber: Changing from duplex to parallel transmission over multimode fiber allows the user to upgrade to 40GBASE-SR4 by taking four streams of 10 Gb/s each and putting them on parallel fibers. This solution uses four times the amount of fiber (to get four times the bandwidth) but still uses low cost VCSELs. However, it also requires high-density MPO connectors, which have twice the loss of LC connectors, because all eight or 12 fibers must be simultaneously aligned.

  • Short wavelength division multiplexing (SWDM) over duplex multimode fiber: Now a fourth alternative is available. This approach applies WDM to multimode fiber at short wavelengths (near 850 nm). It is the most cost-effective solution because it combines VCSEL transceivers with a solution that uses just two fibers. It also allows data center managers to leverage their installed base of duplex OM3 or OM4 fiber without requiring a forklift upgrade to the fiber plant.

In this article we will look at the transceiver modules, the switches, and the new wideband multimode fiber that together make SWDM a very attractive new option.



SWDM transceiver modules, which enable WDM capabilities using VCSELs, were introduced in late 2015 and are now commercially available from a range of suppliers. One example is the 40G QSFP+ SWDM4 transceiver.

SWDM4, the emerging solution, uses four wavelengths, most commonly in a QSFP+ form factor powered by VCSELs. Each module accepts four electrical signals on the host side. These feed four separate lasers, each transmitting at a slightly different, but precisely defined, wavelength. The wavelengths are multiplexed together inside the module and transmitted out of one LC port so that all four travel together on a single fiber. At the other end, the four signals are de-multiplexed inside the module, separately detected, and then delivered to the host as four separate electrical signals.

The 40 Gb/s SWDM4 QSFP+ transceivers have four lanes of 10.3 Gb/s (so that traditional test equipment can be used) operating in the wavelength range from 850-940 nm. Power dissipations are typically quite reasonable, ranging from 1.5 to 2.5 watts. At 40 Gb/s you can achieve the same kind of reach that you had at 10GBASE-SR (300m on OM3 and 400m on OM4) and there are built-in digital diagnostics functions including Tx and Rx power.

A 100G SWDM4 module has four lanes of 25.7 Gb/s operating in the wavelength range from 850-940 nm. Because it uses traditional NRZ modulation, it is readable by standard 25G test equipment. It uses a standard CAUI-4 electrical interface and fits into the same slot as a standard 100G QSFP28 SR4 module. Power dissipation can still be kept under 3.5 watts, the ceiling most users want to stay below for 100G QSFP28 modules. Full digital diagnostics are also built in. The reach is the same as 100GBASE-SR4: 70m on OM3 and 100m on OM4.
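The lane arithmetic above can be sanity-checked in a few lines of Python. This is a sketch: the exact lane rates of 10.3125 and 25.78125 Gb/s and the 64b/66b coding overhead are standard Ethernet figures assumed here, which the article rounds to 10.3 and 25.7.

```python
def aggregate_gbps(lanes, lane_rate_gbps):
    """Total line rate carried over one duplex fiber pair."""
    return lanes * lane_rate_gbps

# 40G SWDM4: 4 wavelengths x 10.3125 Gb/s -> 41.25 Gb/s line rate
print(aggregate_gbps(4, 10.3125))
# 100G SWDM4: 4 wavelengths x 25.78125 Gb/s -> 103.125 Gb/s line rate
print(aggregate_gbps(4, 25.78125))
# Stripping the 64b/66b line coding recovers the nominal 100 Gb/s payload
print(aggregate_gbps(4, 25.78125) * 64 / 66)
```

The point is that a "100G" module actually signals slightly faster than 100 Gb/s per the line coding, but still within the reach and power envelope described above.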



For “brownfield” data centers running 10 Gb/s over installed OM3 or OM4 fiber, SWDM is an attractive option because it allows data center managers to leverage their existing fiber plant. In contrast, SR4 would require four times the amount of fiber and the installation of ribbon fiber to the transceivers, while LR4 and CWDM4 would require the installation of single-mode fiber.

For “greenfield” installations, many data center managers prefer to stay with lower-cost VCSEL transceivers and use a duplex multimode fiber solution to avoid MPO connectors (except in trunks), due to loss and reliability concerns. To get the full advantage of SWDM technology in a new data center without a pre-existing fiber plant, you can install the new OM5 wideband multimode fiber (WBMMF), which is optimized for SWDM because it allows wavelengths up to 950nm to propagate farther; demonstrations have reached 500m at 40G and 450m at 100G.



In concert with the innovation in transceivers, the availability of a multi-rate fabric switch platform that can be tuned from 1 to 100 Gb/s switching speeds will provide much greater flexibility and control in high-density environments.

This allows data center networks to take advantage of a range of switching options for high-performance computing speeds. Some of the trends include:

  • Disaggregation of hardware and software at the switch level. Disaggregation at the server level has been going on for many years. When users are not locked into a vendor’s operating system (OS) and can upgrade with no hardware swap-out, they can take advantage of alternative solutions for fabrics, SDN, and management (OpEx and DevOps). This is true future-proofing: users can choose one OS today and another when their needs change, all while using the same hardware.

  • Switching ASIC capacity trends. This is important because it sets the limit on how much users can put in a 1U space — whether a 1U blade, a line card in a larger aggregation chassis, or a 1U top-of-rack switch — as well as influencing the total cost. At the end of 2015, solutions emerged with 32 ports of 100G in a 1U box, all powered by a single ASIC. Looking ahead, as new ASICs become available it will be possible to build switches with high-density 100G, 200G, and 400G ports, which will eventually bring costs down.

  • Switching silicon capacity is doubling every 24 to 36 months. In 2013-2014 it became possible to put 32 40G ports in a 1U box with a single ASIC. That was a significant step because it made 40G affordable. Prior to that, it was possible to buy a 32 40G port box, but internally there was a fabric of ASICs and that meant more latency, more power, and more cost.

10G is still the primary driver in the industry for servers, but 25G is coming on extremely strong and 100G is within sight. As new switches become available with higher capacities at lower price points, it becomes easier to step up to the next level of bandwidth. The SWDM duplex solution has the greatest potential to drive significant change and to make 400G affordable for the mass market.

The introduction of multi-rate switches means that it’s now possible to take a switch with 32 ports of 100G and run them all concurrently at 100G, or break out each port into multiple lower speeds: 4x1G, 4x10G, 4x25G, 2x50G, or 1x100G. It is no longer necessary to dedicate a 100G port to 100G only. Users can run 100G ports at lower speeds right up until they are ready to phase in full 100G capabilities.
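The breakout options described above are easy to model. This Python sketch uses an illustrative mode table, not any vendor's API:

```python
# Hypothetical breakout table for one 100G port on a multi-rate switch:
# mode name -> (number of logical ports, speed per port in Gb/s)
BREAKOUTS = {
    "1x100G": (1, 100),
    "2x50G":  (2, 50),
    "4x25G":  (4, 25),
    "4x10G":  (4, 10),
    "4x1G":   (4, 1),
}

def port_capacity_gbps(mode):
    """Aggregate capacity of one physical port in a given breakout mode."""
    lanes, speed_gbps = BREAKOUTS[mode]
    return lanes * speed_gbps

for mode in BREAKOUTS:
    print(mode, port_capacity_gbps(mode), "Gb/s")

# A 32-port 1U switch fully broken out to 4x25G yields 128 x 25G ports
print(32 * 4, "ports at 25G,", 32 * port_capacity_gbps("4x25G"), "Gb/s total")
```

Note that only the 4x25G and 1x100G modes use the port's full capacity; the 4x10G and 4x1G modes trade aggregate bandwidth for compatibility with slower servers.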

These multi-rate switches are built to support future-ready data center applications that require a range of switching rates in high-density environments, giving customers flexibility over the life of the switch.



The final piece of the SWDM solution is the multimode fiber itself. Up to this point, we have increased transmission speeds over fiber by reducing the amount of time allocated to each transmitted bit, which translates into ever more bits per second: up to 28 Gb/s in serial fashion over standardized multimode fiber. As we approach the transmission limit of standard non-return-to-zero (NRZ) signaling — where a single bit is represented by a single symbol — the industry is moving to a format called PAM-4. PAM-4 carries two bits per symbol instead of one, doubling the data rate without requiring twice the baud rate and enabling 50 Gb/s per lane.
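The NRZ-to-PAM-4 step comes down to one relation: data rate = symbol (baud) rate × log2(signal levels). A minimal sketch follows; the 26.5625 GBd baud figure for 50G PAM-4 lanes is a standard value assumed here, not taken from the article.

```python
import math

def data_rate_gbps(baud_gbd, levels):
    """Data rate from symbol rate (GBd) and number of modulation levels."""
    return baud_gbd * math.log2(levels)

# NRZ (2 levels) carries 1 bit/symbol; PAM-4 (4 levels) carries 2.
print(data_rate_gbps(26.5625, 2))  # NRZ at 26.5625 GBd -> 26.5625 Gb/s
print(data_rate_gbps(26.5625, 4))  # PAM-4 at the same baud -> 53.125 Gb/s
```

At the same symbol rate, PAM-4 doubles the bit rate, which is exactly the trick behind the 50 Gb/s lanes discussed here.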

And, of course, we are using wavelength division multiplexing. If you put four channels on a single fiber, each one carried by a different color — wavelength — of light, each fiber carries four times more information. As we mentioned earlier, SWDM technology can be used with legacy OM3 and OM4 fiber to bring data centers from 10 Gb/s to 40 Gb/s or 100 Gb/s. The only tradeoff has been in the cable lengths that can be supported as conventional OM3 and OM4 fibers are bandwidth-limited at wavelengths beyond 850nm.

In response to the emerging SWDM technology, fiber manufacturers tuned the manufacture of their multimode fiber to work over a broader spectrum. The resulting fiber is wide band multimode fiber (WBMMF), now dubbed OM5 by ISO/IEC. The goals for this new fiber were to deliver sufficient bandwidth over the full wavelength spectrum used by SWDM, to support at least 100 Gb/s — and to reach at least 100m.

Achieving this level of performance retains support for all the legacy applications operating at 850nm to the level of OM4, increases the capacity to greater than 100G per fiber, and reduces the number of parallel fibers required by a factor of four. If WBMMF is deployed in parallel, it will also boost the array cabling capacity for those parallel applications.

WBMMF enables new generations of 40G, 100G, 200G, 400G Ethernet, and the up-and-coming fibre channel speeds — 128G and 256G fibre channel. Effectively it increases the utility of multimode fiber as a universal communications medium in the data center.

Looking at the Application Evolution Roadmap, the advantages of SWDM are clear. With 10 Gb/s lanes, you can support 40 Gb/s with four lanes and 100 Gb/s with 10 lanes. Evolving those lanes to 25 Gb/s each allows 100 Gb/s to be delivered through four lanes instead of 10, and 16 lanes can deliver 400 Gb/s.

The industry is now standardizing 50 Gb/s lanes using PAM-4. That would allow the 100 Gb/s solution to drop down to two pairs of fibers and enable 200 Gb/s with four pairs. It will halve the number of pairs needed for 400 Gb/s down to eight pairs. Taking this one step further, with four wavelengths you can reduce 40 Gb/s and 100 Gb/s to a single pair, 200 Gb/s will also emerge in duplex form, and 400 Gb/s will be delivered in just two pairs.
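The pair counts in the roadmap above follow from simple division: lanes needed, divided by how many wavelengths each fiber carries. A sketch under those stated lane rates and wavelength counts:

```python
import math

def fiber_pairs(total_gbps, lane_gbps, wavelengths_per_fiber=1):
    """Duplex pairs needed when lanes are stacked as wavelengths."""
    lanes = math.ceil(total_gbps / lane_gbps)
    return math.ceil(lanes / wavelengths_per_fiber)

# Parallel fiber, one wavelength per fiber: 400G over 50G lanes
print(fiber_pairs(400, 50))      # 8 pairs
# SWDM with four wavelengths per fiber
print(fiber_pairs(100, 25, 4))   # 100G in a single duplex pair
print(fiber_pairs(400, 50, 4))   # 400G in two duplex pairs
```

The same function reproduces each step of the roadmap: faster lanes cut the lane count, and stacking wavelengths cuts the fiber count again.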

Note: with wavelength multiplexing, you don’t break the wavelengths out for individual channels. If you’re using these optics to gain access to high-density single lane ports — in other words, if you install a 40Gb/s port to get access to four 10Gb/s ports — you will still need parallel fiber, so it is important to decide what application you want to support. Of course, you can always put in wideband fiber in parallel, and get both the SWDM benefits as well as the breakout functionality.

The TR-42.11 Engineering Committee began work on a standard for wide band multimode fiber in 2014 and ANSI/TIA-492AAAE was published in June 2016. The wavelength range defined for wideband multimode fiber is critical to its performance. It was important that WBMMF support 850nm because that’s the wavelength of all the popular legacy applications, and also that the fiber support longer wavelengths to gain the benefits from the lower chromatic dispersion and attenuation inherent to the glass, and faster VCSEL performance.

Transceiver vendors said they needed to have at least 30nm of space between each of the wavelengths to support low-cost manufacturing tolerances, temperature variation, spectral width, and low-complexity filters.

TIA-42.11 put these requirements into the specification, and the group determined the shortest wavelength and the wavelength range. It also determined the required fiber bandwidth across the spectrum by applying Ethernet and fibre channel transmission models.

To understand the resulting WBMMF, it’s helpful to look at the two types of dispersion that limit the bandwidth performance in multimode fiber: modal dispersion and chromatic dispersion.

Figure 3 looks at bandwidth properties of an OM4 fiber. Modal bandwidth (the dashed blue line) peaks at 850nm at 4,700 MHz·km; this is the worst-case envelope for OM4. Chromatic bandwidth (the red dashed line) increases with longer wavelengths because chromatic dispersion is smaller there. The fiber’s total bandwidth (solid purple line) combines these two elements. There are benefits to operating at wavelengths longer than 850nm, toward the right side of the chart. With wideband fiber, the goal was to raise the total bandwidth through some combination of improved modal and chromatic bandwidths, so that the purple line stays above the dashed flat black line as it moves to the right. This tuning means that wideband fiber ensures considerably higher bandwidth at these longer wavelengths than OM4 can.
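A common engineering approximation (an assumption here, not stated in the article) is that modal and chromatic bandwidth combine as a root-sum-of-squares of their reciprocals, so the smaller term dominates the total. A sketch with illustrative numbers only:

```python
def total_bandwidth_mhz_km(bw_modal, bw_chromatic):
    """Approximate effective bandwidth from modal and chromatic terms,
    combined as (1/bw_total)^2 = (1/bw_modal)^2 + (1/bw_chromatic)^2."""
    return (bw_modal**-2 + bw_chromatic**-2) ** -0.5

# Illustrative: OM4-like 4,700 MHz·km modal bandwidth at 850nm paired
# with a much larger (assumed) chromatic bandwidth; the total sits just
# below the limiting modal figure, as the purple curve in Figure 3 shows.
print(round(total_bandwidth_mhz_km(4700, 20000)))
```

This is why improving either term alone has limited effect: the total only rises at longer wavelengths if modal bandwidth is held up there too, which is the tuning wideband fiber provides.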

Figures 4a and 4b show the performance differences between OM4 fiber and WBMMF. Compare the left plot at 850nm to the right plot at 980nm. The red lines for the two OM4 fibers move significantly up and to the right, indicating that transmission impairments have substantially increased at 980nm. The two WBMMFs, plotted in green, remain comparatively similar at 980nm to their 850nm performance, showing that they support a very useful range of wavelengths well. Compared to 850nm-optimized OM4 fiber, there is only a small penalty — less than 1 dB — for using the wideband fiber at 850nm, which is quite acceptable.

As mentioned earlier, the standard for wide band multimode fiber was approved in June. The fiber’s performance has been demonstrated for some time, showing that the system is quite robust. At OFC 2015, WBMMF was deployed with a non-integrated transceiver at 100 Gb/s, with each wavelength carrying 25 Gb/s over a wideband fiber channel consisting of three separate lengths of fiber totaling 225m. This ran error-free without the assistance of forward error correction (FEC), even though FEC is defined by all the standards at 25G and above. If forward error correction had been enabled, the distance would have been even longer. Later in 2015, using an integrated transceiver from Finisar, WBMMF supported 40 Gb/s to 500m and 100G to 300m, again without FEC.

In summary, the industry is clearly moving to shortwave multiplexing with the new VCSEL SWDM transceivers and multi-rate switches. By optimizing performance with wideband multimode fiber, while still retaining support for legacy applications at OM4 capability, we have the opportunity to seed these technologies as Ethernet and fibre channel continue to evolve. This combination of technologies will continue the legacy of delivering the lowest-cost optical solutions over the universal medium that is multimode fiber.