
OSDN – Optical Software Defined Networking: What Does It Mean for Optics Folks?

Buzz is always a good thing, especially when it promotes a new technology concept. Remember when you signed up for every conference or seminar that had the words “Cloud” or “Data Center” in it? Now the new word (actually, abbreviation) is “SDN”! Okay, so I just made up the term OSDN in the blog title – not to be confused with ISDN… remember that? Let’s see how quickly the search engines pick up on it. Seriously, it all started with OpenFlow, which became generalized into what is now known as Software Defined Networking (SDN). The concept is quite powerful in the data center for optimizing traffic flow at the IP layer, especially when dealing with large data sets. The question for us optics folks is: how does SDN apply to us?

Years and years ago, I remember the rise of GMPLS (Generalized Multi-Protocol Label Switching), a standard in which an optical switching layer would be controlled by an intelligent management/control layer. Sound familiar? I think it is very clear that the SDN concept can be applied to the optical layer, and many others agree, based on the multiple press releases I have read on this topic:
• ADVA Tries OpenFlow
• Optical Transport Gets an SDN Idea
• Coriant Boasts ‘SDN-Plus’ System

I recently asked my colleague, Craig Thompson, Director of Strategic Marketing at Finisar, why SDN is not the same as GMPLS. Here is a snapshot of our conversation:

EUGENE: Craig, is SDN, especially when applied to the optical layer, just GMPLS reincarnated?

CRAIG: As you allude to, in principle the concept of SDN, specifically the separation of data and control planes, is not new. But the (almost) standard application of separate control and data planes in both packet and optical circuit switch equipment is the most exciting aspect of SDN. The telecom market has attempted for many years to embrace ‘standard’ ways of switching large data flows across multiple networks and multi-vendor equipment, with mixed success. GMPLS was largely confined to a protocol and left the actual setup and control of the circuits to vendors and the network operators they worked closely with.

EUGENE: So SDN goes further because those who manage data centers are now handed what is essentially an open-standards-based toolkit.

CRAIG: Absolutely. Fast-forward to today: the proliferation of mega-datacenters, IP/Ethernet services in the WAN, and even enterprise-owned and -managed WANs has driven investment in ‘Software Defined Networks’ to address a much larger opportunity. The stakes are much higher to unlock the tight integration of vendor equipment and operating system that exists in Layer 2/3 packet switches and routers, and the incumbent players know it. SDN is not just about giving the network administrator more and better access to program his/her network down to the individual flow. SDN is about throwing out the old model of proprietary software on top of proprietary hardware: separating the two, making the software more “standard” (at least in basic rules, commands, and operation), and giving the network administrator much more freedom. The billion-dollar question is whether this leads to more commoditized systems hardware or, alternatively, to even more specialized hardware that differentiates one switching solution from another.

EUGENE: Thanks for your very insightful comments.

There’s definitely excitement around this concept of OSDN. We are nowhere near the general use of optical packet switching (where OSDN would become extremely important), but there is a role for SDN at the optical layer when large pipes (400G+) need some kind of intelligent routing mechanism, especially at the transport layer between data center sites. And if SDN is already implemented at the IP layer, it doesn’t take much effort to extend it to the optical layer.

I’m interested in hearing any thoughts our readers may have on this topic.

Finisar WSS WHITE PAPER: Balancing Performance, Flexibility, and Scalability in Optical Networks

The availability of Wavelength Selective Switches (WSS) supporting 100 Gb/s and 400 Gb/s data rates enables network operators to significantly increase bandwidth capacity in DWDM optical networks with substantial CAPEX and OPEX savings. Moving to such higher data rates, however, requires reversing the long-running trend toward narrower optical channel spacing, since data rates beyond 100 Gb/s cannot fit within a 50 GHz channel…

Download Finisar’s latest WSS white paper from our website (see blue downloads box): Balancing Performance, Flexibility, and Scalability in Optical Networks

ECOC Tradeshow Recap and the Importance of Flexgrid™


Just got back from the ECOC trade show or, as I joked at our customer event on Tuesday night, the ECDC trade show – the “European Colorless Directionless Contentionless” Conference.

Despite losing my suitcase and having to rush out to buy some of the latest Italian fashions (can you say slim fit?), it was a good industry show.

Finisar was no stranger to the ROADM architecture theme with our new Flexgrid™ technology demo. Flexgrid is a WSS software feature that we believe will be critically important to carriers in future ROADM deployments.

As described in our recent press release, Flexgrid™ WSS technology enables dynamic control of channel center frequency and channel bandwidth within a WSS, from 50 GHz to 200 GHz in 12.5 GHz steps, with no penalty on any aspect of WSS performance. Flexgrid™ draws upon the inherent flexibility and performance of Finisar’s Liquid Crystal on Silicon (LCoS) optical engine which we believe will address carrier demands for flexible bandwidth-capable ROADMs in next-generation networks. LCoS technology enables a WSS to be very flexible with many features including the ability to optimize (or contour) the channel shape of each individual wavelength and provide configurable dispersion compensation for best transmission performance.
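
To make that granularity concrete, here is a minimal sketch of the channel-width rule described above (this is not Finisar's actual control interface; the function name and range check are illustrative assumptions only):

```python
# Hypothetical illustration of the Flexgrid channel-width rule described
# above: widths from 50 GHz to 200 GHz, settable in 12.5 GHz steps.
# NOT Finisar's API; names and checks are assumptions for clarity.

STEP_GHZ = 12.5
MIN_WIDTH_GHZ = 50.0
MAX_WIDTH_GHZ = 200.0

def is_valid_flexgrid_width(width_ghz: float) -> bool:
    """Return True if the requested channel width fits the Flexgrid rule."""
    if not (MIN_WIDTH_GHZ <= width_ghz <= MAX_WIDTH_GHZ):
        return False
    # The width must be an integer number of 12.5 GHz steps.
    steps = width_ghz / STEP_GHZ
    return abs(steps - round(steps)) < 1e-9

# Example: the 87.5 GHz channel used for 400GE later in this post.
print(is_valid_flexgrid_width(87.5))  # True  (7 steps of 12.5 GHz)
print(is_valid_flexgrid_width(85.0))  # False (not a multiple of 12.5)
```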

Flexgrid becomes very interesting when one looks to the next Ethernet data rate, as the following example shows.

If you assume the next data rate beyond 100G will be 400G, then:

Obviously 400G will require an advanced modulation format, so if you assume:
• Modulators and electronics are limited to 30 Gsymbols/s
• 16-QAM gives 4 bits/symbol
• Two polarizations x two wavelengths give another factor of four
• FEC adds 20% overhead
• 30 x 4 x 2 x 2 = 480 Gb/s on the line; removing the 20% FEC overhead (480 / 1.2) leaves 400 Gb/s of Ethernet payload
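
As a quick sanity check on that arithmetic, here is a minimal Python sketch of the same budget (variable names are mine, purely for illustration):

```python
# Worked line-rate budget for the assumed 400G format described above.
symbol_rate_gbaud = 30   # modulator/electronics limit, Gsymbols/s
bits_per_symbol = 4      # 16-QAM
polarizations = 2
wavelengths = 2
fec_overhead = 0.20      # FEC adds 20% overhead on top of the payload

line_rate = symbol_rate_gbaud * bits_per_symbol * polarizations * wavelengths
payload = line_rate / (1 + fec_overhead)

print(f"line rate: {line_rate} Gb/s")    # 480 Gb/s
print(f"payload:   {payload:.0f} Gb/s")  # 400 Gb/s of Ethernet
```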

Based on this 16-QAM, 30 Gbaud, dual-polarization modulation format:
• 60 GHz is needed for the signal (two wavelengths at roughly 30 GHz each)
• Add 10 GHz channel boundaries on each side of the signal
• Allow 3-5 GHz between signals (including laser drift)

This means the bandwidth required to transport a 400GE signal is somewhere around 85 GHz.
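
For the record, a quick sketch of that budget (the guard band is taken as 4 GHz, in the middle of the 3-5 GHz range listed above):

```python
# Rough spectral-width budget for one 400GE channel, per the list above.
signal_width = 2 * 30  # two wavelengths at ~30 GHz each -> 60 GHz
boundaries = 2 * 10    # 10 GHz channel boundary on each side -> 20 GHz
guard = 4              # 3-5 GHz between signals (incl. laser drift)

total = signal_width + boundaries + guard
print(f"required width: ~{total} GHz")  # ~84 GHz, i.e. "around 85 GHz"
```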
One way to transport this would be to use a 100 GHz grid, since 85 GHz fits within a 100 GHz channel. However, moving to 100 GHz channel spacing to accommodate the 400G channel means that the carrying efficiency of the fiber actually drops for other (lower-bandwidth) signals that would normally be carried on 50 GHz channels. Since signal heterogeneity is likely to be a feature of future networks, and carriers are looking to maximize fiber carrying capacity (and hence minimize cost/Mbit/km), simply returning to a 100 GHz grid is clearly not an option.

Furthermore, the active components associated with transporting 400GE (say, using a 16-QAM dual-polarization modulation format) are likely to be expensive. Thus, we would spend a very significant amount on optics to increase the throughput of a given fiber in a way that does not optimize spectral bandwidth. However, if a network operator has already deployed a ROADM using Flexgrid™ technology, they could set that specific wavelength to an 87.5 GHz channel with a very simple software command. Voila: a spectrally efficient 400GE wavelength that increases the spectral efficiency of the fiber by a factor of 2.3 (a 130% increase, as opposed to the 100% increase using 100 GHz channels), while still providing efficient bandwidth allocation for other types of traffic and hence maximizing the carrying capacity of a given fiber route. This should ultimately also make the cost of upgrading to 400GE significantly lower for a carrier.
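
To see where the factor of 2.3 comes from, here is a short sketch (a back-of-the-envelope check using this post's numbers, not a planning tool):

```python
import math

# ~85 GHz rounded up to the 12.5 GHz Flexgrid granularity:
channel = math.ceil(85 / 12.5) * 12.5
print(f"Flexgrid channel: {channel} GHz")  # 87.5 GHz (7 x 12.5 GHz slots)

# Spectral efficiency (Gb/s per GHz) relative to 100G on a 50 GHz grid:
legacy = 100 / 50            # 2.0
flexgrid_400 = 400 / channel # ~4.57
fixed_400 = 400 / 100        # 4.0 if 400G is forced onto a 100 GHz grid

print(f"Flexgrid:     x{flexgrid_400 / legacy:.1f}")  # ~2.3 (130% increase)
print(f"100 GHz grid: x{fixed_400 / legacy:.1f}")     # 2.0 (100% increase)
```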

People ask why we should worry about deploying ROADMs for 400GE when, as an industry, we are just starting to deliver 100GE. Well, as Glenn Wellbrock at Verizon stated, “we like to deploy our ROADM equipment for 10 years”. If you assume that 400GE will likely start to ship in the next 5 years (pretty likely, considering that the standards effort for 100GE started in 2006 and now, 4 years later, we are shipping 100GE), then not deploying Flexgrid would be very short-sighted. It would mean that in 5 years’ time, a carrier would have to rip out old ROADMs to support 400GE – a very expensive proposition.

Any comments are welcome!