Show 126 – Plexxi & Affinity Networking With Marten Terpstra – Sponsored

Boston-area networking startup Plexxi parks their tour bus at the Packet Pushers studios for a chat with Ethan Banks and Greg Ferro. Plexxi’s Ethernet switch with an optical ring interconnect using WDM makes a highly meshed network possible without requiring core switches. Add Plexxi’s API and controller, and the Plexxi solution allows network architects to build affinities between systems and give them special, flexible treatment across the data center. The approach is genuinely out-of-the-box thinking, and to my mind, an outstanding example of what software defined networking really looks like.
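To make the affinity idea concrete: Plexxi's actual controller API isn't covered in these notes, so here is a purely hypothetical Python sketch of the concept as described in the show: a controller places named traffic affinities onto lambda paths, and an "isolation" affinity reserves a path exclusively for itself. All of the names here (`Controller`, `place`, the lambda labels) are illustrative assumptions, not Plexxi's API.

```python
# Hypothetical sketch of affinity-based path placement (NOT Plexxi's API).
# Paths stand in for WDM lambda channels between switches on the ring.

class Controller:
    def __init__(self, paths):
        self.paths = list(paths)
        self.placements = {}   # affinity name -> assigned path
        self.reserved = set()  # paths held exclusively by isolation affinities

    def place(self, affinity, isolated=False):
        """Assign a path to an affinity; isolated affinities get a path to themselves."""
        for path in self.paths:
            if path in self.reserved:
                continue  # never share a path reserved by an isolation affinity
            if isolated and path in self.placements.values():
                continue  # an isolation affinity needs a currently unused path
            self.placements[affinity] = path
            if isolated:
                self.reserved.add(path)
            return path
        raise RuntimeError("no eligible path available")

ctrl = Controller(["lambda-1", "lambda-2", "lambda-3"])
ctrl.place("bulk-backup")                # lands on lambda-1
ctrl.place("trading-db", isolated=True)  # takes lambda-2 exclusively
ctrl.place("web-tier")                   # happily shares lambda-1
```

The point of the sketch is only the placement rule: ordinary affinities may share capacity, while an isolation affinity pins its traffic to a path no other affinity can use.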

Marten Terpstra, Director of Product Management at Plexxi, brings the nerdery, talking through the Plexxi approach at both a technical and practical level.

What We Discuss

  • The Plexxi hardware solution.
  • Interconnecting Plexxi switches using WDM over an optical ring: the LightRail interface.
  • The concept of “Affinity Networking.”
  • The Plexxi software controller.
  • How a Plexxi network functions without a controller (yes, it still works).
  • Understanding the Plexxi API.
  • How traffic is forwarded through a Plexxi domain.
  • MLAG support and other practical matters.
  • Use cases.


About Ethan Banks

Ethan Banks, CCIE #20655, is a hands-on networking practitioner who has designed, built, and maintained networks for higher education, state government, financial institutions, and technology corporations. Ethan is a host of the Packet Pushers Podcast, which has seen over one million unique downloads and today reaches a global audience of over ten thousand listeners. Also a writer, Ethan covers network engineering and the networking industry for a variety of IT publications. He is also the editor for an independent community of bloggers. Follow him at @ecbanks.

  • fstein

Great work PP and Plexxi. Thanks. Seems like dedicated lambdas would be a great security feature. Could you please elaborate?

    • Marten Terpstra

Lambdas are used as our dynamic mechanism to create switch-to-switch connectivity. On top of those topologies, our controller places traffic based on needs/sensitivities/affinities. One such affinity is what we term an “isolation” affinity, which keeps traffic defined by that affinity on a separate path from other traffic.

      • fstein

Thanks. Appreciate getting the details. If I may recommend, take a deeper dive into security in your outbound communications. It seems superficially true that reducing complexity increases security, but there may be much more to learn about security with Plexxi versus alternative data center networking approaches.

        • Marten Terpstra

          Thank you, will take that feedback and look to see if we can create a white paper on the topic.

  • caskings

If I understand the details correctly, you have put a ROADM into the Plexxi switch, and the controller manages its configuration?

  • Juno Guy

    great intro session…

The concept of Affinity is very interesting. I am, however, trying to understand whether the controller already has knowledge of applications programmed into it, such that it knows, for example, how much bandwidth a given application needs, or whether there is still a user element involved: someone who already understands the application, knows the bandwidth it requires, and configures the controller with those constraints, so that the controller can then choose the best path and program the switches?

  • Alexei Monastyrnyi

    Thanks for the show. I reckon not all questions were asked though.

1. Infrastructure availability and redundancy. With a ring topology, from the switch’s perspective, how does the switch/central controller behave if one leg goes down?
2. Ring topology vs. star topology. I reckon it would be a more robust solution if each top-of-rack switch were dual-connected to a pair of central passive optical multiplexing systems instead of to one neighbour on the right and one on the left. Even though it may sound less scalable in terms of the number of edge switches that can be connected to the multiplexer, it would provide a true full mesh with more redundancy, since only one switch is affected, and only if both of its links fail. With the ring topology, up to five neighbours on each side may be affected, and the controller would have to recalculate all the paths. Also, with DWDM multiplexing you can space up to 160 lambdas, so a star topology could support up to 150 fully meshed edge switches, give or take.
3. What are the scalability limits? 100 switches per ring, or 1000?
4. If the scalability limits are reached, what are the options for building two rings and interconnecting them? It seems that this can currently only be done via traditional ports.
5. What are the loop prevention mechanisms for both optimized and residual traffic?
6. Is the central controller managing edge switches via an OOB network?
7. L3 flow management is yet to be implemented. But one should go beyond L3 and do flow management for at least L4, or else there are serious implementation limitations with regard to application hosting.
8. Have any tests been done with a distributed model, such as two or more data centres participating in the same ring?

    Hope some of those questions make sense. :-)

  • Brent Salisbury

    Fantastic rundown. All three of you did a killer job articulating the Plexxi architecture. Mind is racing with the possibilities. GMPLS comes to mind.