In recent years, two buzzwords have been on the rise: open networking and white box switches. The two often go hand in hand, and they are promoted by big names like Facebook and Microsoft.
On the software side, SONiC is perhaps the biggest player out there, as it powers Microsoft Azure’s cloud, while on the hardware side, Accton has arguably been one of the most important vendors.
The truth, though, at least in my opinion, is that while this innovation is great, it is not yet ready to be embraced by everyone. Only companies willing to make this “leap of faith” can take advantage of it, but what about us poor mortals? Are SONiC and white boxes ready to be widely deployed? Well, let’s take a look!
We will be deploying a simple VXLAN-EVPN fabric like the one in the picture below, and we will check how difficult it is to configure and troubleshoot the fabric, but also, and most importantly, whether this common Enterprise design actually works.

The Hardware
For our spines we’ll be using Edge-Core’s AS7816-64X, powered by Broadcom’s Tomahawk II chipset. This switch is a 2RU lean spine providing 64x 40/100 Gbps QSFP28 ports.
For the leafs, we’ll be using Edge-Core’s AS7326-56X, powered by Broadcom’s Trident III chipset. This switch is a 1RU TOR providing 48x 1/10/25 Gbps SFP28 and 8x 40/100 Gbps QSFP28 ports.
The Software
As for the software, we will be focusing on SONiC version 3.0.1.
This version introduces support for VXLAN-EVPN, among many other things, which in my opinion makes it ready for more widespread adoption.
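For reference, once an image is installed you can confirm which release a box is running with SONiC’s show version command; the hostname and output values below are purely illustrative, not taken from these exact switches:

    admin@sonic:~$ show version
    SONiC Software Version: SONiC.3.0.1
    Platform: x86_64-accton_as7326_56x-r0
    HwSKU: Accton-AS7326-56X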
The Architecture
Looking at SONiC’s features, we will try to implement the architecture below.
Some choices, though, like the use of a Virtual VTEP as opposed to EVPN Multi-homing, or ingress replication for BUM traffic, are dictated purely by what SONiC currently supports.
SPINE/LEAF POD
│
├── ENDPOINT ACCESS
│ │
│ └── MCLAG with Virtual VTEP (all NLRI advertised with VIP as NH)
├── UNDERLAY
│ │
│ ├── Routing
│ │ │
│ │ └── OSPF
│ └── EVPN NLRI exchange
│ │
│ ├── iBGP
│ └── Route reflection
│ │
│ └── Spine, Fabric
└── OVERLAY
│
├── L3 Gateway placement
│ │
│ └── Leafs
├── Distributed Anycast Gateway
│ │
│ └── Same IP-MAC
├── Service Interface
│ │
│ └── VLAN aware
└── Host communication
│
├── BUM traffic forwarding
│ │
│ └── Ingress Replication (EVPN Type 3)
├── Suppress ARP
│
└── Symmetric IRB
│
├── Inter-Subnet
│ │
│ └── L3 VNI
└── Intra-Subnet
│
└── L2 VNI
I won’t explain why I’ve chosen OSPF+iBGP; that is a discussion for another time. Suffice it to say that there is no reason to reinvent the wheel, as this design has worked perfectly for decades in the much more complex MPLS Service Provider space.
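Just to make the underlay/overlay split more concrete, here is a rough sketch of what this control plane could look like in FRR syntax (FRR being the routing stack shipped with SONiC). The AS number, router IDs, interface names and peer addresses are purely hypothetical, and the exact commands available depend on the SONiC/FRR build, so treat this as a sketch of the design rather than a working configuration:

    ! Leaf - underlay: OSPF area 0 advertising the loopback/VTEP address
    router ospf
     ospf router-id 10.0.0.11
     network 10.0.0.0/8 area 0
    !
    ! Leaf - overlay: iBGP EVPN session towards a spine route reflector
    router bgp 65000
     bgp router-id 10.0.0.11
     neighbor 10.0.0.1 remote-as 65000
     neighbor 10.0.0.1 update-source Loopback0
     address-family l2vpn evpn
      neighbor 10.0.0.1 activate
      advertise-all-vni
     exit-address-family
    !
    ! Spine - same iBGP session, but acting as route reflector
    router bgp 65000
     neighbor 10.0.0.11 remote-as 65000
     address-family l2vpn evpn
      neighbor 10.0.0.11 activate
      neighbor 10.0.0.11 route-reflector-client
     exit-address-family

The key point is that OSPF only has to provide reachability between the loopbacks/VTEPs, while all the EVPN NLRI (MAC/IP and prefix routes) ride on top of the iBGP sessions reflected by the spines.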
In short…
In this first post, I wanted to appeal to your curiosity and set expectations right.
Accton switches powered by Broadcom chipsets will be our white box switches, while SONiC will be our open source operating system.
In the next one, we will implement the above design using the SONiC CLI and try to make it work.
Spoiler alert… it works, but… well… the details are a lot more interesting…
How do I configure VRRP for L3 routing on peer switches?
I need to configure HA with two switches running SONiC OS.
Thanks for your article. Did you install the SONiC binary file from Jenkins, i.e. the one built from the SONiC GitHub project, or not?
Hi,
No, I used a pre-compiled binary. It was provided for testing by Broadcom.
Hi
How can I find that? I used the Jenkins-built SONiC binary from the GitHub project, but I’m missing some of the features that you use in part 2 and part 3!
Did you run SONiC on top of ONL with ONLP?
I installed SONiC as a pre-compiled binary using ONIE, so fortunately I didn’t need to go through source code, SDKs or other complicated stuff.
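For reference, installing over ONIE really boils down to a one-liner from the ONIE prompt; the image URL below is just a placeholder for wherever your binary is hosted:

    ONIE:/ # onie-nos-install http://<your-server>/sonic-broadcom.bin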