As discussed in part 1, we are trying to configure a VXLAN-EVPN fabric using SONiC on white box switches in order to determine whether Open Networking is ready to be deployed in most enterprise DCs.
As a quick recap, below is the topology we are trying to bring online:

Getting familiar with the OS
The most interesting thing about SONiC is its architecture!
I’ll write a blog post just about it because it’s a fascinating topic, but in short, every single process lives inside a dedicated container.
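You can see this for yourself by listing the running containers straight from the Linux shell. The snippet below is an illustrative, abbreviated sketch (container names and IDs will differ between builds and releases):

admin@SONIC-Leaf301:~$ docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Status}}"   # illustrative, abbreviated output below
NAMES      IMAGE                                    STATUS
bgp        docker-fpm-frr:latest                    Up 12 days
swss       docker-swss-brcm-ent-advanced:latest     Up 12 days
syncd      docker-syncd-brcm-ent-advanced:latest    Up 12 days
teamd      docker-teamd:latest                      Up 12 days
lldp       docker-lldp-sv2:latest                   Up 12 days
database   docker-database:latest                   Up 12 days
...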
Linux SONIC-Leaf301 4.9.0-11-2-amd64 #1 SMP Debian 4.9.189-3+deb9u2 (2019-11-11) x86_64
You are on
  ____   ___  _   _ _  ____
 / ___| / _ \| \ | (_)/ ___|
 \___ \| | | |  \| | | |
  ___) || |\  | | |___
 |____/ \___/|_| \_|_|\____|

-- Software for Open Networking in the Cloud --

Unauthorized access and/or use are prohibited.
All access and/or use are subject to monitoring.

Help: http://azure.github.io/SONiC/

Last login: Thu Apr 20 12:52:21 2017 from 192.168.0.31

admin@SONIC-Leaf301:~$ show version

SONiC Software Version: SONiC-OS-3.0.1-Enterprise_Advanced
Product: Enterprise Advanced SONiC OS - Powered by Broadcom
Distribution: Debian 9.12
Kernel: 4.9.0-11-2-amd64
Build commit: aff85dcf1
Build date: Mon Apr 20 09:48:32 UTC 2020
Built by: sonicbld@sonic-lvn-csg-004
Platform: x86_64-accton_as7326_56x-r0
HwSKU: Accton-AS7326-56X
ASIC: broadcom
Serial Number: 732656X1916020
Uptime: 13:04:04 up 12 days, 29 min, 1 user, load average: 2.14, 2.46, 2.42

Docker images:
REPOSITORY                       TAG                         IMAGE ID       SIZE
docker-sonic-telemetry          3.0.1-Enterprise_Advanced   15838a6dd8b6   397MB
docker-sonic-telemetry          latest                      15838a6dd8b6   397MB
docker-sonic-mgmt-framework     3.0.1-Enterprise_Advanced   c30542f39b9c   445MB
docker-sonic-mgmt-framework     latest                      c30542f39b9c   445MB
docker-swss-brcm-ent-advanced   3.0.1-Enterprise_Advanced   7db851611618   338MB
docker-swss-brcm-ent-advanced   latest                      7db851611618   338MB
docker-broadview                3.0.1-Enterprise_Advanced   547641c0c886   330MB
docker-broadview                latest                      547641c0c886   330MB
docker-nat                      3.0.1-Enterprise_Advanced   b819906b8bb6   320MB
docker-nat                      latest                      b819906b8bb6   320MB
docker-vrrp                     3.0.1-Enterprise_Advanced   2f0615d57ea4   333MB
docker-vrrp                     latest                      2f0615d57ea4   333MB
docker-teamd                    3.0.1-Enterprise_Advanced   3487700dc8d2   318MB
docker-teamd                    latest                      3487700dc8d2   318MB
docker-fpm-frr                  3.0.1-Enterprise_Advanced   bf6e7649147e   367MB
docker-fpm-frr                  latest                      bf6e7649147e   367MB
docker-iccpd                    3.0.1-Enterprise_Advanced   1c24858c993b   320MB
docker-iccpd                    latest                      1c24858c993b   320MB
docker-l2mcd                    3.0.1-Enterprise_Advanced   b0f6db69227b   319MB
docker-l2mcd                    latest                      b0f6db69227b   319MB
docker-stp                      3.0.1-Enterprise_Advanced   c812baaadda5   316MB
docker-stp                      latest                      c812baaadda5   316MB
docker-udld                     3.0.1-Enterprise_Advanced   66c2afbe849a   316MB
docker-udld                     latest                      66c2afbe849a   316MB
docker-sflow                    3.0.1-Enterprise_Advanced   9cf4e8a00ff9   318MB
docker-sflow                    latest                      9cf4e8a00ff9   318MB
docker-dhcp-relay               3.0.1-Enterprise_Advanced   5217cd436c40   326MB
docker-dhcp-relay               latest                      5217cd436c40   326MB
docker-syncd-brcm-ent-advanced  3.0.1-Enterprise_Advanced   800a3fc3af8b   439MB
docker-syncd-brcm-ent-advanced  latest                      800a3fc3af8b   439MB
docker-lldp-sv2                 3.0.1-Enterprise_Advanced   3a2e52d444f9   309MB
docker-lldp-sv2                 latest                      3a2e52d444f9   309MB
docker-snmp-sv2                 3.0.1-Enterprise_Advanced   d5a8e1d0ba7d   342MB
docker-snmp-sv2                 latest                      d5a8e1d0ba7d   342MB
docker-tam                      3.0.1-Enterprise_Advanced   272eabe18352   361MB
docker-tam                      latest                      272eabe18352   361MB
docker-pde                      3.0.1-Enterprise_Advanced   6ff2567c42b8   495MB
docker-pde                      latest                      6ff2567c42b8   495MB
docker-platform-monitor         3.0.1-Enterprise_Advanced   0b22d6abcd9a   367MB
docker-platform-monitor         latest                      0b22d6abcd9a   367MB
docker-router-advertiser        3.0.1-Enterprise_Advanced   9d201b15eae3   288MB
docker-router-advertiser        latest                      9d201b15eae3   288MB
docker-database                 3.0.1-Enterprise_Advanced   fb46e0661772   288MB
docker-database                 latest                      fb46e0661772   288MB
Below is the list of interfaces on my leaf. Notice how the naming of these interfaces can be confusing, specifically for the ones that can be channelised (like 40/100Gbps interfaces, which support breakout). The first channel is used as the interface number, as with interface Ethernet48; if that interface is then broken out, the other channels are listed as Ethernet49, 50 and 51, making the next physical interface Ethernet52.
Interface aliases are really interesting; unfortunately, they currently act more like a description, and even after switching to “alias” as the default interface naming mode, the alias is used in so few places that it is pretty much useless as of now.
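To illustrate that naming scheme, here is a hypothetical sketch (not output from my switch) of what the interface list would show if Ethernet48 were broken out into 4x25G:

Ethernet48   77   25G   9100   ...   <- first channel keeps the primary number
Ethernet49   78   25G   9100   ...
Ethernet50   79   25G   9100   ...
Ethernet51   80   25G   9100   ...
Ethernet52   85,86,87,88   100G   9100   ...   <- next physical port is still Ethernet52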
admin@SONIC-Leaf301:~$ show interfaces status
  Interface   Lanes             Speed   MTU    Alias              Vlan     Oper   Admin   Type              Asym PFC
-----------   ---------------   -----   ----   ----------------   ------   ----   -----   ---------------   --------
Ethernet0     3                 25G     9100   twentyFiveGigE1    routed   down   down    SFP/SFP+/SFP28    N/A
Ethernet1     2                 25G     9100   twentyFiveGigE2    routed   down   down    SFP/SFP+/SFP28    N/A
Ethernet2     4                 25G     9100   twentyFiveGigE3    routed   down   down    N/A               N/A
Ethernet3     8                 25G     9100   twentyFiveGigE4    routed   down   down    N/A               N/A
Ethernet4     7                 25G     9100   twentyFiveGigE5    routed   down   down    N/A               N/A
Ethernet5     1                 25G     9100   twentyFiveGigE6    routed   down   down    N/A               N/A
Ethernet6     5                 25G     9100   twentyFiveGigE7    routed   down   down    N/A               N/A
Ethernet7     16                25G     9100   twentyFiveGigE8    routed   down   down    N/A               N/A
Ethernet8     6                 25G     9100   twentyFiveGigE9    routed   down   down    N/A               N/A
Ethernet9     14                25G     9100   twentyFiveGigE10   routed   down   down    SFP/SFP+/SFP28    N/A
Ethernet10    13                25G     9100   twentyFiveGigE11   routed   down   down    N/A               N/A
Ethernet11    15                25G     9100   twentyFiveGigE12   routed   down   down    N/A               N/A
Ethernet12    23                25G     9100   twentyFiveGigE13   routed   down   down    N/A               N/A
Ethernet13    22                25G     9100   twentyFiveGigE14   routed   down   down    N/A               N/A
Ethernet14    24                25G     9100   twentyFiveGigE15   routed   down   down    N/A               N/A
Ethernet15    32                25G     9100   twentyFiveGigE16   routed   down   down    N/A               N/A
Ethernet16    31                25G     9100   twentyFiveGigE17   routed   down   down    N/A               N/A
Ethernet17    21                25G     9100   twentyFiveGigE18   routed   down   down    N/A               N/A
Ethernet18    29                25G     9100   twentyFiveGigE19   routed   down   down    N/A               N/A
Ethernet19    36                25G     9100   twentyFiveGigE20   routed   down   down    N/A               N/A
Ethernet20    30                25G     9100   twentyFiveGigE21   routed   down   down    N/A               N/A
Ethernet21    34                25G     9100   twentyFiveGigE22   routed   down   down    N/A               N/A
Ethernet22    33                25G     9100   twentyFiveGigE23   routed   down   down    N/A               N/A
Ethernet23    35                25G     9100   twentyFiveGigE24   routed   down   down    N/A               N/A
Ethernet24    43                25G     9100   twentyFiveGigE25   routed   down   down    N/A               N/A
Ethernet25    42                25G     9100   twentyFiveGigE26   routed   down   down    N/A               N/A
Ethernet26    44                25G     9100   twentyFiveGigE27   routed   down   down    N/A               N/A
Ethernet27    52                25G     9100   twentyFiveGigE28   routed   down   down    N/A               N/A
Ethernet28    51                25G     9100   twentyFiveGigE29   routed   down   down    N/A               N/A
Ethernet29    41                25G     9100   twentyFiveGigE30   routed   down   down    N/A               N/A
Ethernet30    49                25G     9100   twentyFiveGigE31   routed   down   down    N/A               N/A
Ethernet31    60                25G     9100   twentyFiveGigE32   routed   down   down    N/A               N/A
Ethernet32    50                25G     9100   twentyFiveGigE33   routed   down   down    N/A               N/A
Ethernet33    58                25G     9100   twentyFiveGigE34   routed   down   down    N/A               N/A
Ethernet34    57                25G     9100   twentyFiveGigE35   routed   down   down    N/A               N/A
Ethernet35    59                25G     9100   twentyFiveGigE36   routed   down   down    N/A               N/A
Ethernet36    62                25G     9100   twentyFiveGigE37   routed   down   down    N/A               N/A
Ethernet37    63                25G     9100   twentyFiveGigE38   routed   down   down    N/A               N/A
Ethernet38    64                25G     9100   twentyFiveGigE39   routed   down   down    N/A               N/A
Ethernet39    65                25G     9100   twentyFiveGigE40   routed   down   down    N/A               N/A
Ethernet40    66                25G     9100   twentyFiveGigE41   routed   down   down    N/A               N/A
Ethernet41    61                25G     9100   twentyFiveGigE42   routed   down   down    N/A               N/A
Ethernet42    68                25G     9100   twentyFiveGigE43   routed   down   down    N/A               N/A
Ethernet43    69                25G     9100   twentyFiveGigE44   routed   down   down    N/A               N/A
Ethernet44    67                25G     9100   twentyFiveGigE45   routed   down   down    N/A               N/A
Ethernet45    71                25G     9100   twentyFiveGigE46   routed   down   down    N/A               N/A
Ethernet46    72                25G     9100   twentyFiveGigE47   routed   down   down    N/A               N/A
Ethernet47    70                25G     9100   twentyFiveGigE48   routed   down   down    N/A               N/A
Ethernet48    77,78,79,80       100G    9100   hundredGigE49      routed   down   down    QSFP28 or later   N/A
Ethernet52    85,86,87,88       100G    9100   hundredGigE50      routed   down   down    QSFP28 or later   N/A
Ethernet56    93,94,95,96       100G    9100   hundredGigE51      routed   down   down    N/A               N/A
Ethernet60    97,98,99,100      100G    9100   hundredGigE52      routed   down   down    N/A               N/A
Ethernet64    105,106,107,108   100G    9100   hundredGigE53      routed   down   down    N/A               N/A
Ethernet68    113,114,115,116   100G    9100   hundredGigE54      routed   down   down    N/A               N/A
Ethernet72    121,122,123,124   100G    9100   hundredGigE55      routed   down   down    QSFP28 or later   N/A
Ethernet76    125,126,127,128   100G    9100   hundredGigE56      routed   down   down    QSFP28 or later   N/A
Ethernet80    129               10G     9100   mgmtTenGigE57      routed   down   down    N/A               N/A
Ethernet81    128               10G     9100   mgmtTenGigE58      routed   down   down    N/A               N/A
Configuring the Underlay Routing
I’m a big fan of automation and configuration simplicity. I strongly believe that if I can automate with “notepad” using blind copy/paste, I have good templates for fancier automation. For this reason, I really think that unnumbered interfaces are a great way to configure spine/leaf links.
The first step, then, is to configure all fabric interfaces with a proper MTU and IP unnumbered, as in the example below. Please note that this post isn’t meant to be a full configuration tutorial.
config loopback add Loopback0
config interface ip add Loopback0 10.0.0.1/32
config interface ip unnumbered add Ethernet120 Loopback0
config interface mtu Ethernet120 9216
config interface startup Ethernet120
... Repeat for all interfaces facing a leaf ...
config save -y
A leaf switch would be configured in exactly the same way, but I also need to add a second loopback interface to be used as the VTEP source interface. As this loopback will act as the MC-LAG anycast VTEP IP, both leafs in an MC-LAG pair will have the exact same IP on their Loopback1.
config loopback add Loopback0
config interface ip add Loopback0 10.0.0.11/32
config loopback add Loopback1
config interface ip add Loopback1 11.11.11.111/32
config interface ip unnumbered add Ethernet72 Loopback0
config interface mtu Ethernet72 9216
config interface description Ethernet72 "LINK_TO_SPINE_1"
config interface startup Ethernet72
config interface ip unnumbered add Ethernet76 Loopback0
config interface mtu Ethernet76 9216
config interface description Ethernet76 "LINK_TO_SPINE_2"
config interface startup Ethernet76
config save -y
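For completeness, the MC-LAG peer (Leaf302 in my topology) would mirror this with its own Loopback0 address but the very same Loopback1 anycast IP. This is just a sketch; I'm assuming the peer uses the same interface numbers towards the spines:

config loopback add Loopback0
config interface ip add Loopback0 10.0.0.12/32      # unique per switch
config loopback add Loopback1
config interface ip add Loopback1 11.11.11.111/32   # identical on both MC-LAG peers
config interface ip unnumbered add Ethernet72 Loopback0
config interface ip unnumbered add Ethernet76 Loopback0
config save -y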
At this point we need to configure OSPF between leafs and spines.
Unfortunately, advanced routing configuration can only be applied inside the FRR container, so we first need to switch to the FRR shell using the command “vtysh”. From there on, there is really almost no difference from the well-known Cisco-like CLI.
The biggest downside of this lack of integration is that the FRR config needs to be saved separately from the rest of SONiC’s config, and we also need to tell SONiC to look for the routing config in a different place from the rest. To do that, we need to apply the “config routing_config_mode split” command and, most importantly, reboot the box as the warning message will tell you. Failure to do so will cause the switch to lose the FRR config in case of a reload.
vtysh
conf t
!
bfd
!
router ospf
 ospf router-id 10.0.0.11
 log-adjacency-changes
 auto-cost reference-bandwidth 100000
!
interface Ethernet72
 ip ospf area 0.0.0.1
 ip ospf bfd
 ip ospf network point-to-point
!
interface Ethernet76
 ip ospf area 0.0.0.1
 ip ospf bfd
 ip ospf network point-to-point
!
interface Loopback0
 ip ospf area 0.0.0.1
!
interface Loopback1
 ip ospf area 0.0.0.1
end
write memory
exit
config routing_config_mode split
config save -y
Once everything is configured, from FRR we can check our routing:
SONIC-Leaf301# show ip ospf neighbor

Neighbor ID   Pri  State         Dead Time  Address   Interface             RXmtL  RqstL  DBsmL
10.0.0.1        1  Full/DROther    33.775s  10.0.0.1  Ethernet72:10.0.0.11      0      0      0
10.0.0.2        1  Full/DROther    33.968s  10.0.0.2  Ethernet76:10.0.0.11      0      0      0

SONIC-Leaf301# show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued route, r - rejected route, # - not installed in hardware

O>* 10.0.0.1/32 [110/11] via 10.0.0.1, Ethernet72 onlink, 00:03:08
O>* 10.0.0.2/32 [110/11] via 10.0.0.2, Ethernet76 onlink, 00:03:18
C * 10.0.0.11/32 is directly connected, Ethernet76, 00:05:08
C * 10.0.0.11/32 is directly connected, Ethernet72, 00:05:08
O   10.0.0.11/32 [110/10] via 0.0.0.0, Loopback0 onlink, 00:05:14
C>* 10.0.0.11/32 is directly connected, Loopback0, 00:05:15
O>* 10.0.0.12/32 [110/12] via 10.0.0.1, Ethernet72 onlink, 00:03:08
  *                       via 10.0.0.2, Ethernet76 onlink, 00:03:08
O>* 10.0.0.13/32 [110/12] via 10.0.0.1, Ethernet72 onlink, 00:03:08
  *                       via 10.0.0.2, Ethernet76 onlink, 00:03:08
O>* 10.0.0.14/32 [110/12] via 10.0.0.1, Ethernet72 onlink, 00:03:08
  *                       via 10.0.0.2, Ethernet76 onlink, 00:03:08
O>* 10.10.10.2/31 [110/12] via 10.0.0.1, Ethernet72 onlink, 00:03:08
  *                        via 10.0.0.2, Ethernet76 onlink, 00:03:08
O   11.11.11.111/32 [110/10] via 0.0.0.0, Loopback1 onlink, 00:05:14
C>* 11.11.11.111/32 is directly connected, Loopback1, 00:05:15
O>* 11.11.11.113/32 [110/12] via 10.0.0.1, Ethernet72 onlink, 00:03:08
  *                          via 10.0.0.2, Ethernet76 onlink, 00:03:08

SONIC-Leaf301# ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.186 ms
^C
--- 10.0.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1011ms
rtt min/avg/max/mdev = 0.186/0.214/0.243/0.032 ms

SONIC-Leaf301# ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms
^C
--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms

admin@SONIC-Leaf301:~$ traceroute 10.0.0.13
traceroute to 10.0.0.13 (10.0.0.13), 30 hops max, 60 byte packets
 1  10.0.0.1 (10.0.0.1)  0.218 ms  10.0.0.2 (10.0.0.2)  0.178 ms  10.0.0.1 (10.0.0.1)  0.126 ms
 2  10.0.0.13 (10.0.0.13)  0.439 ms  0.461 ms  0.468 ms
Now that every loopback is reachable (and we also see ECMP across the two spines), it’s time to configure MC-LAG between our leafs, as well as underlay routing across the peer-link. This step can only be done successfully now, because the MC-LAG peer has to be reachable via the fabric.
config portchannel add PortChannel1
config interface mtu PortChannel1 9216
config interface mtu Ethernet48 9216
config interface description Ethernet48 "Peer-link"
config interface startup Ethernet48
config interface mtu Ethernet52 9216
config interface description Ethernet52 "Peer-link"
config interface startup Ethernet52
config portchannel member add PortChannel1 Ethernet48
config portchannel member add PortChannel1 Ethernet52
config mclag add 1 10.0.0.11 10.0.0.12 PortChannel1
config vlan add 3965
config vlan member add 3965 PortChannel1
config mclag unique-ip add Vlan3965
config interface ip add Vlan3965 10.10.10.0/31
vtysh
conf t
!
interface Vlan3965
 ip ospf area 0.0.0.1
end
write memory
exit
config save -y
Once done, we should see our additional OSPF peer and a working MC-LAG cluster:
admin@SONIC-Leaf301:~$ vtysh

Hello, this is FRRouting (version 7.2-sonic).
Copyright 1996-2005 Kunihiro Ishiguro, et al.

SONIC-Leaf301# show ip ospf neighbor

Neighbor ID   Pri  State         Dead Time  Address     Interface             RXmtL  RqstL  DBsmL
10.0.0.1        1  Full/DROther    30.645s  10.0.0.1    Ethernet72:10.0.0.11      0      0      0
10.0.0.2        1  Full/DROther    30.899s  10.0.0.2    Ethernet76:10.0.0.11      0      0      0
10.0.0.12       1  Full/DR         36.717s  10.10.10.1  Vlan3965:10.10.10.0       0      0      0

SONIC-Leaf301# exit
admin@SONIC-Leaf301:~$ sonic-cli
SONIC-Leaf301# show mclag brief

Domain ID                 : 1
Role                      : active
Session Status            : up
Peer Link Status          : up
Source Address            : 10.0.0.11
Peer Address              : 10.0.0.12
Peer Link                 : PortChannel1
Keepalive Interval        : 1 secs
Session Timeout           : 30 secs
System Mac                : 80:a2:35:81:dd:f0
Number of MLAG Interfaces : 0
Everything works as expected, but we also faced yet another SONiC problem: we needed to configure interfaces and their IP addresses, OSPF and MC-LAG, and to do so we needed access to three different configuration shells (Linux CLI, vtysh and sonic-cli), either to apply configuration or to run show commands and verify our configs.
Configuring the BGP-EVPN control plane
Now it’s time to configure BGP. As per our architecture, I’ll be configuring iBGP with Route Reflectors sitting on the spines. To do so, I’ll need the FRR shell.
The spine config will look something like this:
vtysh
conf t
!
router bgp 65000
 bgp router-id 10.0.0.1
 bgp log-neighbor-changes
 neighbor FABRIC peer-group
 neighbor FABRIC remote-as 65000
 neighbor FABRIC update-source Loopback0
 bgp listen range 10.0.0.0/24 peer-group FABRIC
 !
 address-family l2vpn evpn
  neighbor FABRIC activate
  neighbor FABRIC route-reflector-client
  advertise-all-vni
 exit-address-family
end
exit
And the leafs, like this:
vtysh
conf t
!
router bgp 65000
 bgp router-id 10.0.0.11
 bgp log-neighbor-changes
 neighbor 10.0.0.1 remote-as 65000
 neighbor 10.0.0.1 update-source Loopback0
 neighbor 10.0.0.2 remote-as 65000
 neighbor 10.0.0.2 update-source Loopback0
 !
 address-family l2vpn evpn
  neighbor 10.0.0.1 activate
  neighbor 10.0.0.2 activate
  advertise-all-vni
  advertise ipv4 unicast
 exit-address-family
end
exit
Once done, I should be able to see all peerings formed on my spines:
SONIC-Spine31# show bgp l2vpn evpn summary
BGP router identifier 10.0.0.1, local AS number 65000 vrf-id 0
BGP table version 0
RIB entries 16, using 3072 bytes of memory
Peers 4, using 82 KiB of memory
Peer groups 1, using 64 bytes of memory

Neighbor     V     AS  MsgRcvd  MsgSent  TblVer  InQ  OutQ  Up/Down  State/PfxRcd
*10.0.0.11   4  65000        5       29       0    0     0  00:01:33            0
*10.0.0.12   4  65000        6       30       0    0     0  00:01:34            0
*10.0.0.13   4  65000        7       31       0    0     0  00:01:40            0
*10.0.0.14   4  65000        7       31       0    0     0  00:01:44            0

Total number of neighbors 4
- dynamic neighbor
4 dynamic neighbor(s), limit 100
Total number of neighbors established
At this point, the only missing piece is to configure the VTEP on the leaf switches, as well as the anycast gateway’s MAC address; fortunately, this is very simple and straightforward:
config vxlan add nve1 11.11.11.111
config vxlan evpn_nvo add nvo1 nve1
config ip anycast-mac-address add aa:aa:bb:bb:cc:cc
root@SONIC-Leaf301:/home/admin# show vxlan interface
VTEP Information:
VTEP Name : nve1, SIP : 11.11.11.111
NVO Name : nvo1, VTEP : nve1
Source interface : Loopback1
root@SONIC-Leaf301:/home/admin# show ip static-anycast-gateway
Configured Anycast Gateway MAC address: aa:aa:bb:bb:cc:cc
IPv4 Anycast Gateway MAC address: enable
In short…
We configured a fully functional fabric providing underlay connectivity and an EVPN control plane as follows:
- A unique loopback on every switch
- Each physical interface between spine and leaf as an ip unnumbered interface
- OSPF area 1 within the fabric
- MC-LAG and underlay peering across the peer-link
- iBGP EVPN between leafs and spines with RRs on the spines themselves
- Each MC-LAG pair as a unique Virtual VTEP.
We also noticed that, while the configuration isn’t complicated by any means, the need to move between multiple shells just to apply or verify configs can be very confusing for the end user. To be fair though, the SONiC community is working on improving this part by delivering a single unified shell.
The FRR config is always very familiar, as it resembles Cisco’s IOS CLI; on the other hand, the basic SONiC CLI can be a bit frustrating at times, especially because it’s case sensitive, which makes typos easy to hit.
In the next blog post we will look at how to actually configure VXLANs and server-facing interfaces… stay tuned!
Super great post!!!
May I know the usage of “unique-ip” 10.10.10.0/31 in the topology?
========================
config vlan add 3965
config vlan member add 3965 PortChannel1
config mclag unique-ip add Vlan3965
config interface ip add Vlan3965 10.10.10.0/31
========================
Thanks~
Absolutely!
SONiC’s MC-LAG implementation dictates that a VLAN must be configured identically on both switches.
That command allows you to have a mismatching IP address config on the SVIs, so that switch A is .1 and switch B is .2, for instance. It also allows you to establish L3 adjacency between the peer switches across the peer-link. Finally, in the case of VXLAN EVPN, it overrides the default “active/active” behaviour of the distributed anycast gateway (where both switches must have the same IP).
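In config terms it boils down to something like this (just a sketch; Leaf302’s side isn’t shown in the post, but its 10.10.10.1 address matches what the OSPF neighbor output above reports):

# Leaf301
config mclag unique-ip add Vlan3965
config interface ip add Vlan3965 10.10.10.0/31

# Leaf302 - same VLAN, different SVI address thanks to unique-ip
config mclag unique-ip add Vlan3965
config interface ip add Vlan3965 10.10.10.1/31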
More information here https://github.com/Azure/SONiC/blob/b3e2d73667edf839fb6d405f65e6915e04f70c7d/doc/mclag/MCLAG_Enhancements_HLD.md#1_1_5-Unique-IP-for-supporting-L3-protocol-over-MCLAG-VLAN-interface-Requirements
The image you’re using doesn’t look very vanilla; which vendor did you go with?
This was a distribution from Broadcom.
This early version eventually developed into what Dell is selling/supporting today (including their management framework, “sonic-cli”).
https://www.dell.com/en-us/work/shop/povw/sonic