During the COVID-19 lockdown I’ve had some time to finally learn and build a cross-site “routed” NSX-V lab environment! NSX enables micro-segmentation and Layer 2 over Layer 3 with VXLAN, and it provides distributed virtual firewalling out of the box. Below is a summary of the lab design and my notes; I will link to the sources I used for reference.

ESXi Physical Host
This lab is for cross-vCenter NSX-V, which can all be done nested and virtually. I decided to build it nested on an ESXi 6.7 HP DL380 G7 with 2 x Xeon CPUs, 120 GB of RAM and 3 TB of disk. To set up the nested switching environment I created three vSwitches on the physical lab ESXi host. One is for MGMT of the host and access to the lab from the physical network. The other two vSwitches are for the nested lab (no physical uplinks), one for each site; in my lab the sites are PER and SYD (Perth and Sydney), part of the fictional LAB CORP. Using a port group with a VLAN tag of 4095 gives you a dot1q trunk port within ESXi, which lets you use VLANs effectively with an external router.
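For reference, the nested-lab vSwitch setup can also be done from the physical host’s ESXi shell with esxcli. The sketch below covers just the Perth-side switch, and the names LabSwitch-PER / Trunk-PER are placeholders of my own; the relaxed security policy is needed so the nested ESXi hosts can actually pass traffic:
# Nested lab vSwitch for site A (left without physical uplinks)
esxcli network vswitch standard add --vswitch-name=LabSwitch-PER
# Raise the MTU now so VXLAN frames fit later
esxcli network vswitch standard set --vswitch-name=LabSwitch-PER --mtu=1600
# Nested ESXi needs promiscuous mode, MAC changes and forged transmits allowed
esxcli network vswitch standard policy security set --vswitch-name=LabSwitch-PER --allow-promiscuous=true --allow-mac-change=true --allow-forged-transmits=true
# VLAN 4095 turns the port group into a dot1q trunk for the nested hosts
esxcli network vswitch standard portgroup add --portgroup-name=Trunk-PER --vswitch-name=LabSwitch-PER
esxcli network vswitch standard portgroup set --portgroup-name=Trunk-PER --vlan-id=4095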
I have deployed the following VMs to the physical host:
– 2 x pfSense routers for simulated L3 routing, the DC cross-connect / WAN and an Internet uplink for each site.
– 6 x ESXi 6.7u2 nested hosts, 3 for each DC.
– 2 x VCSA 6.7 (vCenter), one for each site – one 3-host cluster per site.
– 2 x Windows domain controllers for the lab.internal domain – providing DNS, DHCP, NTP and other services.
(Diagram: cross-vCenter lab)
Logical Networking:
We have the following network subnets.
Perth – Site A
MGMT 10.0.0.0/24
NSX Transit 192.168.8.0/24
DC Interconnect 10.2.0.1/30 (Perth end of the 10.2.0.0/30 link)
NSX Routed Uplink 192.168.2.0/24
VXLAN VMkernel 192.168.3.0/24

Sydney – Site B
MGMT 10.3.0.0/24
NSX Transit 192.168.9.0/24
DC Interconnect 10.2.0.2/30 (Sydney end of the 10.2.0.0/30 link)
NSX Routed Uplink 192.168.7.0/24
VXLAN VMkernel 192.168.22.0/24

NSX Subnets (Universal Logical Switches):
Web Tier – 172.16.50.0/24
App Tier – 172.16.60.0/24
DB Tier – 172.16.70.0/24
As this is a lab and not production, you can see I have used many /24s for simplicity’s sake. The L3 router at each location is pfSense, with an L2 interconnect between the two sites to simulate a private WAN service. The goal of this lab is to be able to stretch a Layer 2 domain across routed L3 uplinks using VXLAN as the encapsulation protocol.
(Diagram: NSX-V lab)

NSX Lab Architecture:
2 x NSX Managers – Perth is Primary, Sydney is Secondary.
3 x Controller nodes hosted in the Perth cluster.
Once NSX is set up on the Perth cluster, set it as primary, then add the Sydney manager and change it to secondary.
Control plane mode is set to unicast given the lab nature, which avoids needing any multicast configuration in the underlay (this is set on the transport zone under NSX logical network settings).
Create universal logical switches for App, Web, DB, Transit-Per, Transit-Syd and HA.
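As a side note, these can also be created against the primary NSX Manager’s REST API. The curl sketch below is from memory of the NSX-V API guide, so double check it against the documentation for your version; the manager hostname is made up, and ‘universalvdnscope’ should be whatever ID GET /api/2.0/vdn/scopes reports for your universal transport zone:
# rough sketch only - curl will prompt for the admin password
curl -k -u admin -H "Content-Type: application/xml" -X POST "https://nsxmgr-per.lab.internal/api/2.0/vdn/scopes/universalvdnscope/virtualwires" -d '<virtualWireCreateSpec><name>Universal-Web-Tier</name><tenantId>lab</tenantId><controlPlaneMode>UNICAST_MODE</controlPlaneMode></virtualWireCreateSpec>'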
1 x Universal Distributed Logical Router with Local Egress enabled. Two control VMs, one per site; each site uses its own transit logical switch.
North-south routing will be via the local ESG at each site. This lab is using active/active egress.
2 x Edge Services Gateways, one per site. In the lab environment I used OSPF on the ESG to exchange routes with both the pfSense router and the UDLR control VM at each site.
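On pfSense, OSPF comes via the FRR (or older Quagga OSPF) routing package. For context, the daemon configuration behind the GUI looks roughly like the sketch below; the router-id and area values are assumptions of mine, and 192.168.2.0/24 is the Perth NSX routed uplink that the ESG peers on:
router ospf
 ospf router-id 192.168.2.1
 ! advertise the NSX routed uplink so the ESG forms an adjacency here
 network 192.168.2.0/24 area 0.0.0.0
 ! push the locally connected lab subnets into OSPF as well
 redistribute connected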
2 x DVS at each site: one is the MgmtDVS, the other the NSX overlay DVS. Set the NSX DVS to a default MTU of 1600.
The VXLAN VMkernel MTU must be at least 1600 bytes. The pfSense interfaces will also need this. I also set the nested-lab vSwitches on the physical ESXi host to support 1600 MTU.
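A quick way to sanity-check the MTU from a nested host’s shell is shown below; vmk2 as the VXLAN vmknic matches the vmkping example further down, so adjust the names for your own environment:
# confirm the VXLAN vmknic picked up the 1600 MTU
esxcli network ip interface list
# the DVS MTU is set in vCenter, but the host view should also report 1600
esxcli network vswitch dvs vmware list
# if a vmknic is still at 1500 it can be bumped per interface
esxcli network ip interface set --interface-name=vmk2 --mtu=1600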
Be sure to disable the automatic outbound NAT on your pfSense routers. This caused me issues when VXLAN was trying to operate cross-site: the automatic NAT was translating the source IP of the VXLAN traffic, essentially breaking it.
East-west VM traffic destined for a VM at the local site routes via the logical switch and the site-local UDLR instance, while traffic destined for VMs at the other site is carried over VXLAN, encapsulating the Layer 2 segment across the L3 path between sites. Local egress, meanwhile, means north-south traffic leaves via that site’s own ESG rather than hairpinning across the interconnect.
Each site’s pfSense router has its own Internet connection, and each site’s ESG has a default route pointing at the local pfSense interface for Internet connectivity. The following screenshots show the ‘show ip route’ output on both the Perth and Sydney ESGs:
(Screenshot: ‘show ip route’ on the Perth ESG)
(Screenshot: ‘show ip route’ on the Sydney ESG)
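Besides ‘show ip route’, a couple of other show commands on the ESG console (or over SSH to the edge) are handy for confirming that the OSPF peering is actually up:
show ip ospf neighbor
show ip route ospf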

Testing VXLAN:
SSH to a nested ESXi host (enable SSH first) and use the vmkping command against the VXLAN netstack to test MTU and path connectivity. In this example we are using an ESXi host in the Sydney cluster to ping the VXLAN vmk on a Perth host; -s 1572 plus the 28 bytes of ICMP and IP headers gives a full 1600-byte packet, and -d sets the don’t-fragment bit.
[root@esxi-nest-01b:~] vmkping ++netstack=vxlan -I vmk2 192.168.22.22 -s 1572 -d
PING 192.168.22.22 (192.168.22.22): 1572 data bytes
1580 bytes from 192.168.22.22: icmp_seq=0 ttl=64 time=1.172 ms
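If the vmkping fails, a few host-level checks help narrow things down. On an NSX-prepared host the following should be available (names and output will of course differ per environment):
# list the vmknics bound to the VXLAN netstack
esxcli network ip interface list --netstack=vxlan
# show the VXLAN / VTEP configuration on the host (VDS name, segment ID, MTU)
esxcli network vswitch dvs vmware vxlan list
# net-vdl2 gives a more detailed dump of VXLAN state on the host
net-vdl2 -l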

Resources I followed to build this lab:
Sivar Sankar’s NSX-V Series
Rutger Blom’s Cross-vCenter NSX Series
vDives’ NSX Routing
Jeffrey Kusters’ Blog

It should be noted that VMware will retire NSX-V in the future; the clear successor is NSX-T. The reason I used NSX-V in this lab is that it’s a good starting point for learning overlay networking technologies. It’s also hard to get the NSX-T OVAs from the VMware download portal without a subscription / license. All in all, never having used NSX-V before but having read about the concepts, it took me around a week to get this lab perfect. A fun and rewarding exercise during the corona lockdown.