NSX-T Edge VM Design - Single N-VDS Edge VM on N-VDS with 4 pNICs
In a small data center, it is common to find clusters hosting management components such as vCenter, NSX Manager, vRealize Automation, vRealize Network Insight, vRealize Log Insight, and so on. This is what we call a "Shared" Management and Compute cluster. Edge VMs can be deployed in such a shared cluster.
In the following diagram, the hosts are ESXi based and each has four (4) or more pNICs available.
In this scenario, we are going to dedicate two (2) pNICs to the management components and the two (2) other pNICs to workload (compute) traffic and the Edge VMs.
I use the following VLAN information in my setup for the Edge VM configuration (also summarized in a short code snippet after the list).
- Management VLAN: 599
- vMotion VLAN: 598
- TEP VLAN for Compute: 596
- TEP VLAN for Edge VMs: 595
- Uplink1 Trunk for Edge VM: 0-4094
- Uplink2 Trunk for Edge VM: 0-4094
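For readability in the rest of this post, the same VLAN plan can be captured in a few lines of code. This is only a convenience sketch in Python; the variable name is mine and nothing about it is NSX-specific.

# VLAN plan for this setup (values from the list above).
VLAN_PLAN = {
    "management": 599,
    "vmotion": 598,
    "tep_compute": 596,
    "tep_edge": 595,
    "edge_uplink1_trunk": "0-4094",
    "edge_uplink2_trunk": "0-4094",
}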
N-VDS Edge VM Diagram
We are going to configure the "Single N-VDS Edge VM Design on N-VDS with four (4) pNICs". This design is available since the NSX-T 2.5 release. The single N-VDS provides multi-TEP capabilities.
Compute Configuration
- The host has four (4) pNICs available
- A VDS named "Management-VDS" is deployed on the host for management traffic
- An N-VDS named "HOST-NVDS" will be deployed for the compute traffic
- Two (2) pNICs are used for redundancy and load balancing on "Management-VDS"
- Two (2) pNICs are used for redundancy and load balancing on "HOST-NVDS"
- A TEP IP pool and VLAN are defined for the Compute
- The compute's Uplink Profile has the following teaming policy (this is an example, you can adjust it; see the API sketch after this list)
- Load Balance for Overlay traffic using multi-TEP support
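As an illustration, the compute Uplink Profile could also be created through the NSX-T Manager REST API. The sketch below is a minimal Python example using the requests library with basic authentication; the manager address and credentials are placeholders, the display name matches the profile created in Step 2, and the teaming policy mirrors the list above (source-port load balancing over two active uplinks, TEP VLAN 596).

import requests

NSX_MGR = "https://nsx-manager.lab.local"   # placeholder NSX Manager address
AUTH = ("admin", "VMware1!VMware1!")        # placeholder credentials

# Uplink profile for the compute transport nodes:
# two active uplinks, source-port load balancing (multi-TEP), TEP VLAN 596.
compute_uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "Shared-Compute-2pNICs",
    "teaming": {
        "policy": "LOADBALANCE_SRCID",
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    },
    "transport_vlan": 596,
}

resp = requests.post(
    f"{NSX_MGR}/api/v1/host-switch-profiles",
    json=compute_uplink_profile,
    auth=AUTH,
    verify=False,  # lab only: self-signed certificate
)
resp.raise_for_status()
print("Created uplink profile:", resp.json().get("id"))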
Edge VM Configuration
- An Edge VM has four (4) interfaces available
- eth0 is dedicated to management traffic
- fp-eth0 is used for overlay and uplink1 traffic
- fp-eth1 is used for overlay and uplink2 traffic
- fp-eth2 is not used
- A single N-VDS will be defined for the Edge VM
- The "HOST-NVDS" N-VDS is used for the Overlay traffic and uplink traffic
- A TEP IP pool and VLAN are defined for Edge VMs
- The Edge VM's Uplink Profile has the following teaming policies (see the API sketch after this list)
- Load Balance for Overlay traffic using multi-TEP support
- Failover for Uplink1 traffic (Primary U1, Standby None)
- Failover for Uplink2 traffic (Primary U2, Standby None)
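The Edge VM Uplink Profile can be sketched the same way. Again, this is only an illustration with Python and requests; the manager address, credentials, and the named-teaming names ("Uplink1-Teaming", "Uplink2-Teaming") are placeholders I chose, while the policies mirror the list above: multi-TEP load balancing for overlay, and a failover order with no standby for each uplink, on TEP VLAN 595.

import requests

NSX_MGR = "https://nsx-manager.lab.local"   # placeholder NSX Manager address
AUTH = ("admin", "VMware1!VMware1!")        # placeholder credentials

# Uplink profile for the Edge VMs:
# - default teaming: load balance source over both uplinks (multi-TEP)
# - named teamings: deterministic failover per uplink, no standby
edge_uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "Shared-Edge-2pNICs",
    "teaming": {
        "policy": "LOADBALANCE_SRCID",
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    },
    "named_teamings": [
        {
            "name": "Uplink1-Teaming",   # placeholder name
            "policy": "FAILOVER_ORDER",
            "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        },
        {
            "name": "Uplink2-Teaming",   # placeholder name
            "policy": "FAILOVER_ORDER",
            "active_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
        },
    ],
    "transport_vlan": 595,
}

resp = requests.post(
    f"{NSX_MGR}/api/v1/host-switch-profiles",
    json=edge_uplink_profile,
    auth=AUTH,
    verify=False,  # lab only: self-signed certificate
)
resp.raise_for_status()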
Configuration
Step 1 - Transport Zones and N-VDS
One (1) N-VDS, "HOST-NVDS", is created for this design. Two (2) Transport Zones are required: "Shared-Overlay-TZ" and "Shared-VLAN-TZ".
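If you prefer the API to the UI, the two Transport Zones could be created as sketched below (Python with requests; the manager address and credentials are placeholders). Both zones reference the same host switch name, "HOST-NVDS".

import requests

NSX_MGR = "https://nsx-manager.lab.local"   # placeholder NSX Manager address
AUTH = ("admin", "VMware1!VMware1!")        # placeholder credentials

# Both Transport Zones point at the same N-VDS name: HOST-NVDS.
transport_zones = [
    {"display_name": "Shared-Overlay-TZ", "host_switch_name": "HOST-NVDS", "transport_type": "OVERLAY"},
    {"display_name": "Shared-VLAN-TZ", "host_switch_name": "HOST-NVDS", "transport_type": "VLAN"},
]

for tz in transport_zones:
    resp = requests.post(f"{NSX_MGR}/api/v1/transport-zones", json=tz, auth=AUTH, verify=False)
    resp.raise_for_status()
    print("Created transport zone:", tz["display_name"])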
Step 2 - Uplink Profiles
Two (2) uplink profiles are created:
- Shared-Compute-2pNICs for the Compute (or Transport Node)
- Shared-Edge-2pNICs for Edge VM's Overlay and Uplink traffic
Step 3 - Segment Creation
Here is the list of required segments for this setup.
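As an example of what such a segment looks like behind the scenes, the two trunk segments that back the Edge VM uplinks (VLANs 0-4094, per the VLAN plan above) could be created through the Policy API. The segment names and the transport zone path below are placeholders; adjust them to your environment.

import requests

NSX_MGR = "https://nsx-manager.lab.local"   # placeholder NSX Manager address
AUTH = ("admin", "VMware1!VMware1!")        # placeholder credentials

# Placeholder path of the Shared-VLAN-TZ transport zone (the UUID is environment specific).
VLAN_TZ_PATH = "/infra/sites/default/enforcement-points/default/transport-zones/<shared-vlan-tz-uuid>"

# Example trunk segments for the Edge VM uplink interfaces (VLANs 0-4094).
trunk_segments = {
    "Edge-Uplink1-Trunk": ["0-4094"],   # placeholder segment name
    "Edge-Uplink2-Trunk": ["0-4094"],   # placeholder segment name
}

for name, vlans in trunk_segments.items():
    body = {
        "display_name": name,
        "transport_zone_path": VLAN_TZ_PATH,
        "vlan_ids": vlans,
    }
    resp = requests.put(
        f"{NSX_MGR}/policy/api/v1/infra/segments/{name}",
        json=body,
        auth=AUTH,
        verify=False,  # lab only: self-signed certificate
    )
    resp.raise_for_status()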
Steps 4 and 5 - Compute and Edge VM N-VDS Deployment
We can now deploy the N-VDS for the Compute and the Edge VMs.
Results
The following picture shows two (2) Edge VMs deployed successfully. Each Edge VM has one (1) N-VDS, as mentioned above. You can ping these Edge VMs from the Transport Node with the "vmkping ++netstack=vxlan" command.
Note: Because of the Multi-TEP support, each Edge VM has two (2) IP addresses for overlay traffic.
Enjoy your new NSX setup!