[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-03。"],[],[],null,["# VLANs and subnets on VMware Engine\n==================================\n\nGoogle Cloud VMware Engine creates a network per region in which your\nVMware Engine service is deployed. The network is a single TCP Layer\n3 address space with routing enabled by default. All private clouds and subnets\ncreated in this region can communicate with each other without any additional\nconfiguration. You can create network segments (subnets) using NSX for your\nworkload virtual machines (VMs).\n\nManagement VLANs\n----------------\n\nGoogle creates a VLAN (Layer 2 network) for each private cloud. The Layer 2\ntraffic stays within the boundary of a private cloud, letting you isolate the\nlocal traffic within the private cloud. These VLANs are used for the management\nnetwork. For workload VMs, you must create network segments on NSX Manager for\nyour private cloud.\n\nSubnets\n-------\n\nYou must create a network segment on the NSX manager for your private cloud. A\nsingle private Layer 3 address space is assigned per customer and region. You\ncan configure any IP address range that doesn't overlap with other networks in\nyour private cloud, your on-premises network, your private cloud management\nnetwork, or subnet IP address ranges in your Virtual Private Cloud (VPC)\nnetwork. For a detailed breakdown of how VMware Engine allocates\nsubnet IP address ranges, see [Networking requirements](/vmware-engine/docs/quickstart-networking-requirements).\n\nAll subnets can communicate with each other by default, reducing the\nconfiguration overhead for routing between private cloud. East-west data across\nprivate clouds in the same region stays in the same Layer 3 network and\ntransfers over the local network infrastructure within the region. No egress is\nrequired for communication between private clouds in a region. This approach\neliminates any WAN/egress performance penalty in deploying different workloads\nin different private clouds of the same project.\n\n### Management subnets created on a private cloud\n\nWhen you create a private cloud, VMware Engine creates the following\nmanagement subnets:\n\n- **System management:** VLAN and subnet for ESXi hosts' management network, DNS server, vCenter Server\n- **VMotion:** VLAN and subnet for ESXi hosts' vMotion network\n- **VSAN:** VLAN and subnet for ESXi hosts' vSAN network\n- **NsxtEdgeUplink1:** VLAN and subnet for VLAN uplinks to an external network\n- **NsxtEdgeUplink2:** VLAN and subnet for VLAN uplinks to an external network\n- **HCXUplink:** Used by HCX IX (mobility) and NE (extension) appliances to reach their peers and enable the creation of the HCX Service Mesh.\n- **NsxtHostTransport:** VLAN and subnet for host transport zone\n\n### HCX deployment network CIDR range\n\nWhen you create a private cloud on VMware Engine,\nVMware Engine automatically installs HCX on the private cloud. You no\nlonger need to specify a dedicated CIDR range for HCX components. 
All subnets can communicate with each other by default, reducing the configuration overhead for routing between private clouds. East-west data across private clouds in the same region stays in the same Layer 3 network and transfers over the local network infrastructure within the region. No egress is required for communication between private clouds in a region. This approach eliminates any WAN/egress performance penalty when deploying different workloads in different private clouds of the same project.

### Management subnets created on a private cloud

When you create a private cloud, VMware Engine creates the following management subnets:

- **System management:** VLAN and subnet for the ESXi hosts' management network, DNS server, and vCenter Server
- **VMotion:** VLAN and subnet for the ESXi hosts' vMotion network
- **VSAN:** VLAN and subnet for the ESXi hosts' vSAN network
- **NsxtEdgeUplink1:** VLAN and subnet for VLAN uplinks to an external network
- **NsxtEdgeUplink2:** VLAN and subnet for VLAN uplinks to an external network
- **HCXUplink:** VLAN and subnet used by HCX IX (mobility) and NE (extension) appliances to reach their peers and enable creation of the HCX Service Mesh
- **NsxtHostTransport:** VLAN and subnet for the host transport zone

### HCX deployment network CIDR range

When you create a private cloud on VMware Engine, VMware Engine automatically installs HCX on the private cloud. You no longer need to specify a dedicated CIDR range for HCX components. Instead, VMware Engine automatically allocates the required network space for HCX components (such as HCX Manager, vMotion, and WAN Uplink) from the management CIDR range that you specify for your private cloud.

Service subnets
---------------

When you create a private cloud, VMware Engine automatically creates additional service subnets. You can target service subnets for appliance or service deployment scenarios, such as storage, backup, disaster recovery (DR), and media streaming, as well as scenarios that need high-scale, linear throughput and packet processing, even for the largest private clouds. The service subnet names are as follows:

- `service-1`
- `service-2`
- `service-3`
- `service-4`
- `service-5`

Virtual machine communication across a service subnet exits the **VMware ESXi** host directly into the Google Cloud networking infrastructure, enabling high-speed communication.
| **Note:** NSX gateway and distributed firewall rules don't apply to any service subnets.

### Configuring service subnets

When VMware Engine creates a service subnet, it does not allocate a CIDR range or prefix. You must specify a non-overlapping CIDR range and prefix; the first usable address becomes the gateway address. To allocate a CIDR range and prefix, edit one of the service subnets.

You can update service subnets if CIDR requirements change. Modifying the CIDR of an existing service subnet can disrupt network availability for VMs attached to that subnet.

### Configuring vSphere distributed port groups

To connect a VM to a service subnet, you need to create a new Distributed Port Group. This group maps the service subnet ID to a network name within a vCenter private cloud.

To do this, navigate to the network configuration section of the vCenter interface, select **Datacenter-dvs**, and then select **New Distributed Port Group**.

After the Distributed Port Group has been created, you can attach VMs by selecting the corresponding name in the network configuration of the VM properties.

The critical Distributed Port Group configuration values are as follows (a scripted example that applies them follows the list):

- **Port binding**: static binding
- **Port allocation**: elastic
- **Number of ports**: 120
- **VLAN type**: VLAN
- **VLAN ID**: the corresponding subnet ID shown in the subnets section of the Google Cloud VMware Engine interface
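If you prefer to script this step rather than use the vCenter interface, the same values can be applied through the vSphere API. The following is a minimal sketch using the open source `pyVmomi` library, assuming a reachable vCenter for the private cloud; the hostname, credentials, port group name, and VLAN ID are hypothetical placeholders, so treat this as a starting point rather than a definitive implementation.

```python
"""Minimal pyVmomi sketch: create a Distributed Port Group on Datacenter-dvs
with the critical values listed above. Host, credentials, port group name,
and VLAN ID are placeholders for illustration."""
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Connect to the private cloud's vCenter (placeholder host and credentials).
ssl_ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.example.internal", user="solution-user@gve.local",
                  pwd="CHANGE_ME", sslContext=ssl_ctx)

try:
    content = si.RetrieveContent()

    # Find the distributed switch named Datacenter-dvs.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "Datacenter-dvs")
    view.Destroy()

    # VLAN ID: the service subnet ID from the VMware Engine interface (placeholder).
    vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec()
    vlan.vlanId = 1234
    vlan.inherited = False

    port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    port_config.vlan = vlan

    # Port group spec: static binding, elastic allocation, 120 ports.
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.name = "service-1-pg"      # placeholder port group name
    spec.type = "earlyBinding"      # static binding
    spec.autoExpand = True          # elastic port allocation
    spec.numPorts = 120
    spec.defaultPortConfig = port_config

    # Create the port group and wait for the task to finish.
    WaitForTask(dvs.AddDVPortgroup_Task([spec]))
finally:
    Disconnect(si)
```

After the task completes, the new port group appears in the VM network configuration exactly as it would if you had created it in the vCenter interface.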
Recommended MTU settings
------------------------

The [maximum transmission unit (MTU)](https://wikipedia.org/wiki/Maximum_transmission_unit) is the size, in bytes, of the largest packet supported by a network layer protocol, including both headers and data. To avoid fragmentation-related issues, we recommend the following MTU settings:

- For VMs that communicate only with other endpoints within a standard private cloud, you can use MTU settings up to 8800 bytes.

- For VMs that communicate only with other endpoints within a stretched private cloud, you can use MTU settings up to 8600 bytes.

- For VMs that communicate to or from a private cloud **without** encapsulation, use the standard 1500-byte MTU setting. This common default setting is valid for VM interfaces that send traffic in the following ways:

  - From a VM in a private cloud to a VM in another private cloud
  - From an on-premises endpoint to a private cloud
  - From a VM in a private cloud to an on-premises endpoint
  - From the internet to a private cloud
  - From a VM in a private cloud to the internet

- For VMs that communicate to or from the internet with large-packet UDP traffic flows that are sensitive to fragmentation, use an MTU setting of 1370 bytes or lower. This recommendation applies to communications using public connections or IP addresses provided by VMware Engine. MSS clamping generally resolves fragmentation issues with TCP-based traffic flows.

- For VMs that communicate to or from a private cloud **with** encapsulation, calculate the best MTU setting based on your VPN endpoint configurations. This generally results in an MTU setting of 1350–1390 bytes or lower for VM interfaces that send traffic in the following ways:

  - From an on-premises endpoint to a private cloud with encapsulation
  - From a private cloud VM to an on-premises endpoint with encapsulation
  - From a VM in one private cloud to a VM in another private cloud with encapsulation

| **Note:** The default MTU setting is 1440 bytes on the **HCX** uplink profile.

These recommendations are especially important in cases where an application isn't able to control the maximum payload size. For additional guidance on calculating encapsulation overhead, see the following resources; a small worked example follows the list:

- [Cloud VPN MTU considerations](/network-connectivity/docs/vpn/concepts/mtu-considerations)
- [VMware NSX VPNs](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.0/administration/)
- [Traffic Engineering in HCX Enterprise](https://cloud.vmware.com/community/2020/01/16/traffic-engineering-hcx-enterprise/)
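To make the encapsulation-overhead arithmetic concrete, the sketch below derives a VM interface MTU from a base path MTU and a per-tunnel overhead. The overhead values are assumptions chosen only to reproduce the 1350–1390 byte range mentioned above; derive the real numbers from your VPN endpoint configuration.

```python
# Illustrative MTU arithmetic. The overhead values are assumptions, not vendor
# figures; derive real numbers from your VPN endpoint configuration (outer IP
# header, ESP, UDP encapsulation, cipher padding, and so on).

BASE_PATH_MTU = 1500  # standard MTU used for unencapsulated traffic

# Hypothetical per-tunnel overheads in bytes.
EXAMPLE_OVERHEADS = {
    "ipsec_esp": 110,           # 1500 - 110 = 1390
    "ipsec_esp_over_udp": 150,  # 1500 - 150 = 1350
}


def recommended_vm_mtu(base_path_mtu: int, tunnel_overhead: int) -> int:
    """Largest VM interface MTU that still fits in one encapsulated packet."""
    return base_path_mtu - tunnel_overhead


for name, overhead in EXAMPLE_OVERHEADS.items():
    mtu = recommended_vm_mtu(BASE_PATH_MTU, overhead)
    print(f"{name}: set the VM interface MTU to at most {mtu} bytes")
```

The same subtraction explains the HCX uplink profile note above: the 1440-byte default corresponds to 60 bytes of headroom below a 1500-byte path.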