Cloud Service Mesh load balancing
=================================

*Last updated: 2025-09-02 (UTC).*

Cloud Service Mesh uses sidecar proxies or proxyless gRPC to deliver global load
balancing for your internal microservices. You can deploy internal microservices
(sidecar proxy-based or proxyless gRPC-based) with instances in multiple
regions. Cloud Service Mesh provides health, routing, and backend information to
the sidecar proxies or proxyless gRPC applications, enabling them to route
traffic optimally to a service's application instances across multiple cloud
regions.

| **Note:** This guide supports only Cloud Service Mesh with Google Cloud APIs; it does not support the Istio APIs. For more information, see the [Cloud Service Mesh overview](/service-mesh/docs/overview).

In the following diagram, user traffic enters a Google Cloud
deployment through an external global load balancer. The external load balancer
distributes traffic to the Front End microservice in either
`us-central1` or `asia-southeast1`, depending on the location of the end user.

The internal deployment features three global microservices: Front End, Shopping
Cart, and Payments. Each service runs on managed instance groups (MIGs) in two
regions, `us-central1` and `asia-southeast1`. Cloud Service Mesh uses a global
load-balancing algorithm that directs traffic from the user in California to the
microservices deployed in `us-central1`. Requests from the user in
Singapore are directed to the microservices in `asia-southeast1`.

An incoming user request is routed to the Front End microservice.
The sidecar proxy installed on the host with the Front End microservice then
directs traffic to the Shopping Cart microservice. The sidecar proxy installed
on the host with the Shopping Cart directs traffic to the Payments microservice.
In a proxyless gRPC environment, your gRPC application handles traffic
management.

[](/static/service-mesh/docs/images/td-global-lb.svg) Cloud Service Mesh in a global load-balancing deployment (click to enlarge)

In the following example, if Cloud Service Mesh receives health check results
indicating that the virtual machine (VM) instances running the Shopping Cart
microservice in `us-central1` are unhealthy, Cloud Service Mesh instructs the
sidecar proxy for the Front End microservice to fail over traffic to the
Shopping Cart microservice running in `asia-southeast1`. Because autoscaling is
integrated with traffic management in Google Cloud, Cloud Service Mesh
notifies the MIG in `asia-southeast1` of the additional traffic, and the MIG
increases in size.

Cloud Service Mesh detects that all backends of the Payments microservice are
healthy, so it instructs the Envoy proxy for the Shopping Cart to send a portion
of the traffic (up to the customer's configured capacity) to `asia-southeast1`
and overflow the rest to `us-central1`.

[](/static/service-mesh/docs/images/td-global-lb-failover.svg) Failover with Cloud Service Mesh in a global load-balancing deployment (click to enlarge)

Load-balancing components in Cloud Service Mesh
-----------------------------------------------

During Cloud Service Mesh setup, you configure several load-balancing
components:

- The backend service, which contains configuration values.
- A health check, which provides health checking for the VMs and Google Kubernetes Engine (GKE) Pods in your deployment.
- With the service routing APIs, a `Mesh` or `Gateway` resource and a `Route` resource.
- With the load balancing APIs, a global forwarding rule, which includes the VIP
  address, a target proxy, and a URL map.

An xDS API-compatible sidecar proxy (such as Envoy) runs on a client
VM instance or in a Kubernetes Pod. Cloud Service Mesh serves as the control
plane and uses xDS APIs to communicate directly with each proxy. In the data
plane, the application sends traffic to the VIP address configured in the
forwarding rule or `Mesh` resource. The sidecar proxy or your gRPC application
intercepts the traffic and redirects it to the appropriate backend.

The following diagram shows an application running on Compute Engine VMs or
GKE Pods, the components, and the traffic flow in a
Cloud Service Mesh deployment. It shows the Cloud Service Mesh and
Cloud Load Balancing resources that are used to determine traffic routing.
The diagram shows the older load balancing APIs.

[](/static/service-mesh/docs/images/td-resources.svg) Cloud Service Mesh resources to be configured (click to enlarge)

What's next
-----------

- To learn about configuring advanced load balancing features, see [Advanced load balancing overview](/service-mesh/docs/service-routing/advanced-load-balancing-overview).
- To learn more about service discovery and traffic interception, see [Cloud Service Mesh service discovery](/service-mesh/docs/traffic-management/service-discovery).
- To learn more about Cloud Service Mesh with the service routing APIs, see the [overview](/service-mesh/docs/service-routing/overview).
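To make the components above concrete, the following is a minimal sketch of
configuring them with `gcloud`, using the service routing APIs (`Mesh` plus an
`HTTPRoute`). The resource names, project ID, and health check settings are
hypothetical placeholders, not values from this guide; adapt them to your
deployment.

```shell
# Hypothetical sketch: wiring up the Cloud Service Mesh load-balancing
# components with the service routing APIs. All names below are placeholders.

PROJECT_ID=my-project  # placeholder project ID

# Health check for the VMs or GKE Pods in the deployment.
gcloud compute health-checks create http frontend-hc \
    --project="${PROJECT_ID}"

# Global backend service holding the configuration values.
# INTERNAL_SELF_MANAGED is the load-balancing scheme Cloud Service Mesh uses.
gcloud compute backend-services create frontend-service \
    --project="${PROJECT_ID}" \
    --global \
    --load-balancing-scheme=INTERNAL_SELF_MANAGED \
    --health-checks=frontend-hc

# Mesh resource (service routing APIs).
cat <<EOF > mesh.yaml
name: sidecar-mesh
EOF
gcloud network-services meshes import sidecar-mesh \
    --project="${PROJECT_ID}" \
    --source=mesh.yaml \
    --location=global

# Route resource attaching the backend service to the mesh. The sidecar
# proxy intercepts traffic sent to the hostname and routes it to the backend.
cat <<EOF > http-route.yaml
name: frontend-route
hostnames:
- frontend
meshes:
- projects/${PROJECT_ID}/locations/global/meshes/sidecar-mesh
rules:
- action:
    destinations:
    - serviceName: projects/${PROJECT_ID}/locations/global/backendServices/frontend-service
EOF
gcloud network-services http-routes import frontend-route \
    --project="${PROJECT_ID}" \
    --source=http-route.yaml \
    --location=global
```

With the older load balancing APIs, the `Mesh` and `HTTPRoute` steps would
instead be a URL map, a target HTTP proxy, and a global forwarding rule that
carries the VIP address.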