Last updated: 2025-08-21 (UTC).

# Configuring dedicated node pools

| You are currently viewing version 1.7 of the Apigee hybrid documentation. **This version is end of life.** You should upgrade to a newer version.
For more information, see [Supported versions](/apigee/docs/hybrid/supported-platforms#supported-versions).

About node pools
----------------

A [node pool](https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools) is a group of nodes within a cluster that all have the same configuration. Typically, you define separate node pools when you have pods with differing resource requirements. For example, the `apigee-cassandra` pods require persistent storage, while the other Apigee hybrid pods do not.

This topic explains how to configure dedicated node pools for a hybrid installation.

Using the default nodeSelectors
-------------------------------

The best practice is to set up two dedicated node pools: one for the Cassandra pods and one for all the other runtime pods. Using the default [nodeSelector](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) configuration, the installer assigns the Cassandra pods to a *stateful* node pool named `apigee-data` and all the other pods to a *stateless* node pool named `apigee-runtime`. All you have to do is create node pools with these names; Apigee hybrid handles the pod scheduling details for you.

The following is the default `nodeSelector` configuration. The `apigeeData` property specifies the node pool for the Cassandra pods, and `apigeeRuntime` specifies the node pool for all the other pods.
You can override these default settings in your overrides file, as explained later in this topic:

```yaml
nodeSelector:
  requiredForScheduling: false
  apigeeRuntime:
    key: "cloud.google.com/gke-nodepool"
    value: "apigee-runtime"
  apigeeData:
    key: "cloud.google.com/gke-nodepool"
    value: "apigee-data"
```

Again, to ensure that your pods are scheduled on the correct nodes, all you have to do is create two node pools named `apigee-data` and `apigee-runtime`.

The requiredForScheduling property
----------------------------------

The `nodeSelector` config section has a property called `requiredForScheduling`:

```yaml
nodeSelector:
  requiredForScheduling: false
  apigeeRuntime:
    key: "cloud.google.com/gke-nodepool"
    value: "apigee-runtime"
  apigeeData:
    key: "cloud.google.com/gke-nodepool"
    value: "apigee-data"
```

If set to `false` (the default), the underlying pods are scheduled whether or not node pools with the required names exist. This means that if you forget to create the node pools, or if you accidentally give a node pool a name other than `apigee-runtime` or `apigee-data`, the hybrid runtime installation still succeeds, and Kubernetes decides where to run your pods.

If you set `requiredForScheduling` to `true`, the installation fails unless node pools exist that match the configured `nodeSelector` keys and values.

| **Note:** The best practice is to set `requiredForScheduling: true` for a production environment.

Using custom node pool names
----------------------------

If you don't want to use node pools with the default names, you can create node pools with custom names and specify those names in the `nodeSelector` stanza.
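Whichever names you use, the node pools themselves must exist before the installation can schedule pods onto them. As a sketch of how the two default pools might be created on GKE with `gcloud` (the cluster name, zone, and node count here are placeholder values, not part of this topic):

```shell
# Hypothetical cluster name and zone; replace with your own values.
CLUSTER=my-hybrid-cluster
ZONE=us-central1-a

# Stateful pool for the Cassandra pods.
gcloud container node-pools create apigee-data \
    --cluster "$CLUSTER" --zone "$ZONE" --num-nodes 3

# Stateless pool for all other runtime pods.
gcloud container node-pools create apigee-runtime \
    --cluster "$CLUSTER" --zone "$ZONE" --num-nodes 3
```

GKE automatically labels every node in a pool with `cloud.google.com/gke-nodepool=POOL_NAME`, which is the key that the `nodeSelector` configuration matches against.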
For example, the following configuration assigns the Cassandra pods to the pool named `my-cassandra-pool` and all other pods to the pool named `my-runtime-pool`:

```yaml
nodeSelector:
  requiredForScheduling: false
  apigeeRuntime:
    key: "cloud.google.com/gke-nodepool"
    value: "my-runtime-pool"
  apigeeData:
    key: "cloud.google.com/gke-nodepool"
    value: "my-cassandra-pool"
```

Overriding the node pool for specific components on GKE
-------------------------------------------------------

You can also override the node pool configuration at the individual component level. For example, the following configuration assigns the node pool with the value `apigee-custom` to the `runtime` component:

```yaml
runtime:
  nodeSelector:
    key: cloud.google.com/gke-nodepool
    value: apigee-custom
```

You can specify a custom node pool for any of these components:

- `istio`
- `mart`
- `synchronizer`
- `runtime`
- `cassandra`
- `udca`
- `logger`

GKE node pool configuration
---------------------------

In GKE, node pools must have a unique name that you provide when you create the pools, and GKE automatically labels each node with the following:

```
cloud.google.com/gke-nodepool=THE_NODE_POOL_NAME
```

As long as you create node pools named `apigee-data` and `apigee-runtime`, no further configuration is required. If you want to use custom pool names, see [Using custom node pool names](#using-custom-node-pool-names).

Anthos node pool configuration
------------------------------

Apigee hybrid is currently supported only on Anthos 1.1.1. This version of Anthos does not support the node pool feature; therefore, you must manually label the worker nodes to differentiate between runtime and data nodes, as explained below. Perform the following steps once your hybrid cluster is up and running:

1. Run the following command to get a list of the worker nodes in your cluster:

   ```
   kubectl get nodes
   ```

   Example output:

   ```
   NAME                   STATUS   ROLES    AGE   VERSION
   apigee-092d639a-4hqt   Ready    <none>   7d    v1.14.6-gke.2
   apigee-092d639a-ffd0   Ready    <none>   7d    v1.14.6-gke.2
   apigee-109b55fc-5tjf   Ready    <none>   7d    v1.14.6-gke.2
   apigee-c2a9203a-8h27   Ready    <none>   7d    v1.14.6-gke.2
   apigee-c70aedae-t366   Ready    <none>   7d    v1.14.6-gke.2
   apigee-d349e89b-hv2b   Ready    <none>   7d    v1.14.6-gke.2
   ```

2. Label each node to differentiate between runtime nodes and data nodes.

   **Note:** Be sure to choose the nodes so that they are distributed evenly across availability zones (AZs).

   Use this command to label the nodes:

   ```
   kubectl label node NODE_NAME KEY=VALUE
   ```

   For example:

   ```
   kubectl label node apigee-092d639a-4hqt apigee.com/apigee-nodepool=apigee-runtime
   kubectl label node apigee-092d639a-ffd0 apigee.com/apigee-nodepool=apigee-runtime
   kubectl label node apigee-109b55fc-5tjf apigee.com/apigee-nodepool=apigee-runtime
   kubectl label node apigee-c2a9203a-8h27 apigee.com/apigee-nodepool=apigee-data
   kubectl label node apigee-c70aedae-t366 apigee.com/apigee-nodepool=apigee-data
   kubectl label node apigee-d349e89b-hv2b apigee.com/apigee-nodepool=apigee-data
   ```

Overriding the node pool for specific components on Anthos GKE
--------------------------------------------------------------

You can also override the node pool configuration at the individual component level for an Anthos GKE installation.
For example, the following configuration assigns the node pool with the value `apigee-custom` to the `runtime` component:

```yaml
runtime:
  nodeSelector:
    key: apigee.com/apigee-nodepool
    value: apigee-custom
```

You can specify a custom node pool for any of these components:

- `istio`
- `mart`
- `synchronizer`
- `runtime`
- `cassandra`
- `udca`
- `logger`
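After installation, one way to confirm that pods landed on the intended pools is to compare pod placement with node labels. A minimal sketch, assuming the hybrid runtime pods run in the `apigee` namespace:

```shell
# Show each Apigee pod together with the node it was scheduled on.
kubectl get pods -n apigee -o wide

# Show each node with its pool label for cross-referencing
# (use the apigee.com/apigee-nodepool key on Anthos).
kubectl get nodes -L cloud.google.com/gke-nodepool
```

If a Cassandra pod turns up on a runtime node, revisit the `nodeSelector` values and consider setting `requiredForScheduling: true` so that a misconfiguration fails the installation instead of scheduling silently.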