[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-02。"],[],[],null,["# Deploy a model by using the Google Cloud console\n\nIn the Google Cloud console, you can create a\n[public endpoint](/vertex-ai/docs/predictions/choose-endpoint-type)\nand deploy a model to it.\n\nModels can be deployed from the\nOnline prediction page or the Model Registry\npage.\n\nDeploy a model from the Online prediction page\n----------------------------------------------\n\nIn the Online prediction page, you can create an endpoint and deploy\none or more models to it as follows:\n\n1. In the Google Cloud console, in the Vertex AI section, go\n to the **Online prediction** page.\n\n [Go to the Online prediction page](https://console.cloud.google.com/vertex-ai/online-prediction/endpoints)\n2. Click add **Create**.\n\n3. In the **New endpoint** pane:\n\n 1. Enter the **Endpoint name**.\n\n 2. Select **Standard** for the access type.\n\n 3. To create a dedicated (not shared) public endpoint, select the\n **Enable dedicated DNS** checkbox.\n\n 4. Click **Continue**.\n\n4. In the **Model settings** pane:\n\n 1. Select your model from the drop-down list.\n\n 2. Choose the model version from the drop-down list.\n\n 3. Enter the **Traffic split** percentage for the model.\n\n 4. Click **Done**.\n\n 5. Repeat these steps for any additional models to be deployed.\n\nDeploy a model from the Model Registry page\n-------------------------------------------\n\nIn the Model Registry page, you can deploy a model to one\nor more new or existing endpoints as follows:\n\n1. In the Google Cloud console, in the Vertex AI section, go\n to the **Models** page.\n\n [Go to the Models page](https://console.cloud.google.com/vertex-ai/models)\n2. Click the name and version ID of the model you want to deploy to open\n its details page.\n\n3. Select the **Deploy \\& Test** tab.\n\n If your model is already deployed to any endpoints, they are listed in the\n **Deploy your model** section.\n4. Click **Deploy to endpoint**.\n\n5. To deploy your model to a new endpoint:\n\n 1. Select radio_button_checked**Create new endpoint**\n 2. Provide a name for the new endpoint.\n 3. To create a dedicated (not shared) public endpoint, select the **Enable dedicated DNS** checkbox.\n 4. Click **Continue**.\n\n To deploy your model to an existing endpoint:\n 1. Select radio_button_checked**Add to existing endpoint**.\n 2. Select the endpoint from the drop-down list.\n 3. Click **Continue**.\n\n You can deploy multiple models to an endpoint, or you can deploy the\n same model to multiple endpoints.\n6. If you deploy your model to an existing endpoint that has one or more\n models deployed to it, you must update the **Traffic split** percentage\n for the model you are deploying and the already deployed models so that all\n of the percentages add up to 100%.\n\n7.\n If you're deploying your model to a new endpoint, accept 100 for the\n **Traffic split**. Otherwise, adjust the traffic split values for\n all models on the endpoint so they add up to 100.\n\n8. 
Deploy a model from the Model Registry page
-------------------------------------------

On the **Model Registry** page, you can deploy a model to one
or more new or existing endpoints as follows:

1.  In the Google Cloud console, in the Vertex AI section, go
    to the **Models** page.

    [Go to the Models page](https://console.cloud.google.com/vertex-ai/models)

2.  Click the name and version ID of the model you want to deploy to open
    its details page.

3.  Select the **Deploy & Test** tab.

    If your model is already deployed to any endpoints, they are listed in the
    **Deploy your model** section.

4.  Click **Deploy to endpoint**.

5.  To deploy your model to a new endpoint:

    1.  Select **Create new endpoint**.
    2.  Provide a name for the new endpoint.
    3.  To create a dedicated (not shared) public endpoint, select the
        **Enable dedicated DNS** checkbox.
    4.  Click **Continue**.

    To deploy your model to an existing endpoint:

    1.  Select **Add to existing endpoint**.
    2.  Select the endpoint from the drop-down list.
    3.  Click **Continue**.

    You can deploy multiple models to an endpoint, or you can deploy the
    same model to multiple endpoints.

6.  Set the **Traffic split** percentage for each model. If you're deploying
    your model to a new endpoint, accept 100. If you're deploying to an
    endpoint that already has one or more models deployed to it, adjust the
    percentages for the model you are deploying and for the already deployed
    models so that all of the values add up to 100%.

7.  Enter the **Minimum number of compute nodes** you want to provide for
    your model.

    This is the number of nodes that must be available to the model at all
    times. You are charged for the nodes used, whether to handle inference
    load or as standby (minimum) nodes, even when there is no inference
    traffic. See the [pricing page](/vertex-ai/pricing).

    The number of compute nodes can increase if needed to handle inference
    traffic, but it never exceeds the maximum number of nodes.

8.  To use autoscaling, enter the **Maximum number of compute nodes** you
    want Vertex AI to scale up to.

9.  Select your **Machine type**.

    Larger machine resources increase your inference performance and
    increase costs.
    [Compare the available machine types](/vertex-ai/docs/predictions/configure-compute#machine_type_comparison).

10. Select an **Accelerator type** and an **Accelerator count**.

    This option is displayed only if you enabled accelerator use when you
    [imported](/vertex-ai/docs/model-registry/import-model) or created the
    model.

    For the accelerator count, refer to the [GPU
    table](/vertex-ai/docs/predictions/configure-compute#gpus) to check the
    valid numbers of GPUs that you can use with each CPU machine type. The
    accelerator count refers to the number of accelerators per node, not the
    total number of accelerators in your deployment.

11. If you want to use a [custom service
    account](/vertex-ai/docs/general/custom-service-account) for the
    deployment, select a service account in the **Service account**
    drop-down list.

    (The compute and scaling settings in the preceding steps map onto
    parameters of the SDK's `deploy()` call; see the sketch at the end of
    this page.)

12. Learn how to [change the
    default settings for inference logging](/vertex-ai/docs/predictions/online-prediction-logging#enabling-and-disabling).

13. Click **Done** for your model, and when all the **Traffic split**
    percentages are correct, click **Continue**.

    The region where your model deploys is displayed. This must be the
    region where you created your model.

14. Click **Deploy** to deploy your model to the endpoint.

What's next
-----------

- Learn how to [get an online inference](/vertex-ai/docs/predictions/get-online-predictions).
- Learn how to [change the
  default settings for inference logging](/vertex-ai/docs/predictions/online-prediction-logging#enabling-and-disabling).
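For reference, here is a minimal sketch of how the compute settings chosen in the steps above map onto the Vertex AI SDK for Python. It's illustrative, not exhaustive: `PROJECT_ID`, `REGION`, `MODEL_ID`, `ENDPOINT_ID`, and `SA_EMAIL` are placeholders, and the machine and accelerator values are examples you would replace with your own choices.

```python
# A sketch mapping the console's compute settings onto Model.deploy()
# parameters in the Vertex AI SDK for Python. All IDs are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="PROJECT_ID", location="REGION")

model = aiplatform.Model(model_name="MODEL_ID")
endpoint = aiplatform.Endpoint("ENDPOINT_ID")  # an existing endpoint

model.deploy(
    endpoint=endpoint,
    traffic_percentage=100,              # Traffic split; all models must sum to 100
    machine_type="n1-standard-4",        # Machine type
    min_replica_count=1,                 # Minimum number of compute nodes
    max_replica_count=3,                 # Maximum number of compute nodes (autoscaling)
    accelerator_type="NVIDIA_TESLA_T4",  # Accelerator type, if the model uses GPUs
    accelerator_count=1,                 # accelerators per node, not per deployment
    service_account="SA_EMAIL",          # optional custom service account
)
```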