In this sample code, replace PROJECT_ID with your Google Cloud project ID, and replace LOCATION with the location of your Google Cloud project (for example, `us-central1`).
Gemini and PaLM code samples
----------------------------

Each of the following pairs of code samples includes PaLM code and, next to it, Gemini code that's been migrated from the PaLM code.
### Text generation: basic

The following code samples show the differences between the PaLM API and the Gemini API for creating a text generation model.
**PaLM**

```python
from vertexai.language_models import TextGenerationModel

model = TextGenerationModel.from_pretrained("text-bison@002")

response = model.predict(prompt="The opposite of hot is")
print(response.text)  # 'cold.'
```

**Gemini**

```python
from vertexai.generative_models import GenerativeModel

model = GenerativeModel("gemini-1.0-pro")

# stream=True returns an iterable of partial responses.
responses = model.generate_content("The opposite of hot is", stream=True)

for response in responses:
    print(response.text)
```
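The Gemini sample above iterates over a stream of partial responses and prints each chunk's text. As a plain-Python illustration of that pattern (a toy stand-in, not the Vertex AI SDK), concatenating the streamed chunks reproduces the full answer:

```python
# Toy stand-in for a streaming response: each chunk carries part of the text.
# This mocks the iteration pattern in the Gemini sample above; it does not
# call the Vertex AI SDK.
def fake_stream():
    for chunk in ["The opposite ", "of hot ", "is cold."]:
        yield chunk

full_text = ""
for chunk in fake_stream():
    full_text += chunk

print(full_text)  # The opposite of hot is cold.
```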
### Text generation with parameters

The following code samples show the differences between the PaLM API and the Gemini API for creating a text generation model, with optional [parameters](/vertex-ai/generative-ai/docs/start/quickstarts/api-quickstart#parameter_definitions).
**PaLM**

```python
from vertexai.language_models import TextGenerationModel

model = TextGenerationModel.from_pretrained("text-bison@002")

prompt = """
You are an expert at solving word problems.

Solve the following problem:

I have three houses, each with three cats.
Each cat owns 4 mittens, and a hat. Each mitten was
knit from 7m of yarn, each hat from 4m.
How much yarn was needed to make all the items?

Think about it step by step, and show your work.
"""

response = model.predict(
    prompt=prompt,
    temperature=0.1,
    max_output_tokens=800,
    top_p=1.0,
    top_k=40,
)
print(response.text)
```

**Gemini**

```python
from vertexai.generative_models import GenerativeModel

model = GenerativeModel("gemini-1.0-pro")

prompt = """
You are an expert at solving word problems.

Solve the following problem:

I have three houses, each with three cats.
Each cat owns 4 mittens, and a hat. Each mitten was
knit from 7m of yarn, each hat from 4m.
How much yarn was needed to make all the items?

Think about it step by step, and show your work.
"""

responses = model.generate_content(
    prompt,
    generation_config={
        "temperature": 0.1,
        "max_output_tokens": 800,
        "top_p": 1.0,
        "top_k": 40,
    },
    stream=True,  # return an iterable of partial responses
)

for response in responses:
    print(response.text)
```
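The `top_k` and `top_p` values passed above constrain which candidate tokens the model may sample from. As a rough, SDK-free sketch of what these filters do over a toy probability distribution (an illustration only, not the service's actual implementation):

```python
def filter_top_k_top_p(probs, top_k, top_p):
    """Keep the top_k most probable tokens, then the smallest prefix of
    those whose cumulative probability reaches top_p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append(token)
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

# With top_p=1.0 (as in the samples above), all top_k candidates survive.
print(filter_top_k_top_p({"cold": 0.7, "cool": 0.2, "warm": 0.1}, top_k=2, top_p=1.0))
# ['cold', 'cool']
```

Lower `top_k` or `top_p` shrinks the candidate set, which (together with a low `temperature` such as the 0.1 used above) makes output more deterministic.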
### Chat

The following code samples show the differences between the PaLM API and the Gemini API for creating a chat model.
**PaLM**

```python
from vertexai.language_models import ChatModel

model = ChatModel.from_pretrained("chat-bison@002")

chat = model.start_chat()

print(
    chat.send_message(
        """
Hello! Can you write a 300 word abstract for a research paper I need to write
about the impact of AI on society?
"""
    )
)

print(
    chat.send_message(
        """
Could you give me a catchy title for the paper?
"""
    )
)
```

**Gemini**

```python
from vertexai.generative_models import GenerativeModel

model = GenerativeModel("gemini-1.0-pro")

chat = model.start_chat()

# stream=True returns an iterable of partial responses.
responses = chat.send_message(
    """
Hello! Can you write a 300 word abstract for a research paper I need to write
about the impact of AI on society?
""",
    stream=True,
)

for response in responses:
    print(response.text)

responses = chat.send_message(
    """
Could you give me a catchy title for the paper?
""",
    stream=True,
)

for response in responses:
    print(response.text)
```
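In both APIs the session object returned by `start_chat` carries the conversation state, which is why the second `send_message` call ("a catchy title for the paper") can refer back to the first. A minimal plain-Python stand-in for that history-accumulating behavior (a mock for illustration, not the SDK's `ChatSession`):

```python
class ToyChatSession:
    """Mocks the history bookkeeping a chat session performs; a real SDK
    session sends the accumulated history to the model on every request."""

    def __init__(self):
        self.history = []  # list of (role, text) tuples, oldest first

    def send_message(self, text):
        self.history.append(("user", text))
        reply = f"(model reply to message {len(self.history) // 2 + 1})"
        self.history.append(("model", reply))
        return reply

chat = ToyChatSession()
chat.send_message("Write a 300 word abstract about the impact of AI on society.")
chat.send_message("Could you give me a catchy title for the paper?")
print(len(chat.history))  # 4: two user turns and two model turns
```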
### Code generation

The following code samples show the differences between the PaLM API and the Gemini API for generating a function that predicts whether a year is a leap year.
**Codey**

```python
from vertexai.language_models import CodeGenerationModel

model = CodeGenerationModel.from_pretrained("code-bison@002")

response = model.predict(
    prefix="Write a function that checks if a year is a leap year."
)
print(response.text)
```

**Gemini**

```python
from vertexai.generative_models import GenerativeModel

model = GenerativeModel("gemini-1.0-pro-002")

response = model.generate_content(
    "Write a function that checks if a year is a leap year."
)
print(response.text)
```
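For reference when checking the models' responses, a conventional implementation of the function these prompts request follows the standard Gregorian rules:

```python
def is_leap_year(year: int) -> bool:
    """A year is a leap year if it is divisible by 4, except century
    years, which are leap years only if divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap_year(2000), is_leap_year(1900), is_leap_year(2024))  # True False True
```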
# Migrate from PaLM API to Gemini API on Vertex AI

This guide shows how to migrate Vertex AI SDK for Python code from using the PaLM API to using the Gemini API. You can generate text, multi-turn conversations (chat), and code with Gemini. After you migrate, check your responses because the Gemini output might be different from PaLM output.

Gemini differences from PaLM
----------------------------

The following are some differences between Gemini and PaLM models:

- Their response structures are different. To learn about the Gemini response structure, see the [Gemini API model reference response body](/vertex-ai/generative-ai/docs/model-reference/gemini#response_body).

- Their safety categories are different. To learn about differences between Gemini and PaLM safety settings, see [Key differences between Gemini and other model families](/vertex-ai/generative-ai/docs/multimodal/configure-safety-attributes#key_differences_between_gemini_and_other_model_families).

- Gemini can't perform code completion. If you need to create a code completion application, use the `code-gecko` model. For more information, see [Codey code completion model](/vertex-ai/generative-ai/docs/code/test-code-completion-prompts).

- For code generation, Gemini has a higher recitation block rate.

- The confidence score in Codey code generation models that indicates how confident the model is in its response isn't exposed in Gemini.

Update PaLM code to use Gemini models
-------------------------------------

The methods on the `GenerativeModel` class are mostly the same as the methods on the PaLM classes. For example, use `GenerativeModel.start_chat` to replace the PaLM equivalent, `ChatModel.start_chat`. However, because Google Cloud is always improving and updating Gemini, you might run into some differences. For more information, see the [Python SDK Reference](/python/docs/reference/aiplatform/latest/vertexai).

To migrate from the PaLM API to the Gemini API, the following code modifications are required:

- For all PaLM model classes, you use the `GenerativeModel` class in Gemini.

- To use the `GenerativeModel` class, run the following import statement:

  `from vertexai.generative_models import GenerativeModel`

- To load a Gemini model, use the `GenerativeModel` constructor instead of the `from_pretrained` method. For example, to load the Gemini 2.0 Flash model, use `GenerativeModel("gemini-2.0-flash-001")`.

- To generate text in Gemini, use the `GenerativeModel.generate_content` method instead of the `predict` method that's used on PaLM models. For example:

```python
model = GenerativeModel("gemini-2.0-flash-001")
response = model.generate_content("Write a short poem about the moon")
```

Gemini and PaLM class comparison
--------------------------------

Each PaLM model class is replaced by the `GenerativeModel` class in Gemini. The following table shows the classes used by the PaLM models and their equivalent class in Gemini.

| PaLM model class | Gemini model class |
| --- | --- |
| `TextGenerationModel` | `GenerativeModel` |
| `ChatModel` | `GenerativeModel` |
| `CodeGenerationModel` | `GenerativeModel` |

Common setup instructions
-------------------------

For both the PaLM API and the Gemini API in Vertex AI, the setup process is the same. For more information, see [Introduction to the Vertex AI SDK for Python](/vertex-ai/docs/python-sdk/use-vertex-ai-python-sdk). The following is a short code sample that installs the Vertex AI SDK for Python and initializes it.

```python
# Install the Vertex AI SDK for Python first:
#   pip install google-cloud-aiplatform
import vertexai

vertexai.init(project="PROJECT_ID", location="LOCATION")
```

Migrate prompts to Gemini models
--------------------------------

If you have sets of prompts that you previously used with PaLM 2 models, you can optimize them for use with [Gemini models](/vertex-ai/generative-ai/docs/learn/models) by using the [Vertex AI prompt optimizer (Preview)](/vertex-ai/generative-ai/docs/learn/prompts/prompt-optimizer).

Next steps
----------

- See the [Google models](../learn/models) page for more details on the latest models and features.

Last updated (UTC): 2025-08-28.