# Migrate from PaLM API to Gemini API on Vertex AI

This guide shows how to migrate Vertex AI SDK for Python code from the PaLM API to the Gemini API. With Gemini, you can generate text, multi-turn conversations (chat), and code. After you migrate, check your responses, because Gemini output might differ from PaLM output.
### Text generation: basic

The following code samples show the differences between the PaLM API and the Gemini API for creating a text generation model.
**PaLM**

```python
from vertexai.language_models import TextGenerationModel

model = TextGenerationModel.from_pretrained("text-bison@002")

response = model.predict(prompt="The opposite of hot is")
print(response.text)  # 'cold.'
```

**Gemini**

```python
from vertexai.generative_models import GenerativeModel

model = GenerativeModel("gemini-2.0-flash-001")

responses = model.generate_content("The opposite of hot is", stream=True)
for response in responses:
    print(response.text)
```
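A note on the two calling modes of `generate_content`: by default it returns one complete response, while with `stream=True` it returns an iterator of partial responses, which is the pattern the Gemini samples in this guide use. A minimal sketch of both modes; the API calls require the `google-cloud-aiplatform` package and authenticated Google Cloud credentials, so they sit behind a `__main__` guard, and `collect_stream` is a hypothetical helper, not part of the SDK:

```python
# Sketch: non-streaming vs. streaming calls to the Gemini API.
PROMPT = "The opposite of hot is"


def collect_stream(chunks):
    """Join the text of streamed response chunks into one string."""
    return "".join(chunk.text for chunk in chunks)


if __name__ == "__main__":
    # Requires google-cloud-aiplatform and Google Cloud credentials.
    from vertexai.generative_models import GenerativeModel

    model = GenerativeModel("gemini-2.0-flash-001")

    # Non-streaming: a single response object with the full text.
    response = model.generate_content(PROMPT)
    print(response.text)

    # Streaming: an iterator of partial responses.
    chunks = model.generate_content(PROMPT, stream=True)
    print(collect_stream(chunks))
```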
### Text generation with parameters
The following code samples show the differences between the PaLM API and the Gemini API for creating a text generation model with optional parameters.
**PaLM**

```python
from vertexai.language_models import TextGenerationModel

model = TextGenerationModel.from_pretrained("text-bison@002")

prompt = """
You are an expert at solving word problems.

Solve the following problem:

I have three houses, each with three cats.
Each cat owns 4 mittens, and a hat. Each mitten was
knit from 7m of yarn, each hat from 4m.
How much yarn was needed to make all the items?

Think about it step by step, and show your work.
"""

response = model.predict(
    prompt=prompt,
    temperature=0.1,
    max_output_tokens=800,
    top_p=1.0,
    top_k=40,
)
print(response.text)
```

**Gemini**

```python
from vertexai.generative_models import GenerativeModel

model = GenerativeModel("gemini-2.0-flash-001")

prompt = """
You are an expert at solving word problems.

Solve the following problem:

I have three houses, each with three cats.
Each cat owns 4 mittens, and a hat. Each mitten was
knit from 7m of yarn, each hat from 4m.
How much yarn was needed to make all the items?

Think about it step by step, and show your work.
"""

responses = model.generate_content(
    prompt,
    generation_config={
        "temperature": 0.1,
        "max_output_tokens": 800,
        "top_p": 1.0,
        "top_k": 40,
    },
    stream=True,
)
for response in responses:
    print(response.text)
```
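The `generation_config` argument also accepts a typed `GenerationConfig` object instead of a plain dict. A minimal sketch reusing the parameter values above; the module-level dict and the `__main__` guard are illustrative choices, not part of the original sample:

```python
# Sketch: the generation parameters from the sample above, kept as plain
# data so they can be reused across calls or validated in tests.
GENERATION_CONFIG = {
    "temperature": 0.1,
    "max_output_tokens": 800,
    "top_p": 1.0,
    "top_k": 40,
}

if __name__ == "__main__":
    # Requires google-cloud-aiplatform and Google Cloud credentials.
    from vertexai.generative_models import GenerationConfig, GenerativeModel

    model = GenerativeModel("gemini-2.0-flash-001")

    # The SDK accepts either the raw dict or a typed GenerationConfig object.
    config = GenerationConfig(**GENERATION_CONFIG)
    response = model.generate_content(
        "The opposite of hot is", generation_config=config
    )
    print(response.text)
```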
### Chat
The following code samples show the differences between the PaLM API and the Gemini API for creating a chat model.
**PaLM**

```python
from vertexai.language_models import ChatModel

model = ChatModel.from_pretrained("chat-bison@002")

chat = model.start_chat()

print(chat.send_message("""
Hello! Can you write a 300 word abstract for a research paper I need to write about the impact of AI on society?
"""))

print(chat.send_message("""
Could you give me a catchy title for the paper?
"""))
```

**Gemini**

```python
from vertexai.generative_models import GenerativeModel

model = GenerativeModel("gemini-2.0-flash-001")

chat = model.start_chat()

responses = chat.send_message("""
Hello! Can you write a 300 word abstract for a research paper I need to write about the impact of AI on society?
""", stream=True)
for response in responses:
    print(response.text)

responses = chat.send_message("""
Could you give me a catchy title for the paper?
""", stream=True)
for response in responses:
    print(response.text)
```
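If you were seeding a PaLM chat with prior conversation turns, the Gemini `start_chat` method accepts a `history` of `Content` objects. A sketch, with the sample turns and the plain-tuple staging invented for illustration:

```python
# Sketch: seeding a Gemini chat with prior turns. The turns are kept as
# plain (role, text) tuples at module level; SDK objects are built lazily.
PRIOR_TURNS = [
    ("user", "Hello! I'm writing a paper about the impact of AI on society."),
    ("model", "Great topic. I can help with an abstract and a title."),
]

if __name__ == "__main__":
    # Requires google-cloud-aiplatform and Google Cloud credentials.
    from vertexai.generative_models import Content, GenerativeModel, Part

    history = [
        Content(role=role, parts=[Part.from_text(text)])
        for role, text in PRIOR_TURNS
    ]

    model = GenerativeModel("gemini-2.0-flash-001")
    chat = model.start_chat(history=history)
    response = chat.send_message("Could you give me a catchy title for the paper?")
    print(response.text)
```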
### Code generation
The following code samples show the differences between the PaLM API and the Gemini API for generating a function that predicts whether a year is a leap year.
**Codey**

```python
from vertexai.language_models import CodeGenerationModel

model = CodeGenerationModel.from_pretrained("code-bison@002")

response = model.predict(
    prefix="Write a function that checks if a year is a leap year."
)
print(response.text)
```

**Gemini**

```python
from vertexai.generative_models import GenerativeModel

model = GenerativeModel("gemini-2.0-flash-001")

response = model.generate_content(
    "Write a function that checks if a year is a leap year."
)
print(response.text)
```
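One practical difference when migrating code generation: Codey's `predict` typically returned raw code, while Gemini replies are often wrapped in Markdown code fences. A small helper to strip such fences; the fence format is a common pattern in model output, not a guaranteed contract:

```python
# Sketch: extract the body of a Markdown code fence from a model reply,
# or return the text unchanged if it isn't fenced.
def extract_code(reply: str) -> str:
    lines = reply.strip().splitlines()
    if (
        len(lines) >= 2
        and lines[0].startswith("```")
        and lines[-1].startswith("```")
    ):
        return "\n".join(lines[1:-1])
    return reply


if __name__ == "__main__":
    # Requires google-cloud-aiplatform and Google Cloud credentials.
    from vertexai.generative_models import GenerativeModel

    model = GenerativeModel("gemini-2.0-flash-001")
    response = model.generate_content(
        "Write a function that checks if a year is a leap year."
    )
    print(extract_code(response.text))
```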
Gemini differences from PaLM
----------------------------

The following are some differences between Gemini and PaLM models:

- Their response structures are different. To learn about the Gemini response structure, see the [Gemini API model reference response body](/vertex-ai/generative-ai/docs/model-reference/gemini#response_body).

- Their safety categories are different. To learn about differences between Gemini and PaLM safety settings, see [Key differences between Gemini and other model families](/vertex-ai/generative-ai/docs/multimodal/configure-safety-attributes#key_differences_between_gemini_and_other_model_families).

- Gemini can't perform code completion. If you need to create a code completion application, use the `code-gecko` model. For more information, see [Codey code completion model](/vertex-ai/generative-ai/docs/code/test-code-completion-prompts).

- For code generation, Gemini has a higher recitation block rate.

- The confidence score in Codey code generation models that indicates how confident the model is in its response isn't exposed in Gemini.

Update PaLM code to use Gemini models
-------------------------------------

The methods on the `GenerativeModel` class are mostly the same as the methods on the PaLM classes. For example, use `GenerativeModel.start_chat` to replace the PaLM equivalent, `ChatModel.start_chat`. However, because Google Cloud is always improving and updating Gemini, you might run into some differences. For more information, see the [Python SDK Reference](/python/docs/reference/aiplatform/latest/vertexai).

To migrate from the PaLM API to the Gemini API, the following code modifications are required:

- For all PaLM model classes, you use the `GenerativeModel` class in Gemini.

- To use the `GenerativeModel` class, run the following import statement:

  `from vertexai.generative_models import GenerativeModel`

- To load a Gemini model, use the `GenerativeModel` constructor instead of the `from_pretrained` method. For example, to load the Gemini 2.0 Flash model, use `GenerativeModel("gemini-2.0-flash-001")`.

- To generate text in Gemini, use the `GenerativeModel.generate_content` method instead of the `predict` method that's used on PaLM models. For example:

  ```python
  model = GenerativeModel("gemini-2.0-flash-001")
  response = model.generate_content("Write a short poem about the moon")
  ```

Gemini and PaLM class comparison
--------------------------------

Each PaLM model class is replaced by the `GenerativeModel` class in Gemini. The following table shows the classes used by the PaLM models in this guide and their equivalent class in Gemini.

| PaLM model | PaLM class | Gemini class |
| --- | --- | --- |
| `text-bison` | `TextGenerationModel` | `GenerativeModel` |
| `chat-bison` | `ChatModel` | `GenerativeModel` |
| `code-bison` | `CodeGenerationModel` | `GenerativeModel` |

Common setup instructions
-------------------------

For both the PaLM API and the Gemini API in Vertex AI, the setup process is the same. For more information, see [Introduction to the Vertex AI SDK for Python](/vertex-ai/docs/python-sdk/use-vertex-ai-python-sdk). The following is a short code sample that installs the Vertex AI SDK for Python.

```python
# pip install google-cloud-aiplatform
import vertexai

vertexai.init(project="PROJECT_ID", location="LOCATION")
```

In this sample code, replace PROJECT_ID with your Google Cloud project ID, and replace LOCATION with the location of your Google Cloud project (for example, `us-central1`).

Migrate prompts to Gemini models
--------------------------------

If you have sets of prompts that you previously used with PaLM 2 models, you can optimize them for use with [Gemini models](/vertex-ai/generative-ai/docs/learn/models) by using the [Vertex AI prompt optimizer (Preview)](/vertex-ai/generative-ai/docs/learn/prompts/prompt-optimizer).

Next steps
----------

- See the [Google models](../learn/models) page for more details on the latest models and features.

Last updated 2025-09-02 UTC.