Starting April 29, 2025, Gemini 1.5 Pro and Gemini 1.5 Flash models are not available in projects that have no prior usage of these models, including new projects. For details, see Model versions and lifecycle.
# Interface GenerationConfig (1.9.0)

Configuration options for model generation and outputs.

Package
-------

[@google-cloud/vertexai](../overview.html)

Properties
----------

### candidateCount

    candidateCount?: number;

Optional. Number of candidates to generate.

### frequencyPenalty

    frequencyPenalty?: number;

Optional. Positive values penalize tokens that repeatedly appear in the generated text, decreasing the probability of repeating content. The maximum value for frequencyPenalty is up to, but not including, 2.0; the minimum value is -2.0. Supported by gemini-1.5-pro and gemini-1.5-flash only.
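As an illustration of the documented bounds (the helper name `isValidFrequencyPenalty` is hypothetical, not part of the SDK; it only encodes the range stated above):

```typescript
// Hypothetical helper: frequencyPenalty must be >= -2.0 (inclusive)
// and < 2.0 (exclusive), per the documented range.
function isValidFrequencyPenalty(value: number): boolean {
  return value >= -2.0 && value < 2.0;
}

console.log(isValidFrequencyPenalty(1.5));  // within range
console.log(isValidFrequencyPenalty(2.0));  // the maximum is exclusive
console.log(isValidFrequencyPenalty(-2.0)); // the minimum is inclusive
```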
### maxOutputTokens

    maxOutputTokens?: number;
Optional. The maximum number of output tokens to generate per message.
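These properties are combined into a single config object. A minimal sketch using a locally declared interface that mirrors the fields in this reference (in application code you would import the `GenerationConfig` type from `@google-cloud/vertexai` instead):

```typescript
// Local declaration mirroring the documented fields; in real code,
// import GenerationConfig from @google-cloud/vertexai instead.
interface GenerationConfig {
  frequencyPenalty?: number;
  maxOutputTokens?: number;
  responseMimeType?: string;
}

const config: GenerationConfig = {
  maxOutputTokens: 256,  // cap the output length per message
  frequencyPenalty: 0.5, // mildly discourage repetition
};

console.log(config.maxOutputTokens);
```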
### responseMimeType

    responseMimeType?: string;
Optional. Output response MIME type of the generated candidate text. Supported MIME types:

- `text/plain`: (default) Text output.
- `application/json`: JSON response in the candidates.

The model needs to be prompted to output the appropriate response type; otherwise the behavior is undefined.
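For `application/json`, the config declares the expected MIME type, but the prompt itself must still ask for JSON. A sketch (the candidate text below is a stand-in for what a suitably prompted model might return, not an actual API response):

```typescript
// Request JSON output from the model.
const jsonConfig = {
  responseMimeType: "application/json",
  maxOutputTokens: 512,
};

// Stand-in for candidate text returned by a model prompted to emit JSON.
const candidateText = '{"city": "Paris", "population": 2100000}';

// With application/json, the candidate text should parse directly.
const parsed = JSON.parse(candidateText);
console.log(parsed.city);
```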
### responseSchema

    responseSchema?: ResponseSchema;

Optional. The schema that generated candidate text must follow. For more information, see <https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/control-generated-output>. If set, a compatible responseMimeType must also be set.

### stopSequences

    stopSequences?: string[];

Optional. Stop sequences.

### temperature

    temperature?: number;

Optional. Controls the randomness of predictions.

### topK

    topK?: number;

Optional. If specified, topK sampling will be used.

### topP

    topP?: number;

Optional. If specified, nucleus sampling will be used.

Last updated 2025-08-28 UTC.
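As background for the topP property, nucleus sampling keeps the smallest set of highest-probability tokens whose cumulative probability reaches topP and samples only from that set. A simplified illustration of the idea (not the service's implementation):

```typescript
// Illustration only: filter a token distribution down to the "nucleus"
// whose cumulative probability first reaches topP.
function nucleusFilter(probs: Map<string, number>, topP: number): string[] {
  const sorted = Array.from(probs.entries()).sort((a, b) => b[1] - a[1]);
  const kept: string[] = [];
  let cumulative = 0;
  for (const [token, p] of sorted) {
    kept.push(token);
    cumulative += p;
    if (cumulative >= topP) break; // stop once the nucleus covers topP mass
  }
  return kept;
}

const probs = new Map([["the", 0.5], ["a", 0.3], ["an", 0.15], ["its", 0.05]]);
console.log(nucleusFilter(probs, 0.75)); // keeps "the" and "a"
```

Lower topP values restrict sampling to fewer, more likely tokens; higher values admit more of the distribution's tail.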