Starting April 29, 2025, Gemini 1.5 Pro and Gemini 1.5 Flash models are not available in projects that have no prior usage of these models, including new projects. For details, see Model versions and lifecycle.
# LoRA and QLoRA recommendations for LLMs

Last updated 2025-09-02 UTC.

This page gives you configuration recommendations for tuning large language models (LLMs) on Vertex AI by using [Low-Rank Adaptation of Large Language Models (LoRA)](https://arxiv.org/abs/2106.09685) and its more memory-efficient variant, [QLoRA](https://arxiv.org/abs/2305.14314).

Tuning recommendations
----------------------

The following table summarizes our recommendations for tuning LLMs by using LoRA or QLoRA:
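As background for these recommendations, the core LoRA idea can be sketched in a few lines of NumPy: the pretrained weight matrix stays frozen, and only a low-rank update is trained. This is a minimal illustration of the math, not Vertex AI tuning code; the dimensions, rank, and `alpha` below are arbitrary example values, not recommended settings.

```python
import numpy as np

# LoRA replaces a full fine-tune of W (d_out x d_in) with two small
# trainable matrices A (r x d_in) and B (d_out x r), r << min(d_out, d_in):
#   W_eff = W + (alpha / r) * (B @ A)
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 8, 8, 2, 16  # illustrative sizes, not tuned values

W = rng.standard_normal((d_out, d_in))  # frozen pretrained weights
A = rng.standard_normal((r, d_in))      # trainable, random init
B = np.zeros((d_out, r))                # trainable, zero init

def lora_forward(x):
    # Because B starts at zero, the adapted model initially
    # matches the frozen base model exactly.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W @ x)  # B == 0, so no change yet

# Trainable parameter count: r * (d_in + d_out) instead of d_in * d_out.
print(r * (d_in + d_out), "trainable values vs", d_in * d_out, "for full tuning")
```

Because only `A` and `B` are updated, memory and compute scale with the rank `r` rather than the full weight size; QLoRA reduces memory further by keeping the frozen weights in 4-bit quantized form.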