# Pricing for Tabular Workflows
When you train a model using a Tabular Workflow, you are charged based on the
cost of the infrastructure and the dependent services. When you make inferences
with this model, you are charged based on the cost of the infrastructure.
The cost of the infrastructure depends on the following factors:
- The number of machines you use. You can set the associated parameters during model training, batch inference, or online inference (see the sketch after this list).
- The type of machines you use. You can set this parameter during model training, batch inference, or online inference.
- The length of time the machines are in use.
  - If you train a model or make batch inferences, this is a measure of the total processing time of the operation.
  - If you make online inferences, this is a measure of the time that your model is deployed to an endpoint.
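To make these parameters concrete, the following is a minimal sketch using the Vertex AI SDK for Python, showing where the machine type and machine count are set for online and batch inference. The project, model resource name, and Cloud Storage URIs are hypothetical placeholders, not values from this page.

```python
from google.cloud import aiplatform

# Hypothetical project and model; replace with your own values.
aiplatform.init(project="my-project", location="us-central1")
model = aiplatform.Model("projects/my-project/locations/us-central1/models/123")

# Online inference: you are billed for as long as the model stays
# deployed, based on the machine type and the number of replicas.
endpoint = model.deploy(
    machine_type="n1-standard-4",  # the type of machines you use
    min_replica_count=1,           # the number of machines (lower bound)
    max_replica_count=2,           # the number of machines (upper bound)
)

# Batch inference: you are billed for the total processing time of
# the job on the machines you request.
batch_job = model.batch_predict(
    job_display_name="tabular-batch-inference",
    gcs_source="gs://my-bucket/input.csv",            # hypothetical input
    gcs_destination_prefix="gs://my-bucket/output/",  # hypothetical output
    machine_type="n1-standard-4",
    starting_replica_count=1,
    max_replica_count=4,
)
```

Because online inference is billed for the full time the endpoint stays deployed, undeploying the model when it is no longer needed stops the infrastructure charges.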
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-29 UTC."],[],[],null,["# Pricing for Tabular Workflows\n\nWhen you train a model using a Tabular Workflow, you are charged based on the cost of the infrastructure and the dependent services. When you make inferences with this model, you are charged based on the cost of the infrastructure.\n\n\u003cbr /\u003e\n\nThe cost of the infrastructure depends on the following factors:\n\n- The number of machines you use. You can set associated parameters during model training, batch inference, or online inference.\n- The type of machines you use. You can set this parameter during model training, batch inference, or online inference.\n- The length of time the machines are in use.\n - If you train a model or make batch inferences, this is a measure of the total processing time of the operation.\n - If you make online inferences, this is a measure of the time that your model is deployed to an endpoint.\n\nTabular Workflows runs multiple dependent services in your project on your\nbehalf: [Dataflow](https://cloud.google.com/dataflow), [BigQuery](https://cloud.google.com/bigquery),\n[Cloud Storage](https://cloud.google.com/storage),\n[Vertex AI Pipelines](/vertex-ai/docs/pipelines/introduction),\n[Vertex AI Training](https://cloud.google.com/vertex-ai#section-9). These services charge you\ndirectly.\n\nExamples of training cost calculation\n-------------------------------------\n\n**Example 1: 110MB dataset in CSV format, trained for one hour with default hardware configuration.**\n\nThe cost breakdown for the default workflow with Architecture Search and\nTraining is as follows:\n\nOptionally, you can enable model distillation to reduce the resulting model size.\nThe cost breakdown is as follows:\n\n**Example 2: 1.84TB dataset in BigQuery, trained for 20 hours with hardware override.**\n\nThe hardware configuration for this example is as follows:\n\nThe cost breakdown for the default workflow with Architecture Search and\nTraining is as follows:"]]