Last updated (UTC): 2025-09-02.

# Tabular Workflow for Wide & Deep

| **Preview**
|
| This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section
| of the [Service Specific Terms](/terms/service-terms#1).
|
| Pre-GA features are available "as is" and might have limited support.
|
| For more information, see the
| [launch stage descriptions](/products#product-launch-stages).

This document provides an overview of the Tabular Workflow for Wide & Deep pipelines and components. To train a model with Wide & Deep, see [Train a model with Wide & Deep](/vertex-ai/docs/tabular-data/tabular-workflows/wide-and-deep-train).

[Wide & Deep](https://arxiv.org/abs/1606.07792) jointly trains wide linear models and deep neural networks, combining the benefits of memorization and generalization. In online experiments on the Google Play store, Wide & Deep significantly increased app acquisitions compared with wide-only and deep-only models.

Benefits
--------

- Integrated with Vertex AI. The trained model is a Vertex AI model. You can run batch inferences or deploy the model for online inferences right away.

Wide & Deep on Vertex AI Pipelines
----------------------------------

Tabular Workflow for Wide & Deep is a managed instance of Vertex AI Pipelines.

[Vertex AI Pipelines](/vertex-ai/docs/pipelines/introduction) is a serverless service that runs Kubeflow pipelines. You can use pipelines to automate and monitor your machine learning and data preparation tasks.
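The workflow-as-a-graph idea can be sketched in plain Python. The step names below mirror the CustomJob pipeline described later in this document, but the `graphlib` helper is purely illustrative; it is not how the managed workflow is defined or run:

```python
# Illustrative only: model the pipeline's step dependencies as a directed
# acyclic graph and derive a valid execution order. The real workflow is
# built from Kubeflow pipeline components, not this stdlib helper.
from graphlib import TopologicalSorter

# Each step maps to the set of steps whose outputs it consumes.
steps = {
    "feature-transform-engine": set(),
    "split-materialized-data": {"feature-transform-engine"},
    "wide-and-deep-trainer": {"split-materialized-data"},
    "automl-tabular-infra-validator": {"wide-and-deep-trainer"},
    "model-upload": {"automl-tabular-infra-validator"},
}

# static_order() yields each step only after all of its dependencies.
order = list(TopologicalSorter(steps).static_order())
print(order)
```

Because each step here depends only on the previous one, the derived order is the linear chain from feature engineering through model upload; a real pipeline graph can also fan out and rejoin.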
Each step in a pipeline performs part of the pipeline's workflow. For example, a pipeline can include steps to split data, transform data types, and train a model. Because steps are instances of pipeline components, steps have inputs, outputs, and a container image. Step inputs can be set from the pipeline's inputs, or they can depend on the output of other steps within the pipeline. These dependencies define the pipeline's workflow as a directed acyclic graph.

Two versions of the Tabular Workflow for Wide & Deep are available:

- [HyperparameterTuningJob](#hyperparametertuningjob) searches for the best set of hyperparameter values to use for model training.
- [CustomJob](#customjob) lets you specify the hyperparameter values to use for model training. If you know exactly which hyperparameter values you need, specify them instead of searching for them to save on training resources.

Overview of Wide & Deep CustomJob pipeline and components
---------------------------------------------------------

The Wide & Deep CustomJob pipeline is illustrated by the following diagram:

The pipeline components are:

1. **feature-transform-engine**: Performs feature engineering. See [Feature Transform Engine](/vertex-ai/docs/tabular-data/tabular-workflows/feature-engineering) for details.
2. **split-materialized-data**: Splits the materialized data into a training set, an evaluation set, and a test set.

   Input:
   - Materialized data `materialized_data`.

   Output:
   - Materialized training split `materialized_train_split`.
   - Materialized evaluation split `materialized_eval_split`.
   - Materialized test split `materialized_test_split`.
3. **wide-and-deep-trainer**: Performs model training.

   Input:
   - Instance baseline `instance_baseline`.
   - Training schema `training_schema`.
   - Transform output `transform_output`.
   - Materialized training split `materialized_train_split`.
   - Materialized evaluation split `materialized_eval_split`.
   - Materialized test split `materialized_test_split`.

   Output:
   - Final model.
4. **automl-tabular-infra-validator**: Validates the trained model by sending a prediction request and checking whether it completes successfully.
5. **model-upload**: Uploads the model from your Cloud Storage bucket to Vertex AI as a Vertex AI model.
6. **condition-run-evaluation-2**: **Optional**. Uses the test set to calculate evaluation metrics. Runs only when `run_evaluation` is set to `true`.

Overview of Wide & Deep HyperparameterTuningJob pipeline and components
-----------------------------------------------------------------------

The Wide & Deep HyperparameterTuningJob pipeline is illustrated by the following diagram:

The pipeline components are:

1. **feature-transform-engine**: Performs feature engineering. See [Feature Transform Engine](/vertex-ai/docs/tabular-data/tabular-workflows/feature-engineering) for details.
2. **split-materialized-data**: Splits the materialized data into a training set, an evaluation set, and a test set.

   Input:
   - Materialized data `materialized_data`.

   Output:
   - Materialized training split `materialized_train_split`.
   - Materialized evaluation split `materialized_eval_split`.
   - Materialized test split `materialized_test_split`.
3. **get-wide-and-deep-study-spec-parameters**: Generates the study spec based on the configuration of the training pipeline.
   If you provide values for `study_spec_parameters_override`, the component uses them to override the study spec values.

   Input:
   - Optional override of study spec parameters `study_spec_parameters_override`.

   Output:
   - Final list of hyperparameters and their ranges for the hyperparameter tuning job.
4. **wide-and-deep-hyperparameter-tuning-job**: Performs one or more trials of hyperparameter tuning.

   Input:
   - Instance baseline `instance_baseline`.
   - Training schema `training_schema`.
   - Transform output `transform_output`.
   - Materialized training split `materialized_train_split`.
   - Materialized evaluation split `materialized_eval_split`.
   - Materialized test split `materialized_test_split`.
   - List of hyperparameters and their ranges for the hyperparameter tuning job.
5. **get-best-hyperparameter-tuning-job-trial**: Selects the model from the best hyperparameter tuning job trial of the previous step.

   Output:
   - Final model.
6. **automl-tabular-infra-validator**: Validates the trained model by sending a prediction request and checking whether it completes successfully.
7. **model-upload**: Uploads the model from your Cloud Storage bucket to Vertex AI as a Vertex AI model.
8. **condition-run-evaluation-2**: **Optional**. Uses the test set to calculate evaluation metrics. Runs only when `run_evaluation` is set to `true`.

What's next
-----------

- [Train a model with Wide & Deep](/vertex-ai/docs/tabular-data/tabular-workflows/wide-and-deep-train).
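For reference, a `study_spec_parameters_override` value is commonly expressed as a list of parameter specs in the Vertex AI Vizier `StudySpec` JSON form. The sketch below is an illustration only: the parameter IDs (`learning_rate`, `batch_size`) and the exact field spellings are assumptions to check against the workflow's reference documentation, not a definitive schema:

```python
# Hypothetical example of narrowing the hyperparameter search space.
# Field names follow the Vertex AI Vizier StudySpec.ParameterSpec JSON
# form; parameter IDs here are illustrative and must match the tunable
# parameters that the workflow actually exposes.
study_spec_parameters_override = [
    {
        "parameter_id": "learning_rate",
        "double_value_spec": {"min_value": 1e-4, "max_value": 1e-1},
        "scale_type": "UNIT_LOG_SCALE",
    },
    {
        "parameter_id": "batch_size",
        "discrete_value_spec": {"values": [64, 128, 256]},
        "scale_type": "SCALE_TYPE_UNSPECIFIED",
    },
]

# Every entry names the parameter whose search range it overrides.
assert all("parameter_id" in spec for spec in study_spec_parameters_override)
```

The `get-wide-and-deep-study-spec-parameters` step merges overrides like these into the default study spec before the tuning job runs.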