# Introduction to Vertex AI Experiments

*Last updated (UTC): 2025-09-02.*

To see an example of getting started with Vertex AI Experiments, run the "Get started with Vertex AI Experiments" notebook in one of the following environments:

[Open in Colab](https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/experiments/get_started_with_vertex_experiments.ipynb) | [Open in Colab Enterprise](https://console.cloud.google.com/vertex-ai/colab/import/https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fexperiments%2Fget_started_with_vertex_experiments.ipynb) | [Open in Vertex AI Workbench](https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fexperiments%2Fget_started_with_vertex_experiments.ipynb) | [View on GitHub](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/experiments/get_started_with_vertex_experiments.ipynb)

Vertex AI Experiments is a tool that helps you track and analyze different model architectures, hyperparameters, and training environments, letting you track the steps, inputs, and outputs of an experiment run. Vertex AI Experiments can also evaluate how your model performed in aggregate, against test datasets, and during the training run.
You can then use this information to select the best model for your particular use case.

Experiment runs don't incur additional charges. You're only charged for resources that you use during your experiment, as described in [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing).

Track steps, inputs, and outputs
--------------------------------

Vertex AI Experiments lets you track:

- steps of an experiment run, for example, preprocessing and training,
- inputs, for example, algorithm, parameters, and datasets,
- outputs of those steps, for example, models, checkpoints, and metrics.

You can then figure out what worked and what didn't, and identify further avenues for experimentation.

For user journey examples, check out:

- [Model training](/vertex-ai/docs/experiments/user-journey/uj-model-training)
- [Compare models](/vertex-ai/docs/experiments/user-journey/uj-compare-models)

Analyze model performance
-------------------------

Vertex AI Experiments lets you track and evaluate how the model performed in aggregate, against test datasets, and during the training run. This ability helps you understand the performance characteristics of your models: how well a particular model works overall, where it fails, and where it excels.

For user journey examples, check out:

- [Compare pipeline runs](/vertex-ai/docs/experiments/user-journey/uj-compare-pipeline-runs)
- [Compare models](/vertex-ai/docs/experiments/user-journey/uj-compare-models)

Compare model performance
-------------------------

Vertex AI Experiments lets you group and compare multiple models across experiment runs. Each model has its own specified parameters, modeling techniques, architectures, and inputs.
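For example, once the parameters and summary metrics of each run are collected (the Vertex AI SDK for Python can return them in tabular form, e.g. via `aiplatform.get_experiment_df()`), picking the best run is a simple comparison. The following is a minimal pure-Python sketch with hypothetical run data, not a call into the SDK:

```python
# Hypothetical run records, shaped like what an experiment might report:
# each run carries its logged parameters and summary metrics.
runs = [
    {"run_name": "run-1", "params": {"learning_rate": 0.01, "epochs": 10},
     "metrics": {"test_accuracy": 0.87}},
    {"run_name": "run-2", "params": {"learning_rate": 0.001, "epochs": 20},
     "metrics": {"test_accuracy": 0.91}},
    {"run_name": "run-3", "params": {"learning_rate": 0.1, "epochs": 10},
     "metrics": {"test_accuracy": 0.78}},
]

def best_run(runs, metric):
    """Return the run with the highest value for the given summary metric."""
    return max(runs, key=lambda r: r["metrics"][metric])

winner = best_run(runs, "test_accuracy")
print(winner["run_name"], winner["params"])  # the run to promote, and how it was configured
```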
This approach helps you select the best model.

For user journey examples, check out:

- [Compare pipeline runs](/vertex-ai/docs/experiments/user-journey/uj-compare-pipeline-runs)
- [Compare models](/vertex-ai/docs/experiments/user-journey/uj-compare-models)

Search experiments
------------------

The Google Cloud console provides a centralized view of experiments, a cross-sectional view of the experiment runs, and the details for each run. The Vertex AI SDK for Python provides APIs to consume experiments, experiment runs, experiment run parameters, metrics, and artifacts.

Vertex AI Experiments, along with [Vertex ML Metadata](/vertex-ai/docs/ml-metadata/introduction), provides a way to find the artifacts tracked in an experiment. This lets you quickly view an artifact's lineage and the artifacts consumed and produced by steps in a run.

Scope of support
----------------

Vertex AI Experiments supports developing models with Vertex AI custom training, Vertex AI Workbench notebooks, Notebooks, and most Python ML frameworks. For some ML frameworks, such as TensorFlow, Vertex AI Experiments provides deep integrations into the framework that make the user experience largely automatic. For other ML frameworks, Vertex AI Experiments provides a framework-neutral Vertex AI SDK for Python that you can use. (See [Prebuilt containers](/vertex-ai/docs/training/pre-built-containers) for TensorFlow, scikit-learn, PyTorch, and XGBoost.)

Data models and concepts
------------------------

Vertex AI Experiments is a context in [Vertex ML Metadata](/vertex-ai/docs/ml-metadata/introduction) where an experiment can contain *n* experiment runs in addition to *n* pipeline runs.
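That containment can be pictured as plain data: an experiment holds named runs, and each run holds its own parameters and metrics. The sketch below is a local stand-in only; the real resources live in Vertex ML Metadata and are created through the Vertex AI SDK for Python (for example, `aiplatform.init(experiment=...)` followed by `aiplatform.start_run(...)`), and the class names here merely echo the SDK's:

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentRun:
    # One trackable iteration: its inputs and outputs.
    name: str
    params: dict = field(default_factory=dict)           # e.g. {"learning_rate": 0.01}
    summary_metrics: dict = field(default_factory=dict)  # one value per metric key
    time_series: dict = field(default_factory=dict)      # metric key -> per-step values

@dataclass
class Experiment:
    # The grouping context: n experiment runs (pipeline runs omitted here).
    name: str
    runs: dict = field(default_factory=dict)

    def start_run(self, run_name: str) -> ExperimentRun:
        run = ExperimentRun(name=run_name)
        self.runs[run_name] = run
        return run

exp = Experiment("my-experiment")
run = exp.start_run("run-1")
run.params["learning_rate"] = 0.01
run.summary_metrics["test_accuracy"] = 0.91
run.time_series["loss"] = [0.9, 0.5, 0.3]
```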
An experiment run consists of parameters, summary metrics, time series metrics, and [`PipelineJob`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.PipelineJob), [`Artifact`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.Artifact), and [`Execution`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.Execution) Vertex AI resources. [Vertex AI TensorBoard](/vertex-ai/docs/experiments/tensorboard-introduction), a managed version of open source TensorBoard, is used for time series metrics storage. Executions and artifacts of a pipeline run are viewable in the [Google Cloud console](/vertex-ai/docs/pipelines/visualize-pipeline#visualize_pipeline_runs_using).

Vertex AI Experiments terms
---------------------------

### Experiment, experiment run, and pipeline run

##### **experiment**

- An experiment is a context that can contain a set of *n* experiment runs in addition to pipeline runs, where a user can investigate, as a group, different configurations such as input artifacts or hyperparameters.

See [Create an experiment](/vertex-ai/docs/experiments/create-experiment).

##### **experiment run**

- A specific, trackable execution within a Vertex AI experiment, which logs inputs (like algorithm, parameters, and datasets) and outputs (like models, checkpoints, and metrics) to monitor and compare ML development iterations.

See [Create and manage experiment runs](/vertex-ai/docs/experiments/create-manage-exp-run).

##### **pipeline run**

- One or more Vertex AI `PipelineJob` resources can be associated with an experiment, where each `PipelineJob` is represented as a single run. In this context, the parameters of the run are inferred from the parameters of the `PipelineJob`.
The metrics are inferred from the `system.Metric` artifacts produced by that `PipelineJob`, and the artifacts of the run are inferred from the artifacts produced by that `PipelineJob`.

One or more Vertex AI [`PipelineJob`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.PipelineJob) resources can also be associated with an [`ExperimentRun`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.ExperimentRun) resource. In this context, the parameters, metrics, and artifacts are not inferred.

See [Associate a pipeline with an experiment](/vertex-ai/docs/experiments/add-pipelinerun-experiment).

### Parameters and metrics

##### **parameters**

- Parameters are keyed input values that configure a run, for example, the algorithm and its hyperparameters.

See [Log parameters](/vertex-ai/docs/experiments/log-data#parameters).

##### **summary metrics**

- Summary metrics are a single value for each metric key in an experiment run. For example, the test accuracy of an experiment is the accuracy calculated against a test dataset at the end of training, which can be captured as a single-value summary metric.

See [Log summary metrics](/vertex-ai/docs/experiments/log-data#summary-metrics).

##### **time series metrics**

- Time series metrics are longitudinal metric values, where each value represents a step in the training routine portion of a run. Time series metrics are stored in Vertex AI TensorBoard; Vertex AI Experiments stores a reference to the Vertex AI TensorBoard resource.

See [Log time series metrics](/vertex-ai/docs/experiments/log-data#time-series-metrics).

### Resource types

##### **pipeline job**

- A pipeline job or pipeline run corresponds to the `PipelineJob` resource in the Vertex AI API. It's an execution instance of your ML pipeline definition, which is defined as a set of ML tasks interconnected by input-output dependencies.

##### **artifact**

- An artifact is a discrete entity or piece of data produced and consumed by a machine learning workflow.
Examples of artifacts include datasets, models, input files, and training logs.

Vertex AI Experiments lets you use a schema to define the type of artifact. For example, supported schema types include `system.Dataset`, `system.Model`, and `system.Artifact`. For more information, see [System schemas](/vertex-ai/docs/ml-metadata/system-schemas).

Notebook tutorial
-----------------

- [Get started with Vertex AI Experiments](/vertex-ai/docs/experiments/user-journey/uj-get-started-vertex-ai-experiments)

What's next
-----------

- [Set up to get started with Vertex AI Experiments](/vertex-ai/docs/experiments/setup)
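For orientation before the notebook tutorial, the run-logging lifecycle follows a start, log, end pattern. With the Vertex AI SDK for Python the corresponding calls are `aiplatform.start_run(...)`, `aiplatform.log_params(...)`, `aiplatform.log_metrics(...)`, `aiplatform.log_time_series_metrics(...)`, and `aiplatform.end_run()`. The sketch below imitates that flow locally, with no Google Cloud calls, so the shape of the logged data is visible:

```python
class RunLogger:
    """Local imitation of the start_run / log_* / end_run flow (not the SDK)."""

    def __init__(self, run_name):
        self.run_name = run_name
        self.params, self.summary_metrics, self.time_series = {}, {}, {}
        self.active = True

    def log_params(self, params):
        # Inputs: algorithm settings, hyperparameters.
        self.params.update(params)

    def log_metrics(self, metrics):
        # Summary metrics: a single value per metric key.
        self.summary_metrics.update(metrics)

    def log_time_series_metrics(self, metrics, step):
        # Longitudinal values: one entry per training step.
        for key, value in metrics.items():
            self.time_series.setdefault(key, []).append((step, value))

    def end_run(self):
        self.active = False

run = RunLogger("run-1")
run.log_params({"learning_rate": 0.01, "epochs": 3})
for step in range(3):
    run.log_time_series_metrics({"loss": 1.0 / (step + 1)}, step=step)
run.log_metrics({"final_loss": 1.0 / 3})
run.end_run()
```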