[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-02 (世界標準時間)。"],[],[],null,["# Interpret prediction results from video action recognition models\n\nAfter requesting a prediction, Vertex AI returns results based on your\nmodel's objective. Predictions from an action recognition model return moments\nof actions, according to your own defined labels. The model assigns a confidence\nscore to each prediction, which communicates how confident your model accurately\nidentified an action. The higher the number - the higher the model's confidence\nis of the correctness of the prediction.\n\n#### Example batch prediction output\n\nThe following sample is the predicted result for a model that identifies\nthe \"swing\" and \"jump\" actions in a video. Each result includes a label\n(\"swing\" or \"jump\") for the identified action, a time segment with the same\nstart and end time that specifies the moment of the action, and a\nconfidence score.\n\n\n| **Note**: The following JSON Lines example includes line breaks for\n| readability. In your JSON Lines files, line breaks are included only after each\n| each JSON object.\n\n\u003cbr /\u003e\n\n\n```\n{\n \"instance\": {\n \"content\": \"gs://bucket/video.mp4\",\n \"mimeType\": \"video/mp4\",\n \"timeSegmentStart\": \"1s\",\n \"timeSegmentEnd\": \"5s\"\n }\n \"prediction\": [{\n \"id\": \"1\",\n \"displayName\": \"swing\",\n \"timeSegmentStart\": \"1.2s\",\n \"timeSegmentEnd\": \"1.2s\",\n \"confidence\": 0.7\n }, {\n \"id\": \"2\",\n \"displayName\": \"jump\",\n \"timeSegmentStart\": \"3.4s\",\n \"timeSegmentEnd\": \"3.4s\",\n \"confidence\": 0.5\n }]\n}\n```\n\n\u003cbr /\u003e"]]