[[["์ดํดํ๊ธฐ ์ฌ์","easyToUnderstand","thumb-up"],["๋ฌธ์ ๊ฐ ํด๊ฒฐ๋จ","solvedMyProblem","thumb-up"],["๊ธฐํ","otherUp","thumb-up"]],[["์ดํดํ๊ธฐ ์ด๋ ค์","hardToUnderstand","thumb-down"],["์๋ชป๋ ์ ๋ณด ๋๋ ์ํ ์ฝ๋","incorrectInformationOrSampleCode","thumb-down"],["ํ์ํ ์ ๋ณด/์ํ์ด ์์","missingTheInformationSamplesINeed","thumb-down"],["๋ฒ์ญ ๋ฌธ์ ","translationIssue","thumb-down"],["๊ธฐํ","otherDown","thumb-down"]],["์ต์ข ์ ๋ฐ์ดํธ: 2025-08-28(UTC)"],[],[],null,["# Paginate a BigQuery result set\n\nIf you are trying to retrieve the results of a query to a BigQuery\ndataset that is larger than the Workflows memory limit, you can\nuse a page token to paginate through the results. The page token represents the\nposition in the result set, and is returned when additional results are\navailable. This allows you to loop through a page of results at a time.\n\nBigQuery hosts a number of [public datasets](/bigquery/public-data)\nthat are available to the general public to query. In the following example, you\nquery the\n[USA Name Data public dataset](https://console.cloud.google.com/bigquery?p=bigquery-public-data&d=usa_names&page=dataset)\nto determine the most common names in the US between 1910 and 2013.\n\n### YAML\n\n # Use a page token to loop through a page of results at a time when\n # querying a BigQuery dataset larger than the Workflows memory limit\n # This workflow queries a public dataset to determine the most common\n # names in the US between 1910 and 2013\n main:\n params: [input]\n steps:\n - init:\n assign:\n - pageToken: null\n - startQuery:\n call: googleapis.bigquery.v2.jobs.insert\n args:\n projectId: ${sys.get_env(\"GOOGLE_CLOUD_PROJECT_ID\")}\n body:\n configuration:\n query:\n useLegacySql: false\n # Remove LIMIT from the query to iterate through all results\n query: SELECT name, SUM(number) AS total FROM `bigquery-public-data.usa_names.usa_1910_2013` GROUP BY name ORDER BY total DESC LIMIT 50\n result: query\n - getPage:\n call: googleapis.bigquery.v2.jobs.getQueryResults\n args:\n projectId: ${sys.get_env(\"GOOGLE_CLOUD_PROJECT_ID\")}\n jobId: ${query.jobReference.jobId}\n maxResults: 10\n pageToken: ${pageToken}\n result: page\n - processPage:\n for:\n value: row\n in: ${page.rows}\n steps:\n - processRow:\n call: sys.log\n args:\n data: ${row}\n - checkIfDone:\n switch:\n - condition: ${\"pageToken\" in page and page.pageToken != \"\"}\n assign:\n - pageToken: ${page.pageToken}\n next: getPage\n\n### JSON\n\n {\n \"main\": {\n \"params\": [\n \"input\"\n ],\n \"steps\": [\n {\n \"init\": {\n \"assign\": [\n {\n \"pageToken\": null\n }\n ]\n }\n },\n {\n \"startQuery\": {\n \"call\": \"googleapis.bigquery.v2.jobs.insert\",\n \"args\": {\n \"projectId\": \"${sys.get_env(\\\"GOOGLE_CLOUD_PROJECT_ID\\\")}\",\n \"body\": {\n \"configuration\": {\n \"query\": {\n \"useLegacySql\": false,\n \"query\": \"SELECT name, SUM(number) AS total FROM `bigquery-public-data.usa_names.usa_1910_2013` GROUP BY name ORDER BY total DESC LIMIT 50\"\n }\n }\n }\n },\n \"result\": \"query\"\n }\n },\n {\n \"getPage\": {\n \"call\": \"googleapis.bigquery.v2.jobs.getQueryResults\",\n \"args\": {\n \"projectId\": \"${sys.get_env(\\\"GOOGLE_CLOUD_PROJECT_ID\\\")}\",\n \"jobId\": \"${query.jobReference.jobId}\",\n \"maxResults\": 10,\n \"pageToken\": \"${pageToken}\"\n },\n \"result\": \"page\"\n }\n },\n {\n \"processPage\": {\n \"for\": {\n \"value\": \"row\",\n \"in\": \"${page.rows}\",\n \"steps\": [\n {\n \"processRow\": {\n \"call\": \"sys.log\",\n \"args\": {\n \"data\": \"${row}\"\n }\n }\n }\n ]\n }\n 
What's next
-----------

- [BigQuery API connector overview](/workflows/docs/reference/googleapis/bigquery/Overview)
- [Workflows syntax reference](/workflows/docs/reference/syntax)

Last updated 2025-08-28 UTC.