From fe48b9321f3afe6d6a538434cc6d6b3b3828f8cd Mon Sep 17 00:00:00 2001 From: larkee <31196561+larkee@users.noreply.github.com> Date: Tue, 19 Jan 2021 13:48:46 +1100 Subject: [PATCH 01/16] test: unskip list_backup_operations sample (#210) Co-authored-by: larkee --- samples/samples/backup_sample_test.py | 6 ------ 1 file changed, 6 deletions(-) diff --git a/samples/samples/backup_sample_test.py b/samples/samples/backup_sample_test.py index 7a95f1d5cc..8d73c8acf1 100644 --- a/samples/samples/backup_sample_test.py +++ b/samples/samples/backup_sample_test.py @@ -79,12 +79,6 @@ def test_restore_database(capsys): assert BACKUP_ID in out -@pytest.mark.skip( - reason=( - "failing due to a production bug" - "https://github.com/googleapis/python-spanner/issues/149" - ) -) def test_list_backup_operations(capsys, spanner_instance): backup_sample.list_backup_operations(INSTANCE_ID, DATABASE_ID) out, _ = capsys.readouterr() From 283ea3dc4934bde89e0a04469050214a45bcea81 Mon Sep 17 00:00:00 2001 From: WhiteSource Renovate Date: Tue, 19 Jan 2021 04:30:09 +0100 Subject: [PATCH 02/16] chore(deps): update dependency google-cloud-spanner to v3 (#209) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [![WhiteSource Renovate](https://app.renovatebot.com/images/banner.svg)](https://renovatebot.com) This PR contains the following updates: | Package | Update | Change | |---|---|---| | [google-cloud-spanner](https://togithub.com/googleapis/python-spanner) | major | `==2.1.0` -> `==3.0.0` | --- ### Release Notes
googleapis/python-spanner ### [`v3.0.0`](https://togithub.com/googleapis/python-spanner/blob/master/CHANGELOG.md#​300-httpswwwgithubcomgoogleapispython-spannercomparev210v300-2021-01-15) [Compare Source](https://togithub.com/googleapis/python-spanner/compare/v2.1.0...v3.0.0) ##### ⚠ BREAKING CHANGES - convert operations pbs into Operation objects when listing operations ([#​186](https://togithub.com/googleapis/python-spanner/issues/186)) ##### Features - add support for instance labels ([#​193](https://www.github.com/googleapis/python-spanner/issues/193)) ([ed462b5](https://www.github.com/googleapis/python-spanner/commit/ed462b567a1a33f9105ffb37ba1218f379603614)) - add support for ssl credentials; add throttled field to UpdateDatabaseDdlMetadata ([#​161](https://www.github.com/googleapis/python-spanner/issues/161)) ([2faf01b](https://www.github.com/googleapis/python-spanner/commit/2faf01b135360586ef27c66976646593fd85fd1e)) - adding missing docstrings for functions & classes ([#​188](https://www.github.com/googleapis/python-spanner/issues/188)) ([9788cf8](https://www.github.com/googleapis/python-spanner/commit/9788cf8678d882bd4ccf551f828050cbbb8c8f3a)) - autocommit sample ([#​172](https://www.github.com/googleapis/python-spanner/issues/172)) ([4ef793c](https://www.github.com/googleapis/python-spanner/commit/4ef793c9cd5d6dec6e92faf159665e11d63762ad)) ##### Bug Fixes - convert operations pbs into Operation objects when listing operations ([#​186](https://www.github.com/googleapis/python-spanner/issues/186)) ([ed7152a](https://www.github.com/googleapis/python-spanner/commit/ed7152adc37290c63e59865265f36c593d9b8da3)) - Convert PBs in system test cleanup ([#​199](https://www.github.com/googleapis/python-spanner/issues/199)) ([ede4343](https://www.github.com/googleapis/python-spanner/commit/ede4343e518780a4ab13ae83017480d7046464d6)) - **dbapi:** autocommit enabling fails if no transactions begun ([#​177](https://www.github.com/googleapis/python-spanner/issues/177)) 
([e981adb](https://www.github.com/googleapis/python-spanner/commit/e981adb3157bb06e4cb466ca81d74d85da976754)) - **dbapi:** executemany() hiding all the results except the last ([#​181](https://www.github.com/googleapis/python-spanner/issues/181)) ([020dc17](https://www.github.com/googleapis/python-spanner/commit/020dc17c823dfb65bfaacace14d2c9f491c97e11)) - **dbapi:** Spanner protobuf changes causes KeyError's ([#​206](https://www.github.com/googleapis/python-spanner/issues/206)) ([f1e21ed](https://www.github.com/googleapis/python-spanner/commit/f1e21edbf37aab93615fd415d61f829d2574916b)) - remove client side gRPC receive limits ([#​192](https://www.github.com/googleapis/python-spanner/issues/192)) ([90effc4](https://www.github.com/googleapis/python-spanner/commit/90effc4d0f4780b7a7c466169f9fc1e45dab8e7f)) - Rename to fix "Mismatched region tag" check ([#​201](https://www.github.com/googleapis/python-spanner/issues/201)) ([c000ec4](https://www.github.com/googleapis/python-spanner/commit/c000ec4d9b306baa0d5e9ed95f23c0273d9adf32)) ##### Documentation - homogenize region tags ([#​194](https://www.github.com/googleapis/python-spanner/issues/194)) ([1501022](https://www.github.com/googleapis/python-spanner/commit/1501022239dfa8c20290ca0e0cf6a36e9255732c)) - homogenize region tags pt 2 ([#​202](https://www.github.com/googleapis/python-spanner/issues/202)) ([87789c9](https://www.github.com/googleapis/python-spanner/commit/87789c939990794bfd91f5300bedc449fd74bd7e)) - update CHANGELOG breaking change comment ([#​180](https://www.github.com/googleapis/python-spanner/issues/180)) ([c7b3b9e](https://www.github.com/googleapis/python-spanner/commit/c7b3b9e4be29a199618be9d9ffa1d63a9d0f8de7))
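Editor's note: the Renovate update above bumps `google-cloud-spanner` from `==2.1.0` to `==3.0.0` and flags it as a *major* update, which is why the changelog's breaking changes matter before merging. A minimal sketch of how such a classification can be derived from pinned requirements lines (the helper names are illustrative, not part of Renovate):

```python
def parse_pin(line):
    """Parse a `pkg==x.y.z` requirements line into (name, version tuple)."""
    name, _, version = line.strip().partition("==")
    return name, tuple(int(part) for part in version.split("."))


def is_major_update(old_line, new_line):
    """Return True when two pins for the same package differ in major version."""
    old_name, old_version = parse_pin(old_line)
    new_name, new_version = parse_pin(new_line)
    if old_name != new_name:
        raise ValueError("pins refer to different packages")
    return old_version[0] != new_version[0]


# The exact change from the requirements.txt diff above:
print(is_major_update("google-cloud-spanner==2.1.0", "google-cloud-spanner==3.0.0"))
```

A major bump signals intentional breaking changes (here, list operations now returning `Operation` objects), so automerge stays disabled and a human reviews the release notes first.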
--- ### Renovate configuration :date: **Schedule**: At any time (no schedule defined). :vertical_traffic_light: **Automerge**: Disabled by config. Please merge this manually once you are satisfied. :recycle: **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. :no_bell: **Ignore**: Close this PR and you won't be reminded about this update again. --- - [ ] If you want to rebase/retry this PR, check this box --- This PR has been generated by [WhiteSource Renovate](https://renovate.whitesourcesoftware.com). View repository job log [here](https://app.renovatebot.com/dashboard#github/googleapis/python-spanner). --- samples/samples/requirements.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/samples/samples/requirements.txt b/samples/samples/requirements.txt index 816e298236..42cf4789a7 100644 --- a/samples/samples/requirements.txt +++ b/samples/samples/requirements.txt @@ -1,2 +1,2 @@ -google-cloud-spanner==2.1.0 +google-cloud-spanner==3.0.0 futures==3.3.0; python_version < "3" From bc38664891aa0b8db0ad489ff4ebd099dc82706a Mon Sep 17 00:00:00 2001 From: larkee <31196561+larkee@users.noreply.github.com> Date: Thu, 28 Jan 2021 05:02:30 +1100 Subject: [PATCH 03/16] test(dbapi): add retry to autocommit sample to reduce flakiness (#214) Co-authored-by: larkee --- samples/samples/autocommit_test.py | 3 +++ 1 file changed, 3 insertions(+) diff --git a/samples/samples/autocommit_test.py b/samples/samples/autocommit_test.py index c906f060e0..a98744968a 100644 --- a/samples/samples/autocommit_test.py +++ b/samples/samples/autocommit_test.py @@ -6,10 +6,12 @@ import uuid +from google.api_core.exceptions import Aborted from google.cloud import spanner from google.cloud.spanner_dbapi import connect import mock import pytest +from test_utils.retry import RetryErrors import autocommit @@ -49,6 +51,7 @@ def database(spanner_instance): db.drop() +@RetryErrors(exception=Aborted, max_tries=2) def test_enable_autocommit_mode(capsys, 
database): connection = connect(INSTANCE_ID, DATABASE_ID) cursor = connection.cursor() From be27507c51998e5a4aec54cab57515c4912f5ed5 Mon Sep 17 00:00:00 2001 From: Justin Beckwith Date: Fri, 29 Jan 2021 17:12:05 -0800 Subject: [PATCH 04/16] build: migrate to flakybot (#218) --- .kokoro/test-samples.sh | 8 ++++---- .kokoro/trampoline_v2.sh | 2 +- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/.kokoro/test-samples.sh b/.kokoro/test-samples.sh index 469771e159..86b7f9d906 100755 --- a/.kokoro/test-samples.sh +++ b/.kokoro/test-samples.sh @@ -87,11 +87,11 @@ for file in samples/**/requirements.txt; do python3.6 -m nox -s "$RUN_TESTS_SESSION" EXIT=$? - # If this is a periodic build, send the test log to the Build Cop Bot. - # See https://github.com/googleapis/repo-automation-bots/tree/master/packages/buildcop. + # If this is a periodic build, send the test log to the FlakyBot. + # See https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot. if [[ $KOKORO_BUILD_ARTIFACTS_SUBDIR = *"periodic"* ]]; then - chmod +x $KOKORO_GFILE_DIR/linux_amd64/buildcop - $KOKORO_GFILE_DIR/linux_amd64/buildcop + chmod +x $KOKORO_GFILE_DIR/linux_amd64/flakybot + $KOKORO_GFILE_DIR/linux_amd64/flakybot fi if [[ $EXIT -ne 0 ]]; then diff --git a/.kokoro/trampoline_v2.sh b/.kokoro/trampoline_v2.sh index 719bcd5ba8..4af6cdc26d 100755 --- a/.kokoro/trampoline_v2.sh +++ b/.kokoro/trampoline_v2.sh @@ -159,7 +159,7 @@ if [[ -n "${KOKORO_BUILD_ID:-}" ]]; then "KOKORO_GITHUB_COMMIT" "KOKORO_GITHUB_PULL_REQUEST_NUMBER" "KOKORO_GITHUB_PULL_REQUEST_COMMIT" - # For Build Cop Bot + # For FlakyBot "KOKORO_GITHUB_COMMIT_URL" "KOKORO_GITHUB_PULL_REQUEST_URL" ) From d790cfbc6e2d89cb0097f8a4031a2632192551bb Mon Sep 17 00:00:00 2001 From: arithmetic1728 <58957152+arithmetic1728@users.noreply.github.com> Date: Tue, 2 Feb 2021 00:00:02 -0800 Subject: [PATCH 05/16] test: make system test timeout configurable and default to 60 seconds (#217) b/173067462 It seems 30 seconds are 
too short for mtls test (which uses the system test, and runs on an internal platform). This makes the mtls test very flaky. This PR introduces a `SPANNER_OPERATION_TIMEOUT_IN_SECONDS` env var to make the timeout configurable. The default value is now 60 seconds. --- tests/system/test_system.py | 41 ++++++++++++++++++++++--------- tests/system/test_system_dbapi.py | 13 ++++++++-- 2 files changed, 40 insertions(+), 14 deletions(-) diff --git a/tests/system/test_system.py b/tests/system/test_system.py index 495824044b..90031a3e3a 100644 --- a/tests/system/test_system.py +++ b/tests/system/test_system.py @@ -55,6 +55,9 @@ CREATE_INSTANCE = os.getenv("GOOGLE_CLOUD_TESTS_CREATE_SPANNER_INSTANCE") is not None USE_EMULATOR = os.getenv("SPANNER_EMULATOR_HOST") is not None SKIP_BACKUP_TESTS = os.getenv("SKIP_BACKUP_TESTS") is not None +SPANNER_OPERATION_TIMEOUT_IN_SECONDS = int( + os.getenv("SPANNER_OPERATION_TIMEOUT_IN_SECONDS", 60) +) if CREATE_INSTANCE: INSTANCE_ID = "google-cloud" + unique_resource_id("-") @@ -149,7 +152,9 @@ def setUpModule(): INSTANCE_ID, config_name, labels=labels ) created_op = Config.INSTANCE.create() - created_op.result(30) # block until completion + created_op.result( + SPANNER_OPERATION_TIMEOUT_IN_SECONDS + ) # block until completion else: Config.INSTANCE = Config.CLIENT.instance(INSTANCE_ID) @@ -208,7 +213,9 @@ def test_create_instance(self): self.instances_to_delete.append(instance) # We want to make sure the operation completes. - operation.result(30) # raises on failure / timeout. + operation.result( + SPANNER_OPERATION_TIMEOUT_IN_SECONDS + ) # raises on failure / timeout. # Create a new instance instance and make sure it is the same. instance_alt = Config.CLIENT.instance( @@ -227,7 +234,9 @@ def test_update_instance(self): operation = Config.INSTANCE.update() # We want to make sure the operation completes. - operation.result(30) # raises on failure / timeout. 
+ operation.result( + SPANNER_OPERATION_TIMEOUT_IN_SECONDS + ) # raises on failure / timeout. # Create a new instance instance and reload it. instance_alt = Config.CLIENT.instance(INSTANCE_ID, None) @@ -308,7 +317,9 @@ def setUpClass(cls): cls.DATABASE_NAME, ddl_statements=ddl_statements, pool=pool ) operation = cls._db.create() - operation.result(30) # raises on failure / timeout. + operation.result( + SPANNER_OPERATION_TIMEOUT_IN_SECONDS + ) # raises on failure / timeout. @classmethod def tearDownClass(cls): @@ -337,7 +348,9 @@ def test_create_database(self): self.to_delete.append(temp_db) # We want to make sure the operation completes. - operation.result(30) # raises on failure / timeout. + operation.result( + SPANNER_OPERATION_TIMEOUT_IN_SECONDS + ) # raises on failure / timeout. database_ids = [database.name for database in Config.INSTANCE.list_databases()] self.assertIn(temp_db.name, database_ids) @@ -483,8 +496,8 @@ def setUpClass(cls): cls._dbs = [db1, db2] op1 = db1.create() op2 = db2.create() - op1.result(30) # raises on failure / timeout. - op2.result(30) # raises on failure / timeout. + op1.result(SPANNER_OPERATION_TIMEOUT_IN_SECONDS) # raises on failure / timeout. + op2.result(SPANNER_OPERATION_TIMEOUT_IN_SECONDS) # raises on failure / timeout. 
current_config = Config.INSTANCE.configuration_name same_config_instance_id = "same-config" + unique_resource_id("-") @@ -494,7 +507,7 @@ def setUpClass(cls): same_config_instance_id, current_config, labels=labels ) op = cls._same_config_instance.create() - op.result(30) + op.result(SPANNER_OPERATION_TIMEOUT_IN_SECONDS) cls._instances = [cls._same_config_instance] retry = RetryErrors(exceptions.ServiceUnavailable) @@ -513,7 +526,7 @@ def setUpClass(cls): diff_config_instance_id, diff_configs[0], labels=labels ) op = cls._diff_config_instance.create() - op.result(30) + op.result(SPANNER_OPERATION_TIMEOUT_IN_SECONDS) cls._instances.append(cls._diff_config_instance) @classmethod @@ -675,7 +688,7 @@ def test_multi_create_cancel_update_error_restore_errors(self): return new_db = self._diff_config_instance.database("diff_config") op = new_db.create() - op.result(30) + op.result(SPANNER_OPERATION_TIMEOUT_IN_SECONDS) self.to_drop.append(new_db) with self.assertRaises(exceptions.InvalidArgument): new_db.restore(source=backup1) @@ -866,7 +879,9 @@ def setUpClass(cls): cls.DATABASE_NAME, ddl_statements=ddl_statements, pool=pool ) operation = cls._db.create() - operation.result(30) # raises on failure / timeout. + operation.result( + SPANNER_OPERATION_TIMEOUT_IN_SECONDS + ) # raises on failure / timeout. @classmethod def tearDownClass(cls): @@ -1788,7 +1803,9 @@ def test_read_w_index(self): self.to_delete.append(_DatabaseDropper(temp_db)) # We want to make sure the operation completes. - operation.result(30) # raises on failure / timeout. + operation.result( + SPANNER_OPERATION_TIMEOUT_IN_SECONDS + ) # raises on failure / timeout. 
committed = self._set_up_table(row_count, database=temp_db) with temp_db.snapshot(read_timestamp=committed) as snapshot: diff --git a/tests/system/test_system_dbapi.py b/tests/system/test_system_dbapi.py index baeadd2c44..1659fe239b 100644 --- a/tests/system/test_system_dbapi.py +++ b/tests/system/test_system_dbapi.py @@ -39,6 +39,11 @@ ) +SPANNER_OPERATION_TIMEOUT_IN_SECONDS = int( + os.getenv("SPANNER_OPERATION_TIMEOUT_IN_SECONDS", 60) +) + + def setUpModule(): if USE_EMULATOR: from google.auth.credentials import AnonymousCredentials @@ -91,7 +96,9 @@ def setUpModule(): INSTANCE_ID, config_name, labels=labels ) created_op = Config.INSTANCE.create() - created_op.result(30) # block until completion + created_op.result( + SPANNER_OPERATION_TIMEOUT_IN_SECONDS + ) # block until completion else: Config.INSTANCE = Config.CLIENT.instance(INSTANCE_ID) @@ -126,7 +133,9 @@ def setUpClass(cls): ddl_statements=cls.DDL_STATEMENTS, pool=BurstyPool(labels={"testcase": "database_api"}), ) - cls._db.create().result(30) # raises on failure / timeout. + cls._db.create().result( + SPANNER_OPERATION_TIMEOUT_IN_SECONDS + ) # raises on failure / timeout. @classmethod def tearDownClass(cls): From 05c3ad995863074a335a6d1db47c3b3992e36836 Mon Sep 17 00:00:00 2001 From: larkee <31196561+larkee@users.noreply.github.com> Date: Fri, 5 Feb 2021 15:38:04 +1100 Subject: [PATCH 06/16] test: fix credential scope assertions (#223) The assertions for credential scope in the `client` unit tests were broken by [a PR in the auth library](https://github.com/googleapis/google-auth-library-python/pull/665). This does raise the question of whether we should be asserting the scopes like this in this library. This PR fixes the assertions. Removal of these assertions can be done in a separate PR if it is decided they don't belong in this library. 
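Editor's note: the credential-scope fix above updates strict `assert_called_once_with` checks after google-auth began forwarding a `default_scopes` keyword. A self-contained sketch of how such an assertion breaks when a callee starts passing a new keyword argument (the fake client function below is illustrative, not the library's API):

```python
from unittest import mock


def build_client(credentials, scopes):
    # Mimics the newer call shape: the auth library started forwarding
    # default_scopes, so any strict call assertion must now include it.
    return credentials.with_scopes(scopes, default_scopes=None)


creds = mock.Mock()
build_client(creds, ["https://www.googleapis.com/auth/spanner.admin"])

# The pre-fix assertion omitted default_scopes and would raise AssertionError:
#   creds.with_scopes.assert_called_once_with([...scopes...])

# The fixed assertion matches the full call signature:
creds.with_scopes.assert_called_once_with(
    ["https://www.googleapis.com/auth/spanner.admin"], default_scopes=None
)
```

This is the trade-off the commit message raises: asserting exact call signatures of a dependency couples the tests to that dependency's internals.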
--- tests/unit/test_client.py | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 deletions(-) diff --git a/tests/unit/test_client.py b/tests/unit/test_client.py index 9c260c5f95..40d10de9df 100644 --- a/tests/unit/test_client.py +++ b/tests/unit/test_client.py @@ -88,7 +88,9 @@ def _constructor_test_helper( self.assertIs(client._credentials, expected_creds) if expected_scopes is not None: - creds.with_scopes.assert_called_once_with(expected_scopes) + creds.with_scopes.assert_called_once_with( + expected_scopes, default_scopes=None + ) self.assertEqual(client.project, self.PROJECT) self.assertIs(client._client_info, expected_client_info) @@ -235,7 +237,9 @@ def test_instance_admin_api(self, mock_em): credentials=mock.ANY, client_info=client_info, client_options=client_options ) - credentials.with_scopes.assert_called_once_with(expected_scopes) + credentials.with_scopes.assert_called_once_with( + expected_scopes, default_scopes=None + ) @mock.patch("google.cloud.spanner_v1.client._get_spanner_emulator_host") def test_instance_admin_api_emulator_env(self, mock_em): @@ -333,7 +337,9 @@ def test_database_admin_api(self, mock_em): credentials=mock.ANY, client_info=client_info, client_options=client_options ) - credentials.with_scopes.assert_called_once_with(expected_scopes) + credentials.with_scopes.assert_called_once_with( + expected_scopes, default_scopes=None + ) @mock.patch("google.cloud.spanner_v1.client._get_spanner_emulator_host") def test_database_admin_api_emulator_env(self, mock_em): From 1f80a395ff4ac51a785ea6bdfbe3e18a8cf4c951 Mon Sep 17 00:00:00 2001 From: Yoshi Automation Bot Date: Thu, 4 Feb 2021 23:52:02 -0800 Subject: [PATCH 07/16] chore: add support for commit stats and PITR (via synth) (#204) This PR was generated using Autosynth. :rainbow: Synth log will be available here: https://source.cloud.google.com/results/invocations/3b4457c8-4080-407a-9a6d-4a48ddcea154/targets - [ ] To automatically regenerate this PR, check this box. 
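Editor's note: patch 05 above replaces every hard-coded `operation.result(30)` with an environment-driven timeout defaulting to 60 seconds. The pattern in isolation (the `_FakeOperation`-style wrapper below is illustrative; the real calls operate on Spanner long-running operations):

```python
import os

# Default to 60 seconds; let CI (e.g. the slower mTLS environment)
# override via the environment, exactly as the patch does.
SPANNER_OPERATION_TIMEOUT_IN_SECONDS = int(
    os.getenv("SPANNER_OPERATION_TIMEOUT_IN_SECONDS", 60)
)


def wait_for(operation):
    """Block until a long-running operation completes; raises on failure/timeout."""
    return operation.result(SPANNER_OPERATION_TIMEOUT_IN_SECONDS)
```

Note `os.getenv` returns a string when the variable is set, so the `int(...)` wrapper is what keeps both the override and the `60` default usable as a timeout.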
PiperOrigin-RevId: 354996675 Source-Link: https://github.com/googleapis/googleapis/commit/20712b8fe95001b312f62c6c5f33e3e3ec92cfaf PiperOrigin-RevId: 352816749 Source-Link: https://github.com/googleapis/googleapis/commit/ceaaf31b3d13badab7cf9d3b570f5639db5593d9 PiperOrigin-RevId: 350246057 Source-Link: https://github.com/googleapis/googleapis/commit/520682435235d9c503983a360a2090025aa47cd1 --- .../proto/backup.proto | 44 +- .../proto/spanner_database_admin.proto | 31 +- .../services/database_admin/async_client.py | 378 +++++++------- .../services/database_admin/client.py | 461 +++++++++--------- .../services/database_admin/pagers.py | 64 +-- .../database_admin/transports/grpc.py | 23 +- .../database_admin/transports/grpc_asyncio.py | 23 +- .../spanner_admin_database_v1/types/backup.py | 48 +- .../spanner_admin_database_v1/types/common.py | 4 +- .../types/spanner_database_admin.py | 46 +- .../services/instance_admin/async_client.py | 309 ++++++------ .../services/instance_admin/client.py | 370 +++++++------- .../services/instance_admin/pagers.py | 32 +- .../instance_admin/transports/grpc.py | 23 +- .../instance_admin/transports/grpc_asyncio.py | 23 +- .../types/spanner_instance_admin.py | 36 +- google/cloud/spanner_v1/proto/spanner.proto | 46 +- .../services/spanner/async_client.py | 179 ++++--- .../spanner_v1/services/spanner/client.py | 240 +++++---- .../spanner_v1/services/spanner/pagers.py | 16 +- .../services/spanner/transports/grpc.py | 30 +- .../spanner/transports/grpc_asyncio.py | 30 +- google/cloud/spanner_v1/types/keys.py | 12 +- google/cloud/spanner_v1/types/mutation.py | 14 +- google/cloud/spanner_v1/types/query_plan.py | 14 +- google/cloud/spanner_v1/types/result_set.py | 20 +- google/cloud/spanner_v1/types/spanner.py | 111 +++-- google/cloud/spanner_v1/types/transaction.py | 18 +- google/cloud/spanner_v1/types/type.py | 12 +- scripts/fixup_spanner_v1_keywords.py | 2 +- synth.metadata | 9 +- .../test_database_admin.py | 225 +++++---- 
.../test_instance_admin.py | 213 +++++--- tests/unit/gapic/spanner_v1/test_spanner.py | 212 ++++---- 34 files changed, 1881 insertions(+), 1437 deletions(-) diff --git a/google/cloud/spanner_admin_database_v1/proto/backup.proto b/google/cloud/spanner_admin_database_v1/proto/backup.proto index e33faddddf..a677207f72 100644 --- a/google/cloud/spanner_admin_database_v1/proto/backup.proto +++ b/google/cloud/spanner_admin_database_v1/proto/backup.proto @@ -61,6 +61,12 @@ message Backup { type: "spanner.googleapis.com/Database" }]; + // The backup will contain an externally consistent copy of the database at + // the timestamp specified by `version_time`. If `version_time` is not + // specified, the system will set `version_time` to the `create_time` of the + // backup. + google.protobuf.Timestamp version_time = 9; + // Required for the [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup] // operation. The expiration time of the backup, with microseconds // granularity that must be at least 6 hours and at most 366 days @@ -84,10 +90,9 @@ message Backup { // `projects//instances/`. string name = 1; - // Output only. The backup will contain an externally consistent - // copy of the database at the timestamp specified by - // `create_time`. `create_time` is approximately the time the - // [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup] request is received. + // Output only. The time the [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup] + // request is received. If the request does not specify `version_time`, the + // `version_time` of the backup will be equivalent to the `create_time`. google.protobuf.Timestamp create_time = 4 [(google.api.field_behavior) = OUTPUT_ONLY]; // Output only. Size of the backup in bytes. @@ -134,10 +139,14 @@ message CreateBackupRequest { // [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup]. 
message CreateBackupMetadata { // The name of the backup being created. - string name = 1; + string name = 1 [(google.api.resource_reference) = { + type: "spanner.googleapis.com/Backup" + }]; // The name of the database the backup is created from. - string database = 2; + string database = 2 [(google.api.resource_reference) = { + type: "spanner.googleapis.com/Database" + }]; // The progress of the // [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup] operation. @@ -311,9 +320,9 @@ message ListBackupOperationsRequest { // * `done:true` - The operation is complete. // * `metadata.database:prod` - The database the backup was taken from has // a name containing the string "prod". - // * `(metadata.@type=type.googleapis.com/google.spanner.admin.database.v1.CreateBackupMetadata) AND`
- // `(metadata.name:howl) AND`
- // `(metadata.progress.start_time < \"2018-03-28T14:50:00Z\") AND`
+ // * `(metadata.@type=type.googleapis.com/google.spanner.admin.database.v1.CreateBackupMetadata) AND` \ + // `(metadata.name:howl) AND` \ + // `(metadata.progress.start_time < \"2018-03-28T14:50:00Z\") AND` \ // `(error:*)` - Returns operations where: // * The operation's metadata type is [CreateBackupMetadata][google.spanner.admin.database.v1.CreateBackupMetadata]. // * The backup name contains the string "howl". @@ -355,12 +364,23 @@ message ListBackupOperationsResponse { // Information about a backup. message BackupInfo { // Name of the backup. - string backup = 1; + string backup = 1 [(google.api.resource_reference) = { + type: "spanner.googleapis.com/Backup" + }]; // The backup contains an externally consistent copy of `source_database` at - // the timestamp specified by `create_time`. + // the timestamp specified by `version_time`. If the + // [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup] request did not specify + // `version_time`, the `version_time` of the backup is equivalent to the + // `create_time`. + google.protobuf.Timestamp version_time = 4; + + // The time the [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup] request was + // received. google.protobuf.Timestamp create_time = 2; // Name of the database the backup was created from. - string source_database = 3; + string source_database = 3 [(google.api.resource_reference) = { + type: "spanner.googleapis.com/Database" + }]; } diff --git a/google/cloud/spanner_admin_database_v1/proto/spanner_database_admin.proto b/google/cloud/spanner_admin_database_v1/proto/spanner_database_admin.proto index db6192bc02..12e751bd67 100644 --- a/google/cloud/spanner_admin_database_v1/proto/spanner_database_admin.proto +++ b/google/cloud/spanner_admin_database_v1/proto/spanner_database_admin.proto @@ -368,6 +368,17 @@ message Database { // Output only. Applicable only for restored databases. Contains information // about the restore source. 
RestoreInfo restore_info = 4 [(google.api.field_behavior) = OUTPUT_ONLY]; + + // Output only. The period in which Cloud Spanner retains all versions of data + // for the database. This is the same as the value of version_retention_period + // database option set using + // [UpdateDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabaseDdl]. Defaults to 1 hour, + // if not set. + string version_retention_period = 6 [(google.api.field_behavior) = OUTPUT_ONLY]; + + // Output only. Earliest timestamp at which older versions of the data can be + // read. + google.protobuf.Timestamp earliest_version_time = 7 [(google.api.field_behavior) = OUTPUT_ONLY]; } // The request for [ListDatabases][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases]. @@ -535,6 +546,8 @@ message DropDatabaseRequest { // The request for [GetDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabaseDdl]. message GetDatabaseDdlRequest { // Required. The database whose schema we wish to get. + // Values are of the form + // `projects//instances//databases/` string database = 1 [ (google.api.field_behavior) = REQUIRED, (google.api.resource_reference) = { @@ -590,11 +603,11 @@ message ListDatabaseOperationsRequest { // Here are a few examples: // // * `done:true` - The operation is complete. - // * `(metadata.@type=type.googleapis.com/google.spanner.admin.database.v1.RestoreDatabaseMetadata) AND`
- // `(metadata.source_type:BACKUP) AND`
- // `(metadata.backup_info.backup:backup_howl) AND`
- // `(metadata.name:restored_howl) AND`
- // `(metadata.progress.start_time < \"2018-03-28T14:50:00Z\") AND`
+ // * `(metadata.@type=type.googleapis.com/google.spanner.admin.database.v1.RestoreDatabaseMetadata) AND` \ + // `(metadata.source_type:BACKUP) AND` \ + // `(metadata.backup_info.backup:backup_howl) AND` \ + // `(metadata.name:restored_howl) AND` \ + // `(metadata.progress.start_time < \"2018-03-28T14:50:00Z\") AND` \ // `(error:*)` - Return operations where: // * The operation's metadata type is [RestoreDatabaseMetadata][google.spanner.admin.database.v1.RestoreDatabaseMetadata]. // * The database is restored from a backup. @@ -666,7 +679,9 @@ message RestoreDatabaseRequest { // [RestoreDatabase][google.spanner.admin.database.v1.DatabaseAdmin.RestoreDatabase]. message RestoreDatabaseMetadata { // Name of the database being created and restored to. - string name = 1; + string name = 1 [(google.api.resource_reference) = { + type: "spanner.googleapis.com/Database" + }]; // The type of the restore source. RestoreSourceType source_type = 2; @@ -716,7 +731,9 @@ message RestoreDatabaseMetadata { // completion of a database restore, and cannot be cancelled. message OptimizeRestoredDatabaseMetadata { // Name of the restored database being optimized. - string name = 1; + string name = 1 [(google.api.resource_reference) = { + type: "spanner.googleapis.com/Database" + }]; // The progress of the post-restore optimizations. 
OperationProgress progress = 2; diff --git a/google/cloud/spanner_admin_database_v1/services/database_admin/async_client.py b/google/cloud/spanner_admin_database_v1/services/database_admin/async_client.py index 4f15f2e2c8..f64e8202bf 100644 --- a/google/cloud/spanner_admin_database_v1/services/database_admin/async_client.py +++ b/google/cloud/spanner_admin_database_v1/services/database_admin/async_client.py @@ -96,6 +96,7 @@ class DatabaseAdminAsyncClient: DatabaseAdminClient.parse_common_location_path ) + from_service_account_info = DatabaseAdminClient.from_service_account_info from_service_account_file = DatabaseAdminClient.from_service_account_file from_service_account_json = from_service_account_file @@ -172,13 +173,14 @@ async def list_databases( r"""Lists Cloud Spanner databases. Args: - request (:class:`~.spanner_database_admin.ListDatabasesRequest`): + request (:class:`google.cloud.spanner_admin_database_v1.types.ListDatabasesRequest`): The request object. The request for [ListDatabases][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases]. parent (:class:`str`): Required. The instance whose databases should be listed. Values are of the form ``projects//instances/``. + This corresponds to the ``parent`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -190,7 +192,7 @@ async def list_databases( sent along with the request as metadata. Returns: - ~.pagers.ListDatabasesAsyncPager: + google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListDatabasesAsyncPager: The response for [ListDatabases][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases]. @@ -272,13 +274,14 @@ async def create_database( successful. Args: - request (:class:`~.spanner_database_admin.CreateDatabaseRequest`): + request (:class:`google.cloud.spanner_admin_database_v1.types.CreateDatabaseRequest`): The request object. The request for [CreateDatabase][google.spanner.admin.database.v1.DatabaseAdmin.CreateDatabase]. 
parent (:class:`str`): Required. The name of the instance that will serve the new database. Values are of the form ``projects//instances/``. + This corresponds to the ``parent`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -290,6 +293,7 @@ async def create_database( characters in length. If the database ID is a reserved word or if it contains a hyphen, the database ID must be enclosed in backticks (:literal:`\``). + This corresponds to the ``create_statement`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -301,12 +305,12 @@ async def create_database( sent along with the request as metadata. Returns: - ~.operation_async.AsyncOperation: + google.api_core.operation_async.AsyncOperation: An object representing a long-running operation. The result type for the operation will be - :class:``~.spanner_database_admin.Database``: A Cloud - Spanner database. + :class:`google.cloud.spanner_admin_database_v1.types.Database` + A Cloud Spanner database. """ # Create or coerce a protobuf request object. @@ -369,13 +373,14 @@ async def get_database( r"""Gets the state of a Cloud Spanner database. Args: - request (:class:`~.spanner_database_admin.GetDatabaseRequest`): + request (:class:`google.cloud.spanner_admin_database_v1.types.GetDatabaseRequest`): The request object. The request for [GetDatabase][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabase]. name (:class:`str`): Required. The name of the requested database. Values are of the form ``projects//instances//databases/``. + This corresponds to the ``name`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -387,7 +392,7 @@ async def get_database( sent along with the request as metadata. Returns: - ~.spanner_database_admin.Database: + google.cloud.spanner_admin_database_v1.types.Database: A Cloud Spanner database. """ # Create or coerce a protobuf request object. 
@@ -457,7 +462,7 @@ async def update_database_ddl( The operation has no response. Args: - request (:class:`~.spanner_database_admin.UpdateDatabaseDdlRequest`): + request (:class:`google.cloud.spanner_admin_database_v1.types.UpdateDatabaseDdlRequest`): The request object. Enqueues the given DDL statements to be applied, in order but not necessarily all at once, to the database schema at some point (or points) in the @@ -485,6 +490,7 @@ async def update_database_ddl( statements (:class:`Sequence[str]`): Required. DDL statements to be applied to the database. + This corresponds to the ``statements`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -496,24 +502,22 @@ async def update_database_ddl( sent along with the request as metadata. Returns: - ~.operation_async.AsyncOperation: + google.api_core.operation_async.AsyncOperation: An object representing a long-running operation. - The result type for the operation will be - :class:``~.empty.Empty``: A generic empty message that - you can re-use to avoid defining duplicated empty - messages in your APIs. A typical example is to use it as - the request or the response type of an API method. For - instance: + The result type for the operation will be :class:`google.protobuf.empty_pb2.Empty` A generic empty message that you can re-use to avoid defining duplicated + empty messages in your APIs. A typical example is to + use it as the request or the response type of an API + method. For instance: - :: + service Foo { + rpc Bar(google.protobuf.Empty) returns + (google.protobuf.Empty); - service Foo { - rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); - } + } - The JSON representation for ``Empty`` is empty JSON - object ``{}``. + The JSON representation for Empty is empty JSON + object {}. """ # Create or coerce a protobuf request object. @@ -587,7 +591,7 @@ async def drop_database( ``expire_time``. 
Args: - request (:class:`~.spanner_database_admin.DropDatabaseRequest`): + request (:class:`google.cloud.spanner_admin_database_v1.types.DropDatabaseRequest`): The request object. The request for [DropDatabase][google.spanner.admin.database.v1.DatabaseAdmin.DropDatabase]. database (:class:`str`): @@ -662,12 +666,14 @@ async def get_database_ddl( [Operations][google.longrunning.Operations] API. Args: - request (:class:`~.spanner_database_admin.GetDatabaseDdlRequest`): + request (:class:`google.cloud.spanner_admin_database_v1.types.GetDatabaseDdlRequest`): The request object. The request for [GetDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabaseDdl]. database (:class:`str`): - Required. The database whose schema - we wish to get. + Required. The database whose schema we wish to get. + Values are of the form + ``projects//instances//databases/`` + This corresponds to the ``database`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -679,7 +685,7 @@ async def get_database_ddl( sent along with the request as metadata. Returns: - ~.spanner_database_admin.GetDatabaseDdlResponse: + google.cloud.spanner_admin_database_v1.types.GetDatabaseDdlResponse: The response for [GetDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabaseDdl]. @@ -750,7 +756,7 @@ async def set_iam_policy( [resource][google.iam.v1.SetIamPolicyRequest.resource]. Args: - request (:class:`~.iam_policy.SetIamPolicyRequest`): + request (:class:`google.iam.v1.iam_policy_pb2.SetIamPolicyRequest`): The request object. Request message for `SetIamPolicy` method. resource (:class:`str`): @@ -758,6 +764,7 @@ async def set_iam_policy( policy is being specified. See the operation documentation for the appropriate value for this field. + This corresponds to the ``resource`` field on the ``request`` instance; if ``request`` is provided, this should not be set. 
@@ -769,72 +776,62 @@ async def set_iam_policy( sent along with the request as metadata. Returns: - ~.policy.Policy: - Defines an Identity and Access Management (IAM) policy. - It is used to specify access control policies for Cloud - Platform resources. - - A ``Policy`` is a collection of ``bindings``. A - ``binding`` binds one or more ``members`` to a single - ``role``. Members can be user accounts, service - accounts, Google groups, and domains (such as G Suite). - A ``role`` is a named list of permissions (defined by - IAM or configured by users). A ``binding`` can - optionally specify a ``condition``, which is a logic - expression that further constrains the role binding - based on attributes about the request and/or target - resource. - - **JSON Example** - - :: - - { - "bindings": [ - { - "role": "roles/resourcemanager.organizationAdmin", - "members": [ - "user:mike@example.com", - "group:admins@example.com", - "domain:google.com", - "serviceAccount:my-project-id@appspot.gserviceaccount.com" - ] - }, - { - "role": "roles/resourcemanager.organizationViewer", - "members": ["user:eve@example.com"], - "condition": { - "title": "expirable access", - "description": "Does not grant access after Sep 2020", - "expression": "request.time < - timestamp('2020-10-01T00:00:00.000Z')", - } - } - ] - } - - **YAML Example** - - :: - - bindings: - - members: - - user:mike@example.com - - group:admins@example.com - - domain:google.com - - serviceAccount:my-project-id@appspot.gserviceaccount.com - role: roles/resourcemanager.organizationAdmin - - members: - - user:eve@example.com - role: roles/resourcemanager.organizationViewer - condition: - title: expirable access - description: Does not grant access after Sep 2020 - expression: request.time < timestamp('2020-10-01T00:00:00.000Z') - - For a description of IAM and its features, see the `IAM - developer's - guide `__. + google.iam.v1.policy_pb2.Policy: + Defines an Identity and Access Management (IAM) policy. 
It is used to + specify access control policies for Cloud Platform + resources. + + A Policy is a collection of bindings. A binding binds + one or more members to a single role. Members can be + user accounts, service accounts, Google groups, and + domains (such as G Suite). A role is a named list of + permissions (defined by IAM or configured by users). + A binding can optionally specify a condition, which + is a logic expression that further constrains the + role binding based on attributes about the request + and/or target resource. + + **JSON Example** + + { + "bindings": [ + { + "role": + "roles/resourcemanager.organizationAdmin", + "members": [ "user:mike@example.com", + "group:admins@example.com", + "domain:google.com", + "serviceAccount:my-project-id@appspot.gserviceaccount.com" + ] + + }, { "role": + "roles/resourcemanager.organizationViewer", + "members": ["user:eve@example.com"], + "condition": { "title": "expirable access", + "description": "Does not grant access after + Sep 2020", "expression": "request.time < + timestamp('2020-10-01T00:00:00.000Z')", } } + + ] + + } + + **YAML Example** + + bindings: - members: - user:\ mike@example.com - + group:\ admins@example.com - domain:google.com - + serviceAccount:\ my-project-id@appspot.gserviceaccount.com + role: roles/resourcemanager.organizationAdmin - + members: - user:\ eve@example.com role: + roles/resourcemanager.organizationViewer + condition: title: expirable access description: + Does not grant access after Sep 2020 expression: + request.time < + timestamp('2020-10-01T00:00:00.000Z') + + For a description of IAM and its features, see the + [IAM developer's + guide](\ https://cloud.google.com/iam/docs). """ # Create or coerce a protobuf request object. @@ -896,7 +893,7 @@ async def get_iam_policy( [resource][google.iam.v1.GetIamPolicyRequest.resource]. 
Args: - request (:class:`~.iam_policy.GetIamPolicyRequest`): + request (:class:`google.iam.v1.iam_policy_pb2.GetIamPolicyRequest`): The request object. Request message for `GetIamPolicy` method. resource (:class:`str`): @@ -904,6 +901,7 @@ async def get_iam_policy( policy is being requested. See the operation documentation for the appropriate value for this field. + This corresponds to the ``resource`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -915,72 +913,62 @@ async def get_iam_policy( sent along with the request as metadata. Returns: - ~.policy.Policy: - Defines an Identity and Access Management (IAM) policy. - It is used to specify access control policies for Cloud - Platform resources. - - A ``Policy`` is a collection of ``bindings``. A - ``binding`` binds one or more ``members`` to a single - ``role``. Members can be user accounts, service - accounts, Google groups, and domains (such as G Suite). - A ``role`` is a named list of permissions (defined by - IAM or configured by users). A ``binding`` can - optionally specify a ``condition``, which is a logic - expression that further constrains the role binding - based on attributes about the request and/or target - resource. 
- - **JSON Example** - - :: - - { - "bindings": [ - { - "role": "roles/resourcemanager.organizationAdmin", - "members": [ - "user:mike@example.com", - "group:admins@example.com", - "domain:google.com", - "serviceAccount:my-project-id@appspot.gserviceaccount.com" - ] - }, - { - "role": "roles/resourcemanager.organizationViewer", - "members": ["user:eve@example.com"], - "condition": { - "title": "expirable access", - "description": "Does not grant access after Sep 2020", - "expression": "request.time < - timestamp('2020-10-01T00:00:00.000Z')", - } - } - ] - } - - **YAML Example** - - :: - - bindings: - - members: - - user:mike@example.com - - group:admins@example.com - - domain:google.com - - serviceAccount:my-project-id@appspot.gserviceaccount.com - role: roles/resourcemanager.organizationAdmin - - members: - - user:eve@example.com - role: roles/resourcemanager.organizationViewer - condition: - title: expirable access - description: Does not grant access after Sep 2020 - expression: request.time < timestamp('2020-10-01T00:00:00.000Z') - - For a description of IAM and its features, see the `IAM - developer's - guide `__. + google.iam.v1.policy_pb2.Policy: + Defines an Identity and Access Management (IAM) policy. It is used to + specify access control policies for Cloud Platform + resources. + + A Policy is a collection of bindings. A binding binds + one or more members to a single role. Members can be + user accounts, service accounts, Google groups, and + domains (such as G Suite). A role is a named list of + permissions (defined by IAM or configured by users). + A binding can optionally specify a condition, which + is a logic expression that further constrains the + role binding based on attributes about the request + and/or target resource. 
+ + **JSON Example** + + { + "bindings": [ + { + "role": + "roles/resourcemanager.organizationAdmin", + "members": [ "user:mike@example.com", + "group:admins@example.com", + "domain:google.com", + "serviceAccount:my-project-id@appspot.gserviceaccount.com" + ] + + }, { "role": + "roles/resourcemanager.organizationViewer", + "members": ["user:eve@example.com"], + "condition": { "title": "expirable access", + "description": "Does not grant access after + Sep 2020", "expression": "request.time < + timestamp('2020-10-01T00:00:00.000Z')", } } + + ] + + } + + **YAML Example** + + bindings: - members: - user:\ mike@example.com - + group:\ admins@example.com - domain:google.com - + serviceAccount:\ my-project-id@appspot.gserviceaccount.com + role: roles/resourcemanager.organizationAdmin - + members: - user:\ eve@example.com role: + roles/resourcemanager.organizationViewer + condition: title: expirable access description: + Does not grant access after Sep 2020 expression: + request.time < + timestamp('2020-10-01T00:00:00.000Z') + + For a description of IAM and its features, see the + [IAM developer's + guide](\ https://cloud.google.com/iam/docs). """ # Create or coerce a protobuf request object. @@ -1051,7 +1039,7 @@ async def test_iam_permissions( permission on the containing instance. Args: - request (:class:`~.iam_policy.TestIamPermissionsRequest`): + request (:class:`google.iam.v1.iam_policy_pb2.TestIamPermissionsRequest`): The request object. Request message for `TestIamPermissions` method. resource (:class:`str`): @@ -1059,6 +1047,7 @@ async def test_iam_permissions( policy detail is being requested. See the operation documentation for the appropriate value for this field. + This corresponds to the ``resource`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1067,6 +1056,7 @@ async def test_iam_permissions( Permissions with wildcards (such as '*' or 'storage.*') are not allowed. For more information see `IAM Overview `__. 
+ This corresponds to the ``permissions`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1078,8 +1068,8 @@ async def test_iam_permissions( sent along with the request as metadata. Returns: - ~.iam_policy.TestIamPermissionsResponse: - Response message for ``TestIamPermissions`` method. + google.iam.v1.iam_policy_pb2.TestIamPermissionsResponse: + Response message for TestIamPermissions method. """ # Create or coerce a protobuf request object. # Sanity check: If we got a request object, we should *not* have @@ -1147,7 +1137,7 @@ async def create_backup( databases can run concurrently. Args: - request (:class:`~.gsad_backup.CreateBackupRequest`): + request (:class:`google.cloud.spanner_admin_database_v1.types.CreateBackupRequest`): The request object. The request for [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup]. parent (:class:`str`): @@ -1158,10 +1148,11 @@ async def create_backup( in the instance configuration of this instance. Values are of the form ``projects//instances/``. + This corresponds to the ``parent`` field on the ``request`` instance; if ``request`` is provided, this should not be set. - backup (:class:`~.gsad_backup.Backup`): + backup (:class:`google.cloud.spanner_admin_database_v1.types.Backup`): Required. The backup to create. This corresponds to the ``backup`` field on the ``request`` instance; if ``request`` is provided, this @@ -1171,6 +1162,7 @@ async def create_backup( ``backup_id`` appended to ``parent`` forms the full backup name of the form ``projects//instances//backups/``. + This corresponds to the ``backup_id`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1182,12 +1174,12 @@ async def create_backup( sent along with the request as metadata. Returns: - ~.operation_async.AsyncOperation: + google.api_core.operation_async.AsyncOperation: An object representing a long-running operation. 
The result type for the operation will be - :class:``~.gsad_backup.Backup``: A backup of a Cloud - Spanner database. + :class:`google.cloud.spanner_admin_database_v1.types.Backup` + A backup of a Cloud Spanner database. """ # Create or coerce a protobuf request object. @@ -1253,12 +1245,13 @@ async def get_backup( [Backup][google.spanner.admin.database.v1.Backup]. Args: - request (:class:`~.backup.GetBackupRequest`): + request (:class:`google.cloud.spanner_admin_database_v1.types.GetBackupRequest`): The request object. The request for [GetBackup][google.spanner.admin.database.v1.DatabaseAdmin.GetBackup]. name (:class:`str`): Required. Name of the backup. Values are of the form ``projects//instances//backups/``. + This corresponds to the ``name`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1270,7 +1263,7 @@ async def get_backup( sent along with the request as metadata. Returns: - ~.backup.Backup: + google.cloud.spanner_admin_database_v1.types.Backup: A backup of a Cloud Spanner database. """ # Create or coerce a protobuf request object. @@ -1333,10 +1326,10 @@ async def update_backup( [Backup][google.spanner.admin.database.v1.Backup]. Args: - request (:class:`~.gsad_backup.UpdateBackupRequest`): + request (:class:`google.cloud.spanner_admin_database_v1.types.UpdateBackupRequest`): The request object. The request for [UpdateBackup][google.spanner.admin.database.v1.DatabaseAdmin.UpdateBackup]. - backup (:class:`~.gsad_backup.Backup`): + backup (:class:`google.cloud.spanner_admin_database_v1.types.Backup`): Required. The backup to update. ``backup.name``, and the fields to be updated as specified by ``update_mask`` are required. Other fields are ignored. Update is only @@ -1344,10 +1337,11 @@ async def update_backup( - ``backup.expire_time``. + This corresponds to the ``backup`` field on the ``request`` instance; if ``request`` is provided, this should not be set. 
- update_mask (:class:`~.field_mask.FieldMask`): + update_mask (:class:`google.protobuf.field_mask_pb2.FieldMask`): Required. A mask specifying which fields (e.g. ``expire_time``) in the Backup resource should be updated. This mask is relative to the Backup resource, @@ -1355,6 +1349,7 @@ async def update_backup( be specified; this prevents any future fields from being erased accidentally by clients that do not know about them. + This corresponds to the ``update_mask`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1366,7 +1361,7 @@ async def update_backup( sent along with the request as metadata. Returns: - ~.gsad_backup.Backup: + google.cloud.spanner_admin_database_v1.types.Backup: A backup of a Cloud Spanner database. """ # Create or coerce a protobuf request object. @@ -1432,13 +1427,14 @@ async def delete_backup( [Backup][google.spanner.admin.database.v1.Backup]. Args: - request (:class:`~.backup.DeleteBackupRequest`): + request (:class:`google.cloud.spanner_admin_database_v1.types.DeleteBackupRequest`): The request object. The request for [DeleteBackup][google.spanner.admin.database.v1.DatabaseAdmin.DeleteBackup]. name (:class:`str`): Required. Name of the backup to delete. Values are of the form ``projects//instances//backups/``. + This corresponds to the ``name`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1508,12 +1504,13 @@ async def list_backups( the most recent ``create_time``. Args: - request (:class:`~.backup.ListBackupsRequest`): + request (:class:`google.cloud.spanner_admin_database_v1.types.ListBackupsRequest`): The request object. The request for [ListBackups][google.spanner.admin.database.v1.DatabaseAdmin.ListBackups]. parent (:class:`str`): Required. The instance to list backups from. Values are of the form ``projects//instances/``. + This corresponds to the ``parent`` field on the ``request`` instance; if ``request`` is provided, this should not be set. 
@@ -1525,7 +1522,7 @@ async def list_backups( sent along with the request as metadata. Returns: - ~.pagers.ListBackupsAsyncPager: + google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListBackupsAsyncPager: The response for [ListBackups][google.spanner.admin.database.v1.DatabaseAdmin.ListBackups]. @@ -1617,7 +1614,7 @@ async def restore_database( first restore to complete. Args: - request (:class:`~.spanner_database_admin.RestoreDatabaseRequest`): + request (:class:`google.cloud.spanner_admin_database_v1.types.RestoreDatabaseRequest`): The request object. The request for [RestoreDatabase][google.spanner.admin.database.v1.DatabaseAdmin.RestoreDatabase]. parent (:class:`str`): @@ -1626,6 +1623,7 @@ async def restore_database( project and have the same instance configuration as the instance containing the source backup. Values are of the form ``projects//instances/``. + This corresponds to the ``parent`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1635,6 +1633,7 @@ async def restore_database( ``database_id`` appended to ``parent`` forms the full database name of the form ``projects//instances//databases/``. + This corresponds to the ``database_id`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1642,6 +1641,7 @@ async def restore_database( Name of the backup from which to restore. Values are of the form ``projects//instances//backups/``. + This corresponds to the ``backup`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1653,12 +1653,12 @@ async def restore_database( sent along with the request as metadata. Returns: - ~.operation_async.AsyncOperation: + google.api_core.operation_async.AsyncOperation: An object representing a long-running operation. The result type for the operation will be - :class:``~.spanner_database_admin.Database``: A Cloud - Spanner database. 
+ :class:`google.cloud.spanner_admin_database_v1.types.Database` + A Cloud Spanner database. """ # Create or coerce a protobuf request object. @@ -1732,13 +1732,14 @@ async def list_database_operations( operations. Args: - request (:class:`~.spanner_database_admin.ListDatabaseOperationsRequest`): + request (:class:`google.cloud.spanner_admin_database_v1.types.ListDatabaseOperationsRequest`): The request object. The request for [ListDatabaseOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseOperations]. parent (:class:`str`): Required. The instance of the database operations. Values are of the form ``projects//instances/``. + This corresponds to the ``parent`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1750,9 +1751,9 @@ async def list_database_operations( sent along with the request as metadata. Returns: - ~.pagers.ListDatabaseOperationsAsyncPager: + google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListDatabaseOperationsAsyncPager: The response for - [ListDatabaseOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseOperations]. + [ListDatabaseOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseOperations]. Iterating over this object will yield results and resolve additional pages automatically. @@ -1833,13 +1834,14 @@ async def list_backup_operations( order starting from the most recently started operation. Args: - request (:class:`~.backup.ListBackupOperationsRequest`): + request (:class:`google.cloud.spanner_admin_database_v1.types.ListBackupOperationsRequest`): The request object. The request for [ListBackupOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupOperations]. parent (:class:`str`): Required. The instance of the backup operations. Values are of the form ``projects//instances/``. + This corresponds to the ``parent`` field on the ``request`` instance; if ``request`` is provided, this should not be set. 
@@ -1851,9 +1853,9 @@ async def list_backup_operations(
                 sent along with the request as metadata.

         Returns:
-            ~.pagers.ListBackupOperationsAsyncPager:
+            google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListBackupOperationsAsyncPager:
                 The response for
-                [ListBackupOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupOperations].
+                [ListBackupOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupOperations].
                 Iterating over this object will yield results and
                 resolve additional pages automatically.

diff --git a/google/cloud/spanner_admin_database_v1/services/database_admin/client.py b/google/cloud/spanner_admin_database_v1/services/database_admin/client.py
index 3edfd9c9ed..8deca17c5d 100644
--- a/google/cloud/spanner_admin_database_v1/services/database_admin/client.py
+++ b/google/cloud/spanner_admin_database_v1/services/database_admin/client.py
@@ -124,6 +124,22 @@ def _get_default_mtls_endpoint(api_endpoint):
             DEFAULT_ENDPOINT
         )

+    @classmethod
+    def from_service_account_info(cls, info: dict, *args, **kwargs):
+        """Creates an instance of this client using the provided credentials info.
+
+        Args:
+            info (dict): The service account private key info.
+            args: Additional arguments to pass to the constructor.
+            kwargs: Additional arguments to pass to the constructor.
+
+        Returns:
+            DatabaseAdminClient: The constructed client.
+        """
+        credentials = service_account.Credentials.from_service_account_info(info)
+        kwargs["credentials"] = credentials
+        return cls(*args, **kwargs)
+
     @classmethod
     def from_service_account_file(cls, filename: str, *args, **kwargs):
         """Creates an instance of this client using the provided credentials
@@ -136,7 +152,7 @@ def from_service_account_file(cls, filename: str, *args, **kwargs):
             kwargs: Additional arguments to pass to the constructor.

         Returns:
-            {@api.name}: The constructed client.
+            DatabaseAdminClient: The constructed client.
        """
        credentials = service_account.Credentials.from_service_account_file(filename)
        kwargs["credentials"] = credentials
@@ -273,10 +289,10 @@ def __init__(
                 credentials identify the application to the service; if none
                 are specified, the client will attempt to ascertain the
                 credentials from the environment.
-            transport (Union[str, ~.DatabaseAdminTransport]): The
+            transport (Union[str, DatabaseAdminTransport]): The
                 transport to use. If set to None, a transport is chosen
                 automatically.
-            client_options (client_options_lib.ClientOptions): Custom options for the
+            client_options (google.api_core.client_options.ClientOptions): Custom options for the
                 client. It won't take effect if a ``transport`` instance is provided.
                 (1) The ``api_endpoint`` property can be used to override the
                 default endpoint provided by the client. GOOGLE_API_USE_MTLS_ENDPOINT
@@ -312,21 +328,17 @@ def __init__(
                 util.strtobool(os.getenv("GOOGLE_API_USE_CLIENT_CERTIFICATE", "false"))
             )

-            ssl_credentials = None
+            client_cert_source_func = None
             is_mtls = False
             if use_client_cert:
                 if client_options.client_cert_source:
-                    import grpc  # type: ignore
-
-                    cert, key = client_options.client_cert_source()
-                    ssl_credentials = grpc.ssl_channel_credentials(
-                        certificate_chain=cert, private_key=key
-                    )
                     is_mtls = True
+                    client_cert_source_func = client_options.client_cert_source
                 else:
-                    creds = SslCredentials()
-                    is_mtls = creds.is_mtls
-                    ssl_credentials = creds.ssl_credentials if is_mtls else None
+                    is_mtls = mtls.has_default_client_cert_source()
+                    client_cert_source_func = (
+                        mtls.default_client_cert_source() if is_mtls else None
+                    )

             # Figure out which api endpoint to use.
if client_options.api_endpoint is not None: @@ -369,7 +381,7 @@ def __init__( credentials_file=client_options.credentials_file, host=api_endpoint, scopes=client_options.scopes, - ssl_channel_credentials=ssl_credentials, + client_cert_source_for_mtls=client_cert_source_func, quota_project_id=client_options.quota_project_id, client_info=client_info, ) @@ -386,13 +398,14 @@ def list_databases( r"""Lists Cloud Spanner databases. Args: - request (:class:`~.spanner_database_admin.ListDatabasesRequest`): + request (google.cloud.spanner_admin_database_v1.types.ListDatabasesRequest): The request object. The request for [ListDatabases][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases]. - parent (:class:`str`): + parent (str): Required. The instance whose databases should be listed. Values are of the form ``projects//instances/``. + This corresponds to the ``parent`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -404,7 +417,7 @@ def list_databases( sent along with the request as metadata. Returns: - ~.pagers.ListDatabasesPager: + google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListDatabasesPager: The response for [ListDatabases][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases]. @@ -479,17 +492,18 @@ def create_database( successful. Args: - request (:class:`~.spanner_database_admin.CreateDatabaseRequest`): + request (google.cloud.spanner_admin_database_v1.types.CreateDatabaseRequest): The request object. The request for [CreateDatabase][google.spanner.admin.database.v1.DatabaseAdmin.CreateDatabase]. - parent (:class:`str`): + parent (str): Required. The name of the instance that will serve the new database. Values are of the form ``projects//instances/``. + This corresponds to the ``parent`` field on the ``request`` instance; if ``request`` is provided, this should not be set. - create_statement (:class:`str`): + create_statement (str): Required. 
A ``CREATE DATABASE`` statement, which specifies the ID of the new database. The database ID must conform to the regular expression @@ -497,6 +511,7 @@ def create_database( characters in length. If the database ID is a reserved word or if it contains a hyphen, the database ID must be enclosed in backticks (:literal:`\``). + This corresponds to the ``create_statement`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -508,12 +523,12 @@ def create_database( sent along with the request as metadata. Returns: - ~.operation.Operation: + google.api_core.operation.Operation: An object representing a long-running operation. The result type for the operation will be - :class:``~.spanner_database_admin.Database``: A Cloud - Spanner database. + :class:`google.cloud.spanner_admin_database_v1.types.Database` + A Cloud Spanner database. """ # Create or coerce a protobuf request object. @@ -577,13 +592,14 @@ def get_database( r"""Gets the state of a Cloud Spanner database. Args: - request (:class:`~.spanner_database_admin.GetDatabaseRequest`): + request (google.cloud.spanner_admin_database_v1.types.GetDatabaseRequest): The request object. The request for [GetDatabase][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabase]. - name (:class:`str`): + name (str): Required. The name of the requested database. Values are of the form ``projects//instances//databases/``. + This corresponds to the ``name`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -595,7 +611,7 @@ def get_database( sent along with the request as metadata. Returns: - ~.spanner_database_admin.Database: + google.cloud.spanner_admin_database_v1.types.Database: A Cloud Spanner database. """ # Create or coerce a protobuf request object. @@ -658,7 +674,7 @@ def update_database_ddl( The operation has no response. 
Args: - request (:class:`~.spanner_database_admin.UpdateDatabaseDdlRequest`): + request (google.cloud.spanner_admin_database_v1.types.UpdateDatabaseDdlRequest): The request object. Enqueues the given DDL statements to be applied, in order but not necessarily all at once, to the database schema at some point (or points) in the @@ -678,14 +694,15 @@ def update_database_ddl( monitor progress. See the [operation_id][google.spanner.admin.database.v1.UpdateDatabaseDdlRequest.operation_id] field for more details. - database (:class:`str`): + database (str): Required. The database to update. This corresponds to the ``database`` field on the ``request`` instance; if ``request`` is provided, this should not be set. - statements (:class:`Sequence[str]`): + statements (Sequence[str]): Required. DDL statements to be applied to the database. + This corresponds to the ``statements`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -697,24 +714,22 @@ def update_database_ddl( sent along with the request as metadata. Returns: - ~.operation.Operation: + google.api_core.operation.Operation: An object representing a long-running operation. - The result type for the operation will be - :class:``~.empty.Empty``: A generic empty message that - you can re-use to avoid defining duplicated empty - messages in your APIs. A typical example is to use it as - the request or the response type of an API method. For - instance: + The result type for the operation will be :class:`google.protobuf.empty_pb2.Empty` A generic empty message that you can re-use to avoid defining duplicated + empty messages in your APIs. A typical example is to + use it as the request or the response type of an API + method. 
For instance: - :: + service Foo { + rpc Bar(google.protobuf.Empty) returns + (google.protobuf.Empty); - service Foo { - rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); - } + } - The JSON representation for ``Empty`` is empty JSON - object ``{}``. + The JSON representation for Empty is empty JSON + object {}. """ # Create or coerce a protobuf request object. @@ -781,10 +796,10 @@ def drop_database( ``expire_time``. Args: - request (:class:`~.spanner_database_admin.DropDatabaseRequest`): + request (google.cloud.spanner_admin_database_v1.types.DropDatabaseRequest): The request object. The request for [DropDatabase][google.spanner.admin.database.v1.DatabaseAdmin.DropDatabase]. - database (:class:`str`): + database (str): Required. The database to be dropped. This corresponds to the ``database`` field on the ``request`` instance; if ``request`` is provided, this @@ -849,12 +864,14 @@ def get_database_ddl( [Operations][google.longrunning.Operations] API. Args: - request (:class:`~.spanner_database_admin.GetDatabaseDdlRequest`): + request (google.cloud.spanner_admin_database_v1.types.GetDatabaseDdlRequest): The request object. The request for [GetDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabaseDdl]. - database (:class:`str`): - Required. The database whose schema - we wish to get. + database (str): + Required. The database whose schema we wish to get. + Values are of the form + ``projects//instances//databases/`` + This corresponds to the ``database`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -866,7 +883,7 @@ def get_database_ddl( sent along with the request as metadata. Returns: - ~.spanner_database_admin.GetDatabaseDdlResponse: + google.cloud.spanner_admin_database_v1.types.GetDatabaseDdlResponse: The response for [GetDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabaseDdl]. 
@@ -930,14 +947,15 @@ def set_iam_policy( [resource][google.iam.v1.SetIamPolicyRequest.resource]. Args: - request (:class:`~.iam_policy.SetIamPolicyRequest`): + request (google.iam.v1.iam_policy_pb2.SetIamPolicyRequest): The request object. Request message for `SetIamPolicy` method. - resource (:class:`str`): + resource (str): REQUIRED: The resource for which the policy is being specified. See the operation documentation for the appropriate value for this field. + This corresponds to the ``resource`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -949,72 +967,62 @@ def set_iam_policy( sent along with the request as metadata. Returns: - ~.policy.Policy: - Defines an Identity and Access Management (IAM) policy. - It is used to specify access control policies for Cloud - Platform resources. - - A ``Policy`` is a collection of ``bindings``. A - ``binding`` binds one or more ``members`` to a single - ``role``. Members can be user accounts, service - accounts, Google groups, and domains (such as G Suite). - A ``role`` is a named list of permissions (defined by - IAM or configured by users). A ``binding`` can - optionally specify a ``condition``, which is a logic - expression that further constrains the role binding - based on attributes about the request and/or target - resource. 
- - **JSON Example** - - :: - - { - "bindings": [ - { - "role": "roles/resourcemanager.organizationAdmin", - "members": [ - "user:mike@example.com", - "group:admins@example.com", - "domain:google.com", - "serviceAccount:my-project-id@appspot.gserviceaccount.com" - ] - }, - { - "role": "roles/resourcemanager.organizationViewer", - "members": ["user:eve@example.com"], - "condition": { - "title": "expirable access", - "description": "Does not grant access after Sep 2020", - "expression": "request.time < - timestamp('2020-10-01T00:00:00.000Z')", - } - } - ] - } - - **YAML Example** - - :: - - bindings: - - members: - - user:mike@example.com - - group:admins@example.com - - domain:google.com - - serviceAccount:my-project-id@appspot.gserviceaccount.com - role: roles/resourcemanager.organizationAdmin - - members: - - user:eve@example.com - role: roles/resourcemanager.organizationViewer - condition: - title: expirable access - description: Does not grant access after Sep 2020 - expression: request.time < timestamp('2020-10-01T00:00:00.000Z') - - For a description of IAM and its features, see the `IAM - developer's - guide `__. + google.iam.v1.policy_pb2.Policy: + Defines an Identity and Access Management (IAM) policy. It is used to + specify access control policies for Cloud Platform + resources. + + A Policy is a collection of bindings. A binding binds + one or more members to a single role. Members can be + user accounts, service accounts, Google groups, and + domains (such as G Suite). A role is a named list of + permissions (defined by IAM or configured by users). + A binding can optionally specify a condition, which + is a logic expression that further constrains the + role binding based on attributes about the request + and/or target resource. 
+ + **JSON Example** + + { + "bindings": [ + { + "role": + "roles/resourcemanager.organizationAdmin", + "members": [ "user:mike@example.com", + "group:admins@example.com", + "domain:google.com", + "serviceAccount:my-project-id@appspot.gserviceaccount.com" + ] + + }, { "role": + "roles/resourcemanager.organizationViewer", + "members": ["user:eve@example.com"], + "condition": { "title": "expirable access", + "description": "Does not grant access after + Sep 2020", "expression": "request.time < + timestamp('2020-10-01T00:00:00.000Z')", } } + + ] + + } + + **YAML Example** + + bindings: - members: - user:\ mike@example.com - + group:\ admins@example.com - domain:google.com - + serviceAccount:\ my-project-id@appspot.gserviceaccount.com + role: roles/resourcemanager.organizationAdmin - + members: - user:\ eve@example.com role: + roles/resourcemanager.organizationViewer + condition: title: expirable access description: + Does not grant access after Sep 2020 expression: + request.time < + timestamp('2020-10-01T00:00:00.000Z') + + For a description of IAM and its features, see the + [IAM developer's + guide](\ https://cloud.google.com/iam/docs). """ # Create or coerce a protobuf request object. @@ -1072,14 +1080,15 @@ def get_iam_policy( [resource][google.iam.v1.GetIamPolicyRequest.resource]. Args: - request (:class:`~.iam_policy.GetIamPolicyRequest`): + request (google.iam.v1.iam_policy_pb2.GetIamPolicyRequest): The request object. Request message for `GetIamPolicy` method. - resource (:class:`str`): + resource (str): REQUIRED: The resource for which the policy is being requested. See the operation documentation for the appropriate value for this field. + This corresponds to the ``resource`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1091,72 +1100,62 @@ def get_iam_policy( sent along with the request as metadata. Returns: - ~.policy.Policy: - Defines an Identity and Access Management (IAM) policy. 
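The reflowed `Policy` docstring above is hard to read in diff form; the same JSON example expressed as a plain Python dict (field names follow `google.iam.v1.Policy`; passing a dict like this is an assumption for illustration, the client also accepts the protobuf message directly):

```python
# The docstring's JSON example: two bindings, the second with a condition.
policy = {
    "bindings": [
        {
            "role": "roles/resourcemanager.organizationAdmin",
            "members": [
                "user:mike@example.com",
                "group:admins@example.com",
                "domain:google.com",
                "serviceAccount:my-project-id@appspot.gserviceaccount.com",
            ],
        },
        {
            "role": "roles/resourcemanager.organizationViewer",
            "members": ["user:eve@example.com"],
            "condition": {
                "title": "expirable access",
                "description": "Does not grant access after Sep 2020",
                "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')",
            },
        },
    ]
}
```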
- It is used to specify access control policies for Cloud - Platform resources. - - A ``Policy`` is a collection of ``bindings``. A - ``binding`` binds one or more ``members`` to a single - ``role``. Members can be user accounts, service - accounts, Google groups, and domains (such as G Suite). - A ``role`` is a named list of permissions (defined by - IAM or configured by users). A ``binding`` can - optionally specify a ``condition``, which is a logic - expression that further constrains the role binding - based on attributes about the request and/or target - resource. - - **JSON Example** - - :: - - { - "bindings": [ - { - "role": "roles/resourcemanager.organizationAdmin", - "members": [ - "user:mike@example.com", - "group:admins@example.com", - "domain:google.com", - "serviceAccount:my-project-id@appspot.gserviceaccount.com" - ] - }, - { - "role": "roles/resourcemanager.organizationViewer", - "members": ["user:eve@example.com"], - "condition": { - "title": "expirable access", - "description": "Does not grant access after Sep 2020", - "expression": "request.time < - timestamp('2020-10-01T00:00:00.000Z')", - } - } - ] - } - - **YAML Example** - - :: - - bindings: - - members: - - user:mike@example.com - - group:admins@example.com - - domain:google.com - - serviceAccount:my-project-id@appspot.gserviceaccount.com - role: roles/resourcemanager.organizationAdmin - - members: - - user:eve@example.com - role: roles/resourcemanager.organizationViewer - condition: - title: expirable access - description: Does not grant access after Sep 2020 - expression: request.time < timestamp('2020-10-01T00:00:00.000Z') - - For a description of IAM and its features, see the `IAM - developer's - guide `__. + google.iam.v1.policy_pb2.Policy: + Defines an Identity and Access Management (IAM) policy. It is used to + specify access control policies for Cloud Platform + resources. + + A Policy is a collection of bindings. A binding binds + one or more members to a single role. 
Members can be + user accounts, service accounts, Google groups, and + domains (such as G Suite). A role is a named list of + permissions (defined by IAM or configured by users). + A binding can optionally specify a condition, which + is a logic expression that further constrains the + role binding based on attributes about the request + and/or target resource. + + **JSON Example** + + { + "bindings": [ + { + "role": + "roles/resourcemanager.organizationAdmin", + "members": [ "user:mike@example.com", + "group:admins@example.com", + "domain:google.com", + "serviceAccount:my-project-id@appspot.gserviceaccount.com" + ] + + }, { "role": + "roles/resourcemanager.organizationViewer", + "members": ["user:eve@example.com"], + "condition": { "title": "expirable access", + "description": "Does not grant access after + Sep 2020", "expression": "request.time < + timestamp('2020-10-01T00:00:00.000Z')", } } + + ] + + } + + **YAML Example** + + bindings: - members: - user:\ mike@example.com - + group:\ admins@example.com - domain:google.com - + serviceAccount:\ my-project-id@appspot.gserviceaccount.com + role: roles/resourcemanager.organizationAdmin - + members: - user:\ eve@example.com role: + roles/resourcemanager.organizationViewer + condition: title: expirable access description: + Does not grant access after Sep 2020 expression: + request.time < + timestamp('2020-10-01T00:00:00.000Z') + + For a description of IAM and its features, see the + [IAM developer's + guide](\ https://cloud.google.com/iam/docs). """ # Create or coerce a protobuf request object. @@ -1215,22 +1214,24 @@ def test_iam_permissions( permission on the containing instance. Args: - request (:class:`~.iam_policy.TestIamPermissionsRequest`): + request (google.iam.v1.iam_policy_pb2.TestIamPermissionsRequest): The request object. Request message for `TestIamPermissions` method. - resource (:class:`str`): + resource (str): REQUIRED: The resource for which the policy detail is being requested. 
See the operation documentation for the appropriate value for this field. + This corresponds to the ``resource`` field on the ``request`` instance; if ``request`` is provided, this should not be set. - permissions (:class:`Sequence[str]`): + permissions (Sequence[str]): The set of permissions to check for the ``resource``. Permissions with wildcards (such as '*' or 'storage.*') are not allowed. For more information see `IAM Overview `__. + This corresponds to the ``permissions`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1242,8 +1243,8 @@ def test_iam_permissions( sent along with the request as metadata. Returns: - ~.iam_policy.TestIamPermissionsResponse: - Response message for ``TestIamPermissions`` method. + google.iam.v1.iam_policy_pb2.TestIamPermissionsResponse: + Response message for TestIamPermissions method. """ # Create or coerce a protobuf request object. # Sanity check: If we got a request object, we should *not* have @@ -1307,10 +1308,10 @@ def create_backup( databases can run concurrently. Args: - request (:class:`~.gsad_backup.CreateBackupRequest`): + request (google.cloud.spanner_admin_database_v1.types.CreateBackupRequest): The request object. The request for [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup]. - parent (:class:`str`): + parent (str): Required. The name of the instance in which the backup will be created. This must be the same instance that contains the database the backup will be created from. @@ -1318,19 +1319,21 @@ def create_backup( in the instance configuration of this instance. Values are of the form ``projects//instances/``. + This corresponds to the ``parent`` field on the ``request`` instance; if ``request`` is provided, this should not be set. - backup (:class:`~.gsad_backup.Backup`): + backup (google.cloud.spanner_admin_database_v1.types.Backup): Required. The backup to create. 
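The `test_iam_permissions` docstring above states that wildcard permissions (such as `'*'` or `'storage.*'`) are rejected. A client-side sketch of that rule (the function name and behavior are illustrative assumptions, not part of the API; the server enforces this regardless):

```python
def validate_permissions(permissions):
    """Reject wildcard permissions before calling test_iam_permissions,
    mirroring the server-side rule quoted in the docstring above."""
    bad = [p for p in permissions if "*" in p]
    if bad:
        raise ValueError(f"wildcard permissions not allowed: {bad}")
    return list(permissions)

checked = validate_permissions(["spanner.databases.get", "spanner.databases.list"])
```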
This corresponds to the ``backup`` field on the ``request`` instance; if ``request`` is provided, this should not be set. - backup_id (:class:`str`): + backup_id (str): Required. The id of the backup to be created. The ``backup_id`` appended to ``parent`` forms the full backup name of the form ``projects//instances//backups/``. + This corresponds to the ``backup_id`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1342,12 +1345,12 @@ def create_backup( sent along with the request as metadata. Returns: - ~.operation.Operation: + google.api_core.operation.Operation: An object representing a long-running operation. The result type for the operation will be - :class:``~.gsad_backup.Backup``: A backup of a Cloud - Spanner database. + :class:`google.cloud.spanner_admin_database_v1.types.Backup` + A backup of a Cloud Spanner database. """ # Create or coerce a protobuf request object. @@ -1414,12 +1417,13 @@ def get_backup( [Backup][google.spanner.admin.database.v1.Backup]. Args: - request (:class:`~.backup.GetBackupRequest`): + request (google.cloud.spanner_admin_database_v1.types.GetBackupRequest): The request object. The request for [GetBackup][google.spanner.admin.database.v1.DatabaseAdmin.GetBackup]. - name (:class:`str`): + name (str): Required. Name of the backup. Values are of the form ``projects//instances//backups/``. + This corresponds to the ``name`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1431,7 +1435,7 @@ def get_backup( sent along with the request as metadata. Returns: - ~.backup.Backup: + google.cloud.spanner_admin_database_v1.types.Backup: A backup of a Cloud Spanner database. """ # Create or coerce a protobuf request object. @@ -1487,10 +1491,10 @@ def update_backup( [Backup][google.spanner.admin.database.v1.Backup]. 
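Per the `CreateBackup` docstring above, the full backup name is `backup_id` appended to `parent`. Sketched as a helper (illustrative; the generated client has a comparable `backup_path` classmethod):

```python
def backup_name(parent: str, backup_id: str) -> str:
    """Form projects/<project>/instances/<instance>/backups/<backup_id>
    from the parent instance name and a backup id, as documented for
    CreateBackup. Illustrative helper only."""
    return f"{parent}/backups/{backup_id}"

full_name = backup_name("projects/my-project/instances/my-instance", "my-backup")
```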
Args: - request (:class:`~.gsad_backup.UpdateBackupRequest`): + request (google.cloud.spanner_admin_database_v1.types.UpdateBackupRequest): The request object. The request for [UpdateBackup][google.spanner.admin.database.v1.DatabaseAdmin.UpdateBackup]. - backup (:class:`~.gsad_backup.Backup`): + backup (google.cloud.spanner_admin_database_v1.types.Backup): Required. The backup to update. ``backup.name``, and the fields to be updated as specified by ``update_mask`` are required. Other fields are ignored. Update is only @@ -1498,10 +1502,11 @@ def update_backup( - ``backup.expire_time``. + This corresponds to the ``backup`` field on the ``request`` instance; if ``request`` is provided, this should not be set. - update_mask (:class:`~.field_mask.FieldMask`): + update_mask (google.protobuf.field_mask_pb2.FieldMask): Required. A mask specifying which fields (e.g. ``expire_time``) in the Backup resource should be updated. This mask is relative to the Backup resource, @@ -1509,6 +1514,7 @@ def update_backup( be specified; this prevents any future fields from being erased accidentally by clients that do not know about them. + This corresponds to the ``update_mask`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1520,7 +1526,7 @@ def update_backup( sent along with the request as metadata. Returns: - ~.gsad_backup.Backup: + google.cloud.spanner_admin_database_v1.types.Backup: A backup of a Cloud Spanner database. """ # Create or coerce a protobuf request object. @@ -1579,13 +1585,14 @@ def delete_backup( [Backup][google.spanner.admin.database.v1.Backup]. Args: - request (:class:`~.backup.DeleteBackupRequest`): + request (google.cloud.spanner_admin_database_v1.types.DeleteBackupRequest): The request object. The request for [DeleteBackup][google.spanner.admin.database.v1.DatabaseAdmin.DeleteBackup]. - name (:class:`str`): + name (str): Required. Name of the backup to delete. Values are of the form ``projects//instances//backups/``. 
+ This corresponds to the ``name`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1648,12 +1655,13 @@ def list_backups( the most recent ``create_time``. Args: - request (:class:`~.backup.ListBackupsRequest`): + request (google.cloud.spanner_admin_database_v1.types.ListBackupsRequest): The request object. The request for [ListBackups][google.spanner.admin.database.v1.DatabaseAdmin.ListBackups]. - parent (:class:`str`): + parent (str): Required. The instance to list backups from. Values are of the form ``projects//instances/``. + This corresponds to the ``parent`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1665,7 +1673,7 @@ def list_backups( sent along with the request as metadata. Returns: - ~.pagers.ListBackupsPager: + google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListBackupsPager: The response for [ListBackups][google.spanner.admin.database.v1.DatabaseAdmin.ListBackups]. @@ -1750,31 +1758,34 @@ def restore_database( first restore to complete. Args: - request (:class:`~.spanner_database_admin.RestoreDatabaseRequest`): + request (google.cloud.spanner_admin_database_v1.types.RestoreDatabaseRequest): The request object. The request for [RestoreDatabase][google.spanner.admin.database.v1.DatabaseAdmin.RestoreDatabase]. - parent (:class:`str`): + parent (str): Required. The name of the instance in which to create the restored database. This instance must be in the same project and have the same instance configuration as the instance containing the source backup. Values are of the form ``projects//instances/``. + This corresponds to the ``parent`` field on the ``request`` instance; if ``request`` is provided, this should not be set. - database_id (:class:`str`): + database_id (str): Required. The id of the database to create and restore to. This database must not already exist. 
The ``database_id`` appended to ``parent`` forms the full database name of the form ``projects//instances//databases/``. + This corresponds to the ``database_id`` field on the ``request`` instance; if ``request`` is provided, this should not be set. - backup (:class:`str`): + backup (str): Name of the backup from which to restore. Values are of the form ``projects//instances//backups/``. + This corresponds to the ``backup`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1786,12 +1797,12 @@ def restore_database( sent along with the request as metadata. Returns: - ~.operation.Operation: + google.api_core.operation.Operation: An object representing a long-running operation. The result type for the operation will be - :class:``~.spanner_database_admin.Database``: A Cloud - Spanner database. + :class:`google.cloud.spanner_admin_database_v1.types.Database` + A Cloud Spanner database. """ # Create or coerce a protobuf request object. @@ -1866,13 +1877,14 @@ def list_database_operations( operations. Args: - request (:class:`~.spanner_database_admin.ListDatabaseOperationsRequest`): + request (google.cloud.spanner_admin_database_v1.types.ListDatabaseOperationsRequest): The request object. The request for [ListDatabaseOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseOperations]. - parent (:class:`str`): + parent (str): Required. The instance of the database operations. Values are of the form ``projects//instances/``. + This corresponds to the ``parent`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1884,9 +1896,9 @@ def list_database_operations( sent along with the request as metadata. Returns: - ~.pagers.ListDatabaseOperationsPager: + google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListDatabaseOperationsPager: The response for - [ListDatabaseOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseOperations]. 
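The `restore_database` hunk above documents that the method returns a `google.api_core.operation.Operation` whose result is a `Database`. A sketch of the calling pattern, with a stub standing in for the real operation object (the stub and its behavior are assumptions; the real `Operation.result()` polls the Operations API):

```python
class FakeOperation:
    """Stand-in for google.api_core.operation.Operation: result() blocks
    until the long-running restore finishes, then returns the Database."""

    def __init__(self, result_value):
        self._result = result_value

    def result(self, timeout=None):
        # The real Operation polls the Operations API here, honoring timeout.
        return self._result

# Calling pattern for restore_database (and create_backup):
operation = FakeOperation(result_value="restored-database")
database = operation.result(timeout=300)  # blocks until the restore is done
```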
+ [ListDatabaseOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseOperations]. Iterating over this object will yield results and resolve additional pages automatically. @@ -1962,13 +1974,14 @@ def list_backup_operations( order starting from the most recently started operation. Args: - request (:class:`~.backup.ListBackupOperationsRequest`): + request (google.cloud.spanner_admin_database_v1.types.ListBackupOperationsRequest): The request object. The request for [ListBackupOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupOperations]. - parent (:class:`str`): + parent (str): Required. The instance of the backup operations. Values are of the form ``projects//instances/``. + This corresponds to the ``parent`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1980,9 +1993,9 @@ def list_backup_operations( sent along with the request as metadata. Returns: - ~.pagers.ListBackupOperationsPager: + google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListBackupOperationsPager: The response for - [ListBackupOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupOperations]. + [ListBackupOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupOperations]. Iterating over this object will yield results and resolve additional pages automatically. diff --git a/google/cloud/spanner_admin_database_v1/services/database_admin/pagers.py b/google/cloud/spanner_admin_database_v1/services/database_admin/pagers.py index ee2a12f33e..4e5ea62e3f 100644 --- a/google/cloud/spanner_admin_database_v1/services/database_admin/pagers.py +++ b/google/cloud/spanner_admin_database_v1/services/database_admin/pagers.py @@ -26,7 +26,7 @@ class ListDatabasesPager: """A pager for iterating through ``list_databases`` requests. 
This class thinly wraps an initial - :class:`~.spanner_database_admin.ListDatabasesResponse` object, and + :class:`google.cloud.spanner_admin_database_v1.types.ListDatabasesResponse` object, and provides an ``__iter__`` method to iterate through its ``databases`` field. @@ -35,7 +35,7 @@ class ListDatabasesPager: through the ``databases`` field on the corresponding responses. - All the usual :class:`~.spanner_database_admin.ListDatabasesResponse` + All the usual :class:`google.cloud.spanner_admin_database_v1.types.ListDatabasesResponse` attributes are available on the pager. If multiple requests are made, only the most recent response is retained, and thus used for attribute lookup. """ @@ -53,9 +53,9 @@ def __init__( Args: method (Callable): The method that was originally called, and which instantiated this pager. - request (:class:`~.spanner_database_admin.ListDatabasesRequest`): + request (google.cloud.spanner_admin_database_v1.types.ListDatabasesRequest): The initial request object. - response (:class:`~.spanner_database_admin.ListDatabasesResponse`): + response (google.cloud.spanner_admin_database_v1.types.ListDatabasesResponse): The initial response object. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. @@ -88,7 +88,7 @@ class ListDatabasesAsyncPager: """A pager for iterating through ``list_databases`` requests. This class thinly wraps an initial - :class:`~.spanner_database_admin.ListDatabasesResponse` object, and + :class:`google.cloud.spanner_admin_database_v1.types.ListDatabasesResponse` object, and provides an ``__aiter__`` method to iterate through its ``databases`` field. @@ -97,7 +97,7 @@ class ListDatabasesAsyncPager: through the ``databases`` field on the corresponding responses. - All the usual :class:`~.spanner_database_admin.ListDatabasesResponse` + All the usual :class:`google.cloud.spanner_admin_database_v1.types.ListDatabasesResponse` attributes are available on the pager. 
If multiple requests are made, only the most recent response is retained, and thus used for attribute lookup. """ @@ -115,9 +115,9 @@ def __init__( Args: method (Callable): The method that was originally called, and which instantiated this pager. - request (:class:`~.spanner_database_admin.ListDatabasesRequest`): + request (google.cloud.spanner_admin_database_v1.types.ListDatabasesRequest): The initial request object. - response (:class:`~.spanner_database_admin.ListDatabasesResponse`): + response (google.cloud.spanner_admin_database_v1.types.ListDatabasesResponse): The initial response object. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. @@ -156,7 +156,7 @@ class ListBackupsPager: """A pager for iterating through ``list_backups`` requests. This class thinly wraps an initial - :class:`~.backup.ListBackupsResponse` object, and + :class:`google.cloud.spanner_admin_database_v1.types.ListBackupsResponse` object, and provides an ``__iter__`` method to iterate through its ``backups`` field. @@ -165,7 +165,7 @@ class ListBackupsPager: through the ``backups`` field on the corresponding responses. - All the usual :class:`~.backup.ListBackupsResponse` + All the usual :class:`google.cloud.spanner_admin_database_v1.types.ListBackupsResponse` attributes are available on the pager. If multiple requests are made, only the most recent response is retained, and thus used for attribute lookup. """ @@ -183,9 +183,9 @@ def __init__( Args: method (Callable): The method that was originally called, and which instantiated this pager. - request (:class:`~.backup.ListBackupsRequest`): + request (google.cloud.spanner_admin_database_v1.types.ListBackupsRequest): The initial request object. - response (:class:`~.backup.ListBackupsResponse`): + response (google.cloud.spanner_admin_database_v1.types.ListBackupsResponse): The initial response object. 
metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. @@ -218,7 +218,7 @@ class ListBackupsAsyncPager: """A pager for iterating through ``list_backups`` requests. This class thinly wraps an initial - :class:`~.backup.ListBackupsResponse` object, and + :class:`google.cloud.spanner_admin_database_v1.types.ListBackupsResponse` object, and provides an ``__aiter__`` method to iterate through its ``backups`` field. @@ -227,7 +227,7 @@ class ListBackupsAsyncPager: through the ``backups`` field on the corresponding responses. - All the usual :class:`~.backup.ListBackupsResponse` + All the usual :class:`google.cloud.spanner_admin_database_v1.types.ListBackupsResponse` attributes are available on the pager. If multiple requests are made, only the most recent response is retained, and thus used for attribute lookup. """ @@ -245,9 +245,9 @@ def __init__( Args: method (Callable): The method that was originally called, and which instantiated this pager. - request (:class:`~.backup.ListBackupsRequest`): + request (google.cloud.spanner_admin_database_v1.types.ListBackupsRequest): The initial request object. - response (:class:`~.backup.ListBackupsResponse`): + response (google.cloud.spanner_admin_database_v1.types.ListBackupsResponse): The initial response object. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. @@ -284,7 +284,7 @@ class ListDatabaseOperationsPager: """A pager for iterating through ``list_database_operations`` requests. This class thinly wraps an initial - :class:`~.spanner_database_admin.ListDatabaseOperationsResponse` object, and + :class:`google.cloud.spanner_admin_database_v1.types.ListDatabaseOperationsResponse` object, and provides an ``__iter__`` method to iterate through its ``operations`` field. @@ -293,7 +293,7 @@ class ListDatabaseOperationsPager: through the ``operations`` field on the corresponding responses. 
- All the usual :class:`~.spanner_database_admin.ListDatabaseOperationsResponse` + All the usual :class:`google.cloud.spanner_admin_database_v1.types.ListDatabaseOperationsResponse` attributes are available on the pager. If multiple requests are made, only the most recent response is retained, and thus used for attribute lookup. """ @@ -311,9 +311,9 @@ def __init__( Args: method (Callable): The method that was originally called, and which instantiated this pager. - request (:class:`~.spanner_database_admin.ListDatabaseOperationsRequest`): + request (google.cloud.spanner_admin_database_v1.types.ListDatabaseOperationsRequest): The initial request object. - response (:class:`~.spanner_database_admin.ListDatabaseOperationsResponse`): + response (google.cloud.spanner_admin_database_v1.types.ListDatabaseOperationsResponse): The initial response object. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. @@ -346,7 +346,7 @@ class ListDatabaseOperationsAsyncPager: """A pager for iterating through ``list_database_operations`` requests. This class thinly wraps an initial - :class:`~.spanner_database_admin.ListDatabaseOperationsResponse` object, and + :class:`google.cloud.spanner_admin_database_v1.types.ListDatabaseOperationsResponse` object, and provides an ``__aiter__`` method to iterate through its ``operations`` field. @@ -355,7 +355,7 @@ class ListDatabaseOperationsAsyncPager: through the ``operations`` field on the corresponding responses. - All the usual :class:`~.spanner_database_admin.ListDatabaseOperationsResponse` + All the usual :class:`google.cloud.spanner_admin_database_v1.types.ListDatabaseOperationsResponse` attributes are available on the pager. If multiple requests are made, only the most recent response is retained, and thus used for attribute lookup. """ @@ -375,9 +375,9 @@ def __init__( Args: method (Callable): The method that was originally called, and which instantiated this pager. 
- request (:class:`~.spanner_database_admin.ListDatabaseOperationsRequest`): + request (google.cloud.spanner_admin_database_v1.types.ListDatabaseOperationsRequest): The initial request object. - response (:class:`~.spanner_database_admin.ListDatabaseOperationsResponse`): + response (google.cloud.spanner_admin_database_v1.types.ListDatabaseOperationsResponse): The initial response object. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. @@ -416,7 +416,7 @@ class ListBackupOperationsPager: """A pager for iterating through ``list_backup_operations`` requests. This class thinly wraps an initial - :class:`~.backup.ListBackupOperationsResponse` object, and + :class:`google.cloud.spanner_admin_database_v1.types.ListBackupOperationsResponse` object, and provides an ``__iter__`` method to iterate through its ``operations`` field. @@ -425,7 +425,7 @@ class ListBackupOperationsPager: through the ``operations`` field on the corresponding responses. - All the usual :class:`~.backup.ListBackupOperationsResponse` + All the usual :class:`google.cloud.spanner_admin_database_v1.types.ListBackupOperationsResponse` attributes are available on the pager. If multiple requests are made, only the most recent response is retained, and thus used for attribute lookup. """ @@ -443,9 +443,9 @@ def __init__( Args: method (Callable): The method that was originally called, and which instantiated this pager. - request (:class:`~.backup.ListBackupOperationsRequest`): + request (google.cloud.spanner_admin_database_v1.types.ListBackupOperationsRequest): The initial request object. - response (:class:`~.backup.ListBackupOperationsResponse`): + response (google.cloud.spanner_admin_database_v1.types.ListBackupOperationsResponse): The initial response object. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. 
@@ -478,7 +478,7 @@ class ListBackupOperationsAsyncPager: """A pager for iterating through ``list_backup_operations`` requests. This class thinly wraps an initial - :class:`~.backup.ListBackupOperationsResponse` object, and + :class:`google.cloud.spanner_admin_database_v1.types.ListBackupOperationsResponse` object, and provides an ``__aiter__`` method to iterate through its ``operations`` field. @@ -487,7 +487,7 @@ class ListBackupOperationsAsyncPager: through the ``operations`` field on the corresponding responses. - All the usual :class:`~.backup.ListBackupOperationsResponse` + All the usual :class:`google.cloud.spanner_admin_database_v1.types.ListBackupOperationsResponse` attributes are available on the pager. If multiple requests are made, only the most recent response is retained, and thus used for attribute lookup. """ @@ -505,9 +505,9 @@ def __init__( Args: method (Callable): The method that was originally called, and which instantiated this pager. - request (:class:`~.backup.ListBackupOperationsRequest`): + request (google.cloud.spanner_admin_database_v1.types.ListBackupOperationsRequest): The initial request object. - response (:class:`~.backup.ListBackupOperationsResponse`): + response (google.cloud.spanner_admin_database_v1.types.ListBackupOperationsResponse): The initial response object. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. 
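The pager docstrings updated above all describe the same pattern: wrap the initial response, yield items from its repeated field, and fetch further pages via the page token, retaining only the most recent response. A generic sketch (class and field names here are simplified assumptions, not the generated classes):

```python
class ListPager:
    """Minimal sketch of the pager pattern: __iter__ yields items from the
    current page, then re-invokes the original method with the next page
    token until pages are exhausted."""

    def __init__(self, method, request, response):
        self._method = method      # the originally-called list method
        self._request = dict(request)
        self._response = response  # only the most recent page is retained

    def __iter__(self):
        while True:
            yield from self._response["items"]
            token = self._response.get("next_page_token")
            if not token:
                return
            self._request["page_token"] = token
            self._response = self._method(self._request)

# Fake two-page backend standing in for an RPC:
pages = {
    None: {"items": [1, 2], "next_page_token": "t"},
    "t": {"items": [3], "next_page_token": None},
}
fetch = lambda req: pages[req.get("page_token")]
pager = ListPager(fetch, {}, fetch({}))
items = list(pager)  # [1, 2, 3]
```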
diff --git a/google/cloud/spanner_admin_database_v1/services/database_admin/transports/grpc.py b/google/cloud/spanner_admin_database_v1/services/database_admin/transports/grpc.py index e8a0a6f93d..665ed4fc15 100644 --- a/google/cloud/spanner_admin_database_v1/services/database_admin/transports/grpc.py +++ b/google/cloud/spanner_admin_database_v1/services/database_admin/transports/grpc.py @@ -69,6 +69,7 @@ def __init__( api_mtls_endpoint: str = None, client_cert_source: Callable[[], Tuple[bytes, bytes]] = None, ssl_channel_credentials: grpc.ChannelCredentials = None, + client_cert_source_for_mtls: Callable[[], Tuple[bytes, bytes]] = None, quota_project_id: Optional[str] = None, client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, ) -> None: @@ -99,6 +100,10 @@ def __init__( ``api_mtls_endpoint`` is None. ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials for grpc channel. It is ignored if ``channel`` is provided. + client_cert_source_for_mtls (Optional[Callable[[], Tuple[bytes, bytes]]]): + A callback to provide client certificate bytes and private key bytes, + both in PEM format. It is used to configure mutual TLS channel. It is + ignored if ``channel`` or ``ssl_channel_credentials`` is provided. quota_project_id (Optional[str]): An optional project to use for billing and quota. client_info (google.api_core.gapic_v1.client_info.ClientInfo): @@ -115,6 +120,11 @@ def __init__( """ self._ssl_channel_credentials = ssl_channel_credentials + if api_mtls_endpoint: + warnings.warn("api_mtls_endpoint is deprecated", DeprecationWarning) + if client_cert_source: + warnings.warn("client_cert_source is deprecated", DeprecationWarning) + if channel: # Sanity check: Ensure that channel and credentials are not both # provided. 
@@ -124,11 +134,6 @@ def __init__( self._grpc_channel = channel self._ssl_channel_credentials = None elif api_mtls_endpoint: - warnings.warn( - "api_mtls_endpoint and client_cert_source are deprecated", - DeprecationWarning, - ) - host = ( api_mtls_endpoint if ":" in api_mtls_endpoint @@ -172,12 +177,18 @@ def __init__( scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id ) + if client_cert_source_for_mtls and not ssl_channel_credentials: + cert, key = client_cert_source_for_mtls() + self._ssl_channel_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + # create a new channel. The provided one is ignored. self._grpc_channel = type(self).create_channel( host, credentials=credentials, credentials_file=credentials_file, - ssl_credentials=ssl_channel_credentials, + ssl_credentials=self._ssl_channel_credentials, scopes=scopes or self.AUTH_SCOPES, quota_project_id=quota_project_id, options=[ diff --git a/google/cloud/spanner_admin_database_v1/services/database_admin/transports/grpc_asyncio.py b/google/cloud/spanner_admin_database_v1/services/database_admin/transports/grpc_asyncio.py index 7a83120018..25229d58cd 100644 --- a/google/cloud/spanner_admin_database_v1/services/database_admin/transports/grpc_asyncio.py +++ b/google/cloud/spanner_admin_database_v1/services/database_admin/transports/grpc_asyncio.py @@ -113,6 +113,7 @@ def __init__( api_mtls_endpoint: str = None, client_cert_source: Callable[[], Tuple[bytes, bytes]] = None, ssl_channel_credentials: grpc.ChannelCredentials = None, + client_cert_source_for_mtls: Callable[[], Tuple[bytes, bytes]] = None, quota_project_id=None, client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, ) -> None: @@ -144,6 +145,10 @@ def __init__( ``api_mtls_endpoint`` is None. ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials for grpc channel. It is ignored if ``channel`` is provided. 
+ client_cert_source_for_mtls (Optional[Callable[[], Tuple[bytes, bytes]]]): + A callback to provide client certificate bytes and private key bytes, + both in PEM format. It is used to configure mutual TLS channel. It is + ignored if ``channel`` or ``ssl_channel_credentials`` is provided. quota_project_id (Optional[str]): An optional project to use for billing and quota. client_info (google.api_core.gapic_v1.client_info.ClientInfo): @@ -160,6 +165,11 @@ def __init__( """ self._ssl_channel_credentials = ssl_channel_credentials + if api_mtls_endpoint: + warnings.warn("api_mtls_endpoint is deprecated", DeprecationWarning) + if client_cert_source: + warnings.warn("client_cert_source is deprecated", DeprecationWarning) + if channel: # Sanity check: Ensure that channel and credentials are not both # provided. @@ -169,11 +179,6 @@ def __init__( self._grpc_channel = channel self._ssl_channel_credentials = None elif api_mtls_endpoint: - warnings.warn( - "api_mtls_endpoint and client_cert_source are deprecated", - DeprecationWarning, - ) - host = ( api_mtls_endpoint if ":" in api_mtls_endpoint @@ -217,12 +222,18 @@ def __init__( scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id ) + if client_cert_source_for_mtls and not ssl_channel_credentials: + cert, key = client_cert_source_for_mtls() + self._ssl_channel_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + # create a new channel. The provided one is ignored. 
self._grpc_channel = type(self).create_channel( host, credentials=credentials, credentials_file=credentials_file, - ssl_credentials=ssl_channel_credentials, + ssl_credentials=self._ssl_channel_credentials, scopes=scopes or self.AUTH_SCOPES, quota_project_id=quota_project_id, options=[ diff --git a/google/cloud/spanner_admin_database_v1/types/backup.py b/google/cloud/spanner_admin_database_v1/types/backup.py index 4ab6237f04..6062cc5444 100644 --- a/google/cloud/spanner_admin_database_v1/types/backup.py +++ b/google/cloud/spanner_admin_database_v1/types/backup.py @@ -53,7 +53,12 @@ class Backup(proto.Message): created. This needs to be in the same instance as the backup. Values are of the form ``projects//instances//databases/``. - expire_time (~.timestamp.Timestamp): + version_time (google.protobuf.timestamp_pb2.Timestamp): + The backup will contain an externally consistent copy of the + database at the timestamp specified by ``version_time``. If + ``version_time`` is not specified, the system will set + ``version_time`` to the ``create_time`` of the backup. + expire_time (google.protobuf.timestamp_pb2.Timestamp): Required for the [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup] operation. The expiration time of the backup, with @@ -79,16 +84,15 @@ class Backup(proto.Message): instance configuration of the instance containing the backup, identified by the prefix of the backup name of the form ``projects//instances/``. - create_time (~.timestamp.Timestamp): - Output only. The backup will contain an externally - consistent copy of the database at the timestamp specified - by ``create_time``. ``create_time`` is approximately the - time the + create_time (google.protobuf.timestamp_pb2.Timestamp): + Output only. The time the [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup] - request is received. + request is received. 
If the request does not specify + ``version_time``, the ``version_time`` of the backup will be + equivalent to the ``create_time``. size_bytes (int): Output only. Size of the backup in bytes. - state (~.gsad_backup.Backup.State): + state (google.cloud.spanner_admin_database_v1.types.Backup.State): Output only. The current state of the backup. referencing_databases (Sequence[str]): Output only. The names of the restored databases that @@ -109,6 +113,8 @@ class State(proto.Enum): database = proto.Field(proto.STRING, number=2) + version_time = proto.Field(proto.MESSAGE, number=9, message=timestamp.Timestamp,) + expire_time = proto.Field(proto.MESSAGE, number=3, message=timestamp.Timestamp,) name = proto.Field(proto.STRING, number=1) @@ -139,7 +145,7 @@ class CreateBackupRequest(proto.Message): ``backup_id`` appended to ``parent`` forms the full backup name of the form ``projects//instances//backups/``. - backup (~.gsad_backup.Backup): + backup (google.cloud.spanner_admin_database_v1.types.Backup): Required. The backup to create. """ @@ -160,11 +166,11 @@ class CreateBackupMetadata(proto.Message): database (str): The name of the database the backup is created from. - progress (~.common.OperationProgress): + progress (google.cloud.spanner_admin_database_v1.types.OperationProgress): The progress of the [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup] operation. - cancel_time (~.timestamp.Timestamp): + cancel_time (google.protobuf.timestamp_pb2.Timestamp): The time at which cancellation of this operation was received. [Operations.CancelOperation][google.longrunning.Operations.CancelOperation] @@ -195,14 +201,14 @@ class UpdateBackupRequest(proto.Message): [UpdateBackup][google.spanner.admin.database.v1.DatabaseAdmin.UpdateBackup]. Attributes: - backup (~.gsad_backup.Backup): + backup (google.cloud.spanner_admin_database_v1.types.Backup): Required. The backup to update. 
``backup.name``, and the fields to be updated as specified by ``update_mask`` are required. Other fields are ignored. Update is only supported for the following fields: - ``backup.expire_time``. - update_mask (~.field_mask.FieldMask): + update_mask (google.protobuf.field_mask_pb2.FieldMask): Required. A mask specifying which fields (e.g. ``expire_time``) in the Backup resource should be updated. This mask is relative to the Backup resource, not to the @@ -322,7 +328,7 @@ class ListBackupsResponse(proto.Message): [ListBackups][google.spanner.admin.database.v1.DatabaseAdmin.ListBackups]. Attributes: - backups (Sequence[~.gsad_backup.Backup]): + backups (Sequence[google.cloud.spanner_admin_database_v1.types.Backup]): The list of matching backups. Backups returned are ordered by ``create_time`` in descending order, starting from the most recent ``create_time``. @@ -424,7 +430,7 @@ class ListBackupOperationsResponse(proto.Message): [ListBackupOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupOperations]. Attributes: - operations (Sequence[~.gl_operations.Operation]): + operations (Sequence[google.longrunning.operations_pb2.Operation]): The list of matching backup [long-running operations][google.longrunning.Operation]. Each operation's name will be prefixed by the backup's name and the @@ -461,10 +467,18 @@ class BackupInfo(proto.Message): Attributes: backup (str): Name of the backup. - create_time (~.timestamp.Timestamp): + version_time (google.protobuf.timestamp_pb2.Timestamp): The backup contains an externally consistent copy of ``source_database`` at the timestamp specified by + ``version_time``. If the + [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup] + request did not specify ``version_time``, the + ``version_time`` of the backup is equivalent to the ``create_time``. 
+ create_time (google.protobuf.timestamp_pb2.Timestamp): + The time the + [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup] + request was received. source_database (str): Name of the database the backup was created from. @@ -472,6 +486,8 @@ class BackupInfo(proto.Message): backup = proto.Field(proto.STRING, number=1) + version_time = proto.Field(proto.MESSAGE, number=4, message=timestamp.Timestamp,) + create_time = proto.Field(proto.MESSAGE, number=2, message=timestamp.Timestamp,) source_database = proto.Field(proto.STRING, number=3) diff --git a/google/cloud/spanner_admin_database_v1/types/common.py b/google/cloud/spanner_admin_database_v1/types/common.py index ccd8de2819..c43dbdb580 100644 --- a/google/cloud/spanner_admin_database_v1/types/common.py +++ b/google/cloud/spanner_admin_database_v1/types/common.py @@ -34,9 +34,9 @@ class OperationProgress(proto.Message): progress_percent (int): Percent completion of the operation. Values are between 0 and 100 inclusive. - start_time (~.timestamp.Timestamp): + start_time (google.protobuf.timestamp_pb2.Timestamp): Time the request was received. - end_time (~.timestamp.Timestamp): + end_time (google.protobuf.timestamp_pb2.Timestamp): If set, the time at which this operation failed or was completed successfully. """ diff --git a/google/cloud/spanner_admin_database_v1/types/spanner_database_admin.py b/google/cloud/spanner_admin_database_v1/types/spanner_database_admin.py index e99d200906..fce6a20e31 100644 --- a/google/cloud/spanner_admin_database_v1/types/spanner_database_admin.py +++ b/google/cloud/spanner_admin_database_v1/types/spanner_database_admin.py @@ -59,9 +59,9 @@ class RestoreInfo(proto.Message): r"""Information about the database restore. Attributes: - source_type (~.spanner_database_admin.RestoreSourceType): + source_type (google.cloud.spanner_admin_database_v1.types.RestoreSourceType): The type of the restore source. 
- backup_info (~.gsad_backup.BackupInfo): + backup_info (google.cloud.spanner_admin_database_v1.types.BackupInfo): Information about the backup used to restore the database. The backup may no longer exist. """ @@ -83,15 +83,24 @@ class Database(proto.Message): where ```` is as specified in the ``CREATE DATABASE`` statement. This name can be passed to other API methods to identify the database. - state (~.spanner_database_admin.Database.State): + state (google.cloud.spanner_admin_database_v1.types.Database.State): Output only. The current database state. - create_time (~.timestamp.Timestamp): + create_time (google.protobuf.timestamp_pb2.Timestamp): Output only. If exists, the time at which the database creation started. - restore_info (~.spanner_database_admin.RestoreInfo): + restore_info (google.cloud.spanner_admin_database_v1.types.RestoreInfo): Output only. Applicable only for restored databases. Contains information about the restore source. + version_retention_period (str): + Output only. The period in which Cloud Spanner retains all + versions of data for the database. This is the same as the + value of version_retention_period database option set using + [UpdateDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabaseDdl]. + Defaults to 1 hour, if not set. + earliest_version_time (google.protobuf.timestamp_pb2.Timestamp): + Output only. Earliest timestamp at which + older versions of the data can be read. """ class State(proto.Enum): @@ -109,6 +118,12 @@ class State(proto.Enum): restore_info = proto.Field(proto.MESSAGE, number=4, message="RestoreInfo",) + version_retention_period = proto.Field(proto.STRING, number=6) + + earliest_version_time = proto.Field( + proto.MESSAGE, number=7, message=timestamp.Timestamp, + ) + class ListDatabasesRequest(proto.Message): r"""The request for @@ -142,7 +157,7 @@ class ListDatabasesResponse(proto.Message): [ListDatabases][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases]. 
Attributes: - databases (Sequence[~.spanner_database_admin.Database]): + databases (Sequence[google.cloud.spanner_admin_database_v1.types.Database]): Databases that matched the request. next_page_token (str): ``next_page_token`` can be sent in a subsequent @@ -283,7 +298,7 @@ class UpdateDatabaseDdlMetadata(proto.Message): For an update this list contains all the statements. For an individual statement, this list contains only that statement. - commit_timestamps (Sequence[~.timestamp.Timestamp]): + commit_timestamps (Sequence[google.protobuf.timestamp_pb2.Timestamp]): Reports the commit timestamps of all statements that have succeeded so far, where ``commit_timestamps[i]`` is the commit timestamp for the statement ``statements[i]``. @@ -324,8 +339,9 @@ class GetDatabaseDdlRequest(proto.Message): Attributes: database (str): - Required. The database whose schema we wish - to get. + Required. The database whose schema we wish to get. Values + are of the form + ``projects//instances//databases/`` """ database = proto.Field(proto.STRING, number=1) @@ -429,7 +445,7 @@ class ListDatabaseOperationsResponse(proto.Message): [ListDatabaseOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseOperations]. Attributes: - operations (Sequence[~.gl_operations.Operation]): + operations (Sequence[google.longrunning.operations_pb2.Operation]): The list of matching database [long-running operations][google.longrunning.Operation]. Each operation's name will be prefixed by the database's name. The @@ -491,16 +507,16 @@ class RestoreDatabaseMetadata(proto.Message): name (str): Name of the database being created and restored to. - source_type (~.spanner_database_admin.RestoreSourceType): + source_type (google.cloud.spanner_admin_database_v1.types.RestoreSourceType): The type of the restore source. - backup_info (~.gsad_backup.BackupInfo): + backup_info (google.cloud.spanner_admin_database_v1.types.BackupInfo): Information about the backup used to restore the database. 
- progress (~.common.OperationProgress): + progress (google.cloud.spanner_admin_database_v1.types.OperationProgress): The progress of the [RestoreDatabase][google.spanner.admin.database.v1.DatabaseAdmin.RestoreDatabase] operation. - cancel_time (~.timestamp.Timestamp): + cancel_time (google.protobuf.timestamp_pb2.Timestamp): The time at which cancellation of this operation was received. [Operations.CancelOperation][google.longrunning.Operations.CancelOperation] @@ -557,7 +573,7 @@ class OptimizeRestoredDatabaseMetadata(proto.Message): name (str): Name of the restored database being optimized. - progress (~.common.OperationProgress): + progress (google.cloud.spanner_admin_database_v1.types.OperationProgress): The progress of the post-restore optimizations. """ diff --git a/google/cloud/spanner_admin_instance_v1/services/instance_admin/async_client.py b/google/cloud/spanner_admin_instance_v1/services/instance_admin/async_client.py index fd4cd3d18d..a83b1a2c1d 100644 --- a/google/cloud/spanner_admin_instance_v1/services/instance_admin/async_client.py +++ b/google/cloud/spanner_admin_instance_v1/services/instance_admin/async_client.py @@ -106,6 +106,7 @@ class InstanceAdminAsyncClient: InstanceAdminClient.parse_common_location_path ) + from_service_account_info = InstanceAdminClient.from_service_account_info from_service_account_file = InstanceAdminClient.from_service_account_file from_service_account_json = from_service_account_file @@ -183,13 +184,14 @@ async def list_instance_configs( given project. Args: - request (:class:`~.spanner_instance_admin.ListInstanceConfigsRequest`): + request (:class:`google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigsRequest`): The request object. The request for [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs]. parent (:class:`str`): Required. The name of the project for which a list of supported instance configurations is requested. Values are of the form ``projects/``. 
+ This corresponds to the ``parent`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -201,7 +203,7 @@ async def list_instance_configs( sent along with the request as metadata. Returns: - ~.pagers.ListInstanceConfigsAsyncPager: + google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers.ListInstanceConfigsAsyncPager: The response for [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs]. @@ -274,13 +276,14 @@ async def get_instance_config( configuration. Args: - request (:class:`~.spanner_instance_admin.GetInstanceConfigRequest`): + request (:class:`google.cloud.spanner_admin_instance_v1.types.GetInstanceConfigRequest`): The request object. The request for [GetInstanceConfigRequest][google.spanner.admin.instance.v1.InstanceAdmin.GetInstanceConfig]. name (:class:`str`): Required. The name of the requested instance configuration. Values are of the form ``projects//instanceConfigs/``. + This corresponds to the ``name`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -292,7 +295,7 @@ async def get_instance_config( sent along with the request as metadata. Returns: - ~.spanner_instance_admin.InstanceConfig: + google.cloud.spanner_admin_instance_v1.types.InstanceConfig: A possible configuration for a Cloud Spanner instance. Configurations define the geographic placement of nodes and @@ -357,13 +360,14 @@ async def list_instances( r"""Lists all instances in the given project. Args: - request (:class:`~.spanner_instance_admin.ListInstancesRequest`): + request (:class:`google.cloud.spanner_admin_instance_v1.types.ListInstancesRequest`): The request object. The request for [ListInstances][google.spanner.admin.instance.v1.InstanceAdmin.ListInstances]. parent (:class:`str`): Required. The name of the project for which a list of instances is requested. Values are of the form ``projects/``. 
+ This corresponds to the ``parent`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -375,7 +379,7 @@ async def list_instances( sent along with the request as metadata. Returns: - ~.pagers.ListInstancesAsyncPager: + google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers.ListInstancesAsyncPager: The response for [ListInstances][google.spanner.admin.instance.v1.InstanceAdmin.ListInstances]. @@ -447,12 +451,13 @@ async def get_instance( r"""Gets information about a particular instance. Args: - request (:class:`~.spanner_instance_admin.GetInstanceRequest`): + request (:class:`google.cloud.spanner_admin_instance_v1.types.GetInstanceRequest`): The request object. The request for [GetInstance][google.spanner.admin.instance.v1.InstanceAdmin.GetInstance]. name (:class:`str`): Required. The name of the requested instance. Values are of the form ``projects//instances/``. + This corresponds to the ``name`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -464,7 +469,7 @@ async def get_instance( sent along with the request as metadata. Returns: - ~.spanner_instance_admin.Instance: + google.cloud.spanner_admin_instance_v1.types.Instance: An isolated set of Cloud Spanner resources on which databases can be hosted. @@ -567,12 +572,13 @@ async def create_instance( successful. Args: - request (:class:`~.spanner_instance_admin.CreateInstanceRequest`): + request (:class:`google.cloud.spanner_admin_instance_v1.types.CreateInstanceRequest`): The request object. The request for [CreateInstance][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance]. parent (:class:`str`): Required. The name of the project in which to create the instance. Values are of the form ``projects/``. + This corresponds to the ``parent`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -580,13 +586,15 @@ async def create_instance( Required. 
The ID of the instance to create. Valid identifiers are of the form ``[a-z][-a-z0-9]*[a-z0-9]`` and must be between 2 and 64 characters in length. + This corresponds to the ``instance_id`` field on the ``request`` instance; if ``request`` is provided, this should not be set. - instance (:class:`~.spanner_instance_admin.Instance`): + instance (:class:`google.cloud.spanner_admin_instance_v1.types.Instance`): Required. The instance to create. The name may be omitted, but if specified must be ``/instances/``. + This corresponds to the ``instance`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -598,12 +606,12 @@ async def create_instance( sent along with the request as metadata. Returns: - ~.operation_async.AsyncOperation: + google.api_core.operation_async.AsyncOperation: An object representing a long-running operation. The result type for the operation will be - :class:``~.spanner_instance_admin.Instance``: An - isolated set of Cloud Spanner resources on which + :class:`google.cloud.spanner_admin_instance_v1.types.Instance` + An isolated set of Cloud Spanner resources on which databases can be hosted. """ @@ -714,19 +722,20 @@ async def update_instance( [name][google.spanner.admin.instance.v1.Instance.name]. Args: - request (:class:`~.spanner_instance_admin.UpdateInstanceRequest`): + request (:class:`google.cloud.spanner_admin_instance_v1.types.UpdateInstanceRequest`): The request object. The request for [UpdateInstance][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance]. - instance (:class:`~.spanner_instance_admin.Instance`): + instance (:class:`google.cloud.spanner_admin_instance_v1.types.Instance`): Required. The instance to update, which must always include the instance name. Otherwise, only fields mentioned in [field_mask][google.spanner.admin.instance.v1.UpdateInstanceRequest.field_mask] need be included. 
+ This corresponds to the ``instance`` field on the ``request`` instance; if ``request`` is provided, this should not be set. - field_mask (:class:`~.gp_field_mask.FieldMask`): + field_mask (:class:`google.protobuf.field_mask_pb2.FieldMask`): Required. A mask specifying which fields in [Instance][google.spanner.admin.instance.v1.Instance] should be updated. The field mask must always be @@ -734,6 +743,7 @@ async def update_instance( [Instance][google.spanner.admin.instance.v1.Instance] from being erased accidentally by clients that do not know about them. + This corresponds to the ``field_mask`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -745,12 +755,12 @@ async def update_instance( sent along with the request as metadata. Returns: - ~.operation_async.AsyncOperation: + google.api_core.operation_async.AsyncOperation: An object representing a long-running operation. The result type for the operation will be - :class:``~.spanner_instance_admin.Instance``: An - isolated set of Cloud Spanner resources on which + :class:`google.cloud.spanner_admin_instance_v1.types.Instance` + An isolated set of Cloud Spanner resources on which databases can be hosted. """ @@ -826,13 +836,14 @@ async def delete_instance( is permanently deleted. Args: - request (:class:`~.spanner_instance_admin.DeleteInstanceRequest`): + request (:class:`google.cloud.spanner_admin_instance_v1.types.DeleteInstanceRequest`): The request object. The request for [DeleteInstance][google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstance]. name (:class:`str`): Required. The name of the instance to be deleted. Values are of the form ``projects//instances/`` + This corresponds to the ``name`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -904,7 +915,7 @@ async def set_iam_policy( [resource][google.iam.v1.SetIamPolicyRequest.resource]. 
Args: - request (:class:`~.iam_policy.SetIamPolicyRequest`): + request (:class:`google.iam.v1.iam_policy_pb2.SetIamPolicyRequest`): The request object. Request message for `SetIamPolicy` method. resource (:class:`str`): @@ -912,6 +923,7 @@ async def set_iam_policy( policy is being specified. See the operation documentation for the appropriate value for this field. + This corresponds to the ``resource`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -923,72 +935,62 @@ async def set_iam_policy( sent along with the request as metadata. Returns: - ~.policy.Policy: - Defines an Identity and Access Management (IAM) policy. - It is used to specify access control policies for Cloud - Platform resources. - - A ``Policy`` is a collection of ``bindings``. A - ``binding`` binds one or more ``members`` to a single - ``role``. Members can be user accounts, service - accounts, Google groups, and domains (such as G Suite). - A ``role`` is a named list of permissions (defined by - IAM or configured by users). A ``binding`` can - optionally specify a ``condition``, which is a logic - expression that further constrains the role binding - based on attributes about the request and/or target - resource. 
- - **JSON Example** - - :: - - { - "bindings": [ - { - "role": "roles/resourcemanager.organizationAdmin", - "members": [ - "user:mike@example.com", - "group:admins@example.com", - "domain:google.com", - "serviceAccount:my-project-id@appspot.gserviceaccount.com" - ] - }, - { - "role": "roles/resourcemanager.organizationViewer", - "members": ["user:eve@example.com"], - "condition": { - "title": "expirable access", - "description": "Does not grant access after Sep 2020", - "expression": "request.time < - timestamp('2020-10-01T00:00:00.000Z')", - } - } - ] - } - - **YAML Example** - - :: - - bindings: - - members: - - user:mike@example.com - - group:admins@example.com - - domain:google.com - - serviceAccount:my-project-id@appspot.gserviceaccount.com - role: roles/resourcemanager.organizationAdmin - - members: - - user:eve@example.com - role: roles/resourcemanager.organizationViewer - condition: - title: expirable access - description: Does not grant access after Sep 2020 - expression: request.time < timestamp('2020-10-01T00:00:00.000Z') - - For a description of IAM and its features, see the `IAM - developer's - guide `__. + google.iam.v1.policy_pb2.Policy: + Defines an Identity and Access Management (IAM) policy. It is used to + specify access control policies for Cloud Platform + resources. + + A Policy is a collection of bindings. A binding binds + one or more members to a single role. Members can be + user accounts, service accounts, Google groups, and + domains (such as G Suite). A role is a named list of + permissions (defined by IAM or configured by users). + A binding can optionally specify a condition, which + is a logic expression that further constrains the + role binding based on attributes about the request + and/or target resource. 
+ + **JSON Example** + + { + "bindings": [ + { + "role": + "roles/resourcemanager.organizationAdmin", + "members": [ "user:mike@example.com", + "group:admins@example.com", + "domain:google.com", + "serviceAccount:my-project-id@appspot.gserviceaccount.com" + ] + + }, { "role": + "roles/resourcemanager.organizationViewer", + "members": ["user:eve@example.com"], + "condition": { "title": "expirable access", + "description": "Does not grant access after + Sep 2020", "expression": "request.time < + timestamp('2020-10-01T00:00:00.000Z')", } } + + ] + + } + + **YAML Example** + + bindings: - members: - user:\ mike@example.com - + group:\ admins@example.com - domain:google.com - + serviceAccount:\ my-project-id@appspot.gserviceaccount.com + role: roles/resourcemanager.organizationAdmin - + members: - user:\ eve@example.com role: + roles/resourcemanager.organizationViewer + condition: title: expirable access description: + Does not grant access after Sep 2020 expression: + request.time < + timestamp('2020-10-01T00:00:00.000Z') + + For a description of IAM and its features, see the + [IAM developer's + guide](\ https://cloud.google.com/iam/docs). """ # Create or coerce a protobuf request object. @@ -1046,7 +1048,7 @@ async def get_iam_policy( [resource][google.iam.v1.GetIamPolicyRequest.resource]. Args: - request (:class:`~.iam_policy.GetIamPolicyRequest`): + request (:class:`google.iam.v1.iam_policy_pb2.GetIamPolicyRequest`): The request object. Request message for `GetIamPolicy` method. resource (:class:`str`): @@ -1054,6 +1056,7 @@ async def get_iam_policy( policy is being requested. See the operation documentation for the appropriate value for this field. + This corresponds to the ``resource`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1065,72 +1068,62 @@ async def get_iam_policy( sent along with the request as metadata. Returns: - ~.policy.Policy: - Defines an Identity and Access Management (IAM) policy. 
- It is used to specify access control policies for Cloud - Platform resources. - - A ``Policy`` is a collection of ``bindings``. A - ``binding`` binds one or more ``members`` to a single - ``role``. Members can be user accounts, service - accounts, Google groups, and domains (such as G Suite). - A ``role`` is a named list of permissions (defined by - IAM or configured by users). A ``binding`` can - optionally specify a ``condition``, which is a logic - expression that further constrains the role binding - based on attributes about the request and/or target - resource. - - **JSON Example** - - :: - - { - "bindings": [ - { - "role": "roles/resourcemanager.organizationAdmin", - "members": [ - "user:mike@example.com", - "group:admins@example.com", - "domain:google.com", - "serviceAccount:my-project-id@appspot.gserviceaccount.com" - ] - }, - { - "role": "roles/resourcemanager.organizationViewer", - "members": ["user:eve@example.com"], - "condition": { - "title": "expirable access", - "description": "Does not grant access after Sep 2020", - "expression": "request.time < - timestamp('2020-10-01T00:00:00.000Z')", - } - } - ] - } - - **YAML Example** - - :: - - bindings: - - members: - - user:mike@example.com - - group:admins@example.com - - domain:google.com - - serviceAccount:my-project-id@appspot.gserviceaccount.com - role: roles/resourcemanager.organizationAdmin - - members: - - user:eve@example.com - role: roles/resourcemanager.organizationViewer - condition: - title: expirable access - description: Does not grant access after Sep 2020 - expression: request.time < timestamp('2020-10-01T00:00:00.000Z') - - For a description of IAM and its features, see the `IAM - developer's - guide `__. + google.iam.v1.policy_pb2.Policy: + Defines an Identity and Access Management (IAM) policy. It is used to + specify access control policies for Cloud Platform + resources. + + A Policy is a collection of bindings. A binding binds + one or more members to a single role. 
Members can be + user accounts, service accounts, Google groups, and + domains (such as G Suite). A role is a named list of + permissions (defined by IAM or configured by users). + A binding can optionally specify a condition, which + is a logic expression that further constrains the + role binding based on attributes about the request + and/or target resource. + + **JSON Example** + + { + "bindings": [ + { + "role": + "roles/resourcemanager.organizationAdmin", + "members": [ "user:mike@example.com", + "group:admins@example.com", + "domain:google.com", + "serviceAccount:my-project-id@appspot.gserviceaccount.com" + ] + + }, { "role": + "roles/resourcemanager.organizationViewer", + "members": ["user:eve@example.com"], + "condition": { "title": "expirable access", + "description": "Does not grant access after + Sep 2020", "expression": "request.time < + timestamp('2020-10-01T00:00:00.000Z')", } } + + ] + + } + + **YAML Example** + + bindings: - members: - user:\ mike@example.com - + group:\ admins@example.com - domain:google.com - + serviceAccount:\ my-project-id@appspot.gserviceaccount.com + role: roles/resourcemanager.organizationAdmin - + members: - user:\ eve@example.com role: + roles/resourcemanager.organizationViewer + condition: title: expirable access description: + Does not grant access after Sep 2020 expression: + request.time < + timestamp('2020-10-01T00:00:00.000Z') + + For a description of IAM and its features, see the + [IAM developer's + guide](\ https://cloud.google.com/iam/docs). """ # Create or coerce a protobuf request object. @@ -1198,7 +1191,7 @@ async def test_iam_permissions( Cloud Project. Otherwise returns an empty set of permissions. Args: - request (:class:`~.iam_policy.TestIamPermissionsRequest`): + request (:class:`google.iam.v1.iam_policy_pb2.TestIamPermissionsRequest`): The request object. Request message for `TestIamPermissions` method. 
resource (:class:`str`): @@ -1206,6 +1199,7 @@ async def test_iam_permissions( policy detail is being requested. See the operation documentation for the appropriate value for this field. + This corresponds to the ``resource`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1214,6 +1208,7 @@ async def test_iam_permissions( Permissions with wildcards (such as '*' or 'storage.*') are not allowed. For more information see `IAM Overview `__. + This corresponds to the ``permissions`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1225,8 +1220,8 @@ async def test_iam_permissions( sent along with the request as metadata. Returns: - ~.iam_policy.TestIamPermissionsResponse: - Response message for ``TestIamPermissions`` method. + google.iam.v1.iam_policy_pb2.TestIamPermissionsResponse: + Response message for TestIamPermissions method. """ # Create or coerce a protobuf request object. # Sanity check: If we got a request object, we should *not* have diff --git a/google/cloud/spanner_admin_instance_v1/services/instance_admin/client.py b/google/cloud/spanner_admin_instance_v1/services/instance_admin/client.py index c82a2065bc..369d9fcced 100644 --- a/google/cloud/spanner_admin_instance_v1/services/instance_admin/client.py +++ b/google/cloud/spanner_admin_instance_v1/services/instance_admin/client.py @@ -134,6 +134,22 @@ def _get_default_mtls_endpoint(api_endpoint): DEFAULT_ENDPOINT ) + @classmethod + def from_service_account_info(cls, info: dict, *args, **kwargs): + """Creates an instance of this client using the provided credentials info. + + Args: + info (dict): The service account private key info. + args: Additional arguments to pass to the constructor. + kwargs: Additional arguments to pass to the constructor. + + Returns: + InstanceAdminClient: The constructed client. 
+ """ + credentials = service_account.Credentials.from_service_account_info(info) + kwargs["credentials"] = credentials + return cls(*args, **kwargs) + @classmethod def from_service_account_file(cls, filename: str, *args, **kwargs): """Creates an instance of this client using the provided credentials @@ -146,7 +162,7 @@ def from_service_account_file(cls, filename: str, *args, **kwargs): kwargs: Additional arguments to pass to the constructor. Returns: - {@api.name}: The constructed client. + InstanceAdminClient: The constructed client. """ credentials = service_account.Credentials.from_service_account_file(filename) kwargs["credentials"] = credentials @@ -267,10 +283,10 @@ def __init__( credentials identify the application to the service; if none are specified, the client will attempt to ascertain the credentials from the environment. - transport (Union[str, ~.InstanceAdminTransport]): The + transport (Union[str, InstanceAdminTransport]): The transport to use. If set to None, a transport is chosen automatically. - client_options (client_options_lib.ClientOptions): Custom options for the + client_options (google.api_core.client_options.ClientOptions): Custom options for the client. It won't take effect if a ``transport`` instance is provided. (1) The ``api_endpoint`` property can be used to override the default endpoint provided by the client. 
GOOGLE_API_USE_MTLS_ENDPOINT @@ -306,21 +322,17 @@ def __init__( util.strtobool(os.getenv("GOOGLE_API_USE_CLIENT_CERTIFICATE", "false")) ) - ssl_credentials = None + client_cert_source_func = None is_mtls = False if use_client_cert: if client_options.client_cert_source: - import grpc # type: ignore - - cert, key = client_options.client_cert_source() - ssl_credentials = grpc.ssl_channel_credentials( - certificate_chain=cert, private_key=key - ) is_mtls = True + client_cert_source_func = client_options.client_cert_source else: - creds = SslCredentials() - is_mtls = creds.is_mtls - ssl_credentials = creds.ssl_credentials if is_mtls else None + is_mtls = mtls.has_default_client_cert_source() + client_cert_source_func = ( + mtls.default_client_cert_source() if is_mtls else None + ) # Figure out which api endpoint to use. if client_options.api_endpoint is not None: @@ -363,7 +375,7 @@ def __init__( credentials_file=client_options.credentials_file, host=api_endpoint, scopes=client_options.scopes, - ssl_channel_credentials=ssl_credentials, + client_cert_source_for_mtls=client_cert_source_func, quota_project_id=client_options.quota_project_id, client_info=client_info, ) @@ -381,13 +393,14 @@ def list_instance_configs( given project. Args: - request (:class:`~.spanner_instance_admin.ListInstanceConfigsRequest`): + request (google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigsRequest): The request object. The request for [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs]. - parent (:class:`str`): + parent (str): Required. The name of the project for which a list of supported instance configurations is requested. Values are of the form ``projects/``. + This corresponds to the ``parent`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -399,7 +412,7 @@ def list_instance_configs( sent along with the request as metadata. 
Returns: - ~.pagers.ListInstanceConfigsPager: + google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers.ListInstanceConfigsPager: The response for [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs]. @@ -465,13 +478,14 @@ def get_instance_config( configuration. Args: - request (:class:`~.spanner_instance_admin.GetInstanceConfigRequest`): + request (google.cloud.spanner_admin_instance_v1.types.GetInstanceConfigRequest): The request object. The request for [GetInstanceConfigRequest][google.spanner.admin.instance.v1.InstanceAdmin.GetInstanceConfig]. - name (:class:`str`): + name (str): Required. The name of the requested instance configuration. Values are of the form ``projects//instanceConfigs/``. + This corresponds to the ``name`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -483,7 +497,7 @@ def get_instance_config( sent along with the request as metadata. Returns: - ~.spanner_instance_admin.InstanceConfig: + google.cloud.spanner_admin_instance_v1.types.InstanceConfig: A possible configuration for a Cloud Spanner instance. Configurations define the geographic placement of nodes and @@ -541,13 +555,14 @@ def list_instances( r"""Lists all instances in the given project. Args: - request (:class:`~.spanner_instance_admin.ListInstancesRequest`): + request (google.cloud.spanner_admin_instance_v1.types.ListInstancesRequest): The request object. The request for [ListInstances][google.spanner.admin.instance.v1.InstanceAdmin.ListInstances]. - parent (:class:`str`): + parent (str): Required. The name of the project for which a list of instances is requested. Values are of the form ``projects/``. + This corresponds to the ``parent`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -559,7 +574,7 @@ def list_instances( sent along with the request as metadata. 
Returns: - ~.pagers.ListInstancesPager: + google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers.ListInstancesPager: The response for [ListInstances][google.spanner.admin.instance.v1.InstanceAdmin.ListInstances]. @@ -624,12 +639,13 @@ def get_instance( r"""Gets information about a particular instance. Args: - request (:class:`~.spanner_instance_admin.GetInstanceRequest`): + request (google.cloud.spanner_admin_instance_v1.types.GetInstanceRequest): The request object. The request for [GetInstance][google.spanner.admin.instance.v1.InstanceAdmin.GetInstance]. - name (:class:`str`): + name (str): Required. The name of the requested instance. Values are of the form ``projects//instances/``. + This corresponds to the ``name`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -641,7 +657,7 @@ def get_instance( sent along with the request as metadata. Returns: - ~.spanner_instance_admin.Instance: + google.cloud.spanner_admin_instance_v1.types.Instance: An isolated set of Cloud Spanner resources on which databases can be hosted. @@ -737,26 +753,29 @@ def create_instance( successful. Args: - request (:class:`~.spanner_instance_admin.CreateInstanceRequest`): + request (google.cloud.spanner_admin_instance_v1.types.CreateInstanceRequest): The request object. The request for [CreateInstance][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance]. - parent (:class:`str`): + parent (str): Required. The name of the project in which to create the instance. Values are of the form ``projects/``. + This corresponds to the ``parent`` field on the ``request`` instance; if ``request`` is provided, this should not be set. - instance_id (:class:`str`): + instance_id (str): Required. The ID of the instance to create. Valid identifiers are of the form ``[a-z][-a-z0-9]*[a-z0-9]`` and must be between 2 and 64 characters in length. 
+ This corresponds to the ``instance_id`` field on the ``request`` instance; if ``request`` is provided, this should not be set. - instance (:class:`~.spanner_instance_admin.Instance`): + instance (google.cloud.spanner_admin_instance_v1.types.Instance): Required. The instance to create. The name may be omitted, but if specified must be ``/instances/``. + This corresponds to the ``instance`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -768,12 +787,12 @@ def create_instance( sent along with the request as metadata. Returns: - ~.operation.Operation: + google.api_core.operation.Operation: An object representing a long-running operation. The result type for the operation will be - :class:``~.spanner_instance_admin.Instance``: An - isolated set of Cloud Spanner resources on which + :class:`google.cloud.spanner_admin_instance_v1.types.Instance` + An isolated set of Cloud Spanner resources on which databases can be hosted. """ @@ -885,19 +904,20 @@ def update_instance( [name][google.spanner.admin.instance.v1.Instance.name]. Args: - request (:class:`~.spanner_instance_admin.UpdateInstanceRequest`): + request (google.cloud.spanner_admin_instance_v1.types.UpdateInstanceRequest): The request object. The request for [UpdateInstance][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance]. - instance (:class:`~.spanner_instance_admin.Instance`): + instance (google.cloud.spanner_admin_instance_v1.types.Instance): Required. The instance to update, which must always include the instance name. Otherwise, only fields mentioned in [field_mask][google.spanner.admin.instance.v1.UpdateInstanceRequest.field_mask] need be included. + This corresponds to the ``instance`` field on the ``request`` instance; if ``request`` is provided, this should not be set. - field_mask (:class:`~.gp_field_mask.FieldMask`): + field_mask (google.protobuf.field_mask_pb2.FieldMask): Required. 
A mask specifying which fields in [Instance][google.spanner.admin.instance.v1.Instance] should be updated. The field mask must always be @@ -905,6 +925,7 @@ def update_instance( [Instance][google.spanner.admin.instance.v1.Instance] from being erased accidentally by clients that do not know about them. + This corresponds to the ``field_mask`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -916,12 +937,12 @@ def update_instance( sent along with the request as metadata. Returns: - ~.operation.Operation: + google.api_core.operation.Operation: An object representing a long-running operation. The result type for the operation will be - :class:``~.spanner_instance_admin.Instance``: An - isolated set of Cloud Spanner resources on which + :class:`google.cloud.spanner_admin_instance_v1.types.Instance` + An isolated set of Cloud Spanner resources on which databases can be hosted. """ @@ -998,13 +1019,14 @@ def delete_instance( is permanently deleted. Args: - request (:class:`~.spanner_instance_admin.DeleteInstanceRequest`): + request (google.cloud.spanner_admin_instance_v1.types.DeleteInstanceRequest): The request object. The request for [DeleteInstance][google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstance]. - name (:class:`str`): + name (str): Required. The name of the instance to be deleted. Values are of the form ``projects//instances/`` + This corresponds to the ``name`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1069,14 +1091,15 @@ def set_iam_policy( [resource][google.iam.v1.SetIamPolicyRequest.resource]. Args: - request (:class:`~.iam_policy.SetIamPolicyRequest`): + request (google.iam.v1.iam_policy_pb2.SetIamPolicyRequest): The request object. Request message for `SetIamPolicy` method. - resource (:class:`str`): + resource (str): REQUIRED: The resource for which the policy is being specified. See the operation documentation for the appropriate value for this field. 
+ This corresponds to the ``resource`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1088,72 +1111,62 @@ def set_iam_policy( sent along with the request as metadata. Returns: - ~.policy.Policy: - Defines an Identity and Access Management (IAM) policy. - It is used to specify access control policies for Cloud - Platform resources. - - A ``Policy`` is a collection of ``bindings``. A - ``binding`` binds one or more ``members`` to a single - ``role``. Members can be user accounts, service - accounts, Google groups, and domains (such as G Suite). - A ``role`` is a named list of permissions (defined by - IAM or configured by users). A ``binding`` can - optionally specify a ``condition``, which is a logic - expression that further constrains the role binding - based on attributes about the request and/or target - resource. - - **JSON Example** - - :: - - { - "bindings": [ - { - "role": "roles/resourcemanager.organizationAdmin", - "members": [ - "user:mike@example.com", - "group:admins@example.com", - "domain:google.com", - "serviceAccount:my-project-id@appspot.gserviceaccount.com" - ] - }, - { - "role": "roles/resourcemanager.organizationViewer", - "members": ["user:eve@example.com"], - "condition": { - "title": "expirable access", - "description": "Does not grant access after Sep 2020", - "expression": "request.time < - timestamp('2020-10-01T00:00:00.000Z')", - } - } - ] - } - - **YAML Example** - - :: - - bindings: - - members: - - user:mike@example.com - - group:admins@example.com - - domain:google.com - - serviceAccount:my-project-id@appspot.gserviceaccount.com - role: roles/resourcemanager.organizationAdmin - - members: - - user:eve@example.com - role: roles/resourcemanager.organizationViewer - condition: - title: expirable access - description: Does not grant access after Sep 2020 - expression: request.time < timestamp('2020-10-01T00:00:00.000Z') - - For a description of IAM and its features, see the `IAM - developer's - 
guide `__. + google.iam.v1.policy_pb2.Policy: + Defines an Identity and Access Management (IAM) policy. It is used to + specify access control policies for Cloud Platform + resources. + + A Policy is a collection of bindings. A binding binds + one or more members to a single role. Members can be + user accounts, service accounts, Google groups, and + domains (such as G Suite). A role is a named list of + permissions (defined by IAM or configured by users). + A binding can optionally specify a condition, which + is a logic expression that further constrains the + role binding based on attributes about the request + and/or target resource. + + **JSON Example** + + { + "bindings": [ + { + "role": + "roles/resourcemanager.organizationAdmin", + "members": [ "user:mike@example.com", + "group:admins@example.com", + "domain:google.com", + "serviceAccount:my-project-id@appspot.gserviceaccount.com" + ] + + }, { "role": + "roles/resourcemanager.organizationViewer", + "members": ["user:eve@example.com"], + "condition": { "title": "expirable access", + "description": "Does not grant access after + Sep 2020", "expression": "request.time < + timestamp('2020-10-01T00:00:00.000Z')", } } + + ] + + } + + **YAML Example** + + bindings: - members: - user:\ mike@example.com - + group:\ admins@example.com - domain:google.com - + serviceAccount:\ my-project-id@appspot.gserviceaccount.com + role: roles/resourcemanager.organizationAdmin - + members: - user:\ eve@example.com role: + roles/resourcemanager.organizationViewer + condition: title: expirable access description: + Does not grant access after Sep 2020 expression: + request.time < + timestamp('2020-10-01T00:00:00.000Z') + + For a description of IAM and its features, see the + [IAM developer's + guide](\ https://cloud.google.com/iam/docs). """ # Create or coerce a protobuf request object. @@ -1207,14 +1220,15 @@ def get_iam_policy( [resource][google.iam.v1.GetIamPolicyRequest.resource]. 
Args: - request (:class:`~.iam_policy.GetIamPolicyRequest`): + request (google.iam.v1.iam_policy_pb2.GetIamPolicyRequest): The request object. Request message for `GetIamPolicy` method. - resource (:class:`str`): + resource (str): REQUIRED: The resource for which the policy is being requested. See the operation documentation for the appropriate value for this field. + This corresponds to the ``resource`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1226,72 +1240,62 @@ def get_iam_policy( sent along with the request as metadata. Returns: - ~.policy.Policy: - Defines an Identity and Access Management (IAM) policy. - It is used to specify access control policies for Cloud - Platform resources. - - A ``Policy`` is a collection of ``bindings``. A - ``binding`` binds one or more ``members`` to a single - ``role``. Members can be user accounts, service - accounts, Google groups, and domains (such as G Suite). - A ``role`` is a named list of permissions (defined by - IAM or configured by users). A ``binding`` can - optionally specify a ``condition``, which is a logic - expression that further constrains the role binding - based on attributes about the request and/or target - resource. 
- - **JSON Example** - - :: - - { - "bindings": [ - { - "role": "roles/resourcemanager.organizationAdmin", - "members": [ - "user:mike@example.com", - "group:admins@example.com", - "domain:google.com", - "serviceAccount:my-project-id@appspot.gserviceaccount.com" - ] - }, - { - "role": "roles/resourcemanager.organizationViewer", - "members": ["user:eve@example.com"], - "condition": { - "title": "expirable access", - "description": "Does not grant access after Sep 2020", - "expression": "request.time < - timestamp('2020-10-01T00:00:00.000Z')", - } - } - ] - } - - **YAML Example** - - :: - - bindings: - - members: - - user:mike@example.com - - group:admins@example.com - - domain:google.com - - serviceAccount:my-project-id@appspot.gserviceaccount.com - role: roles/resourcemanager.organizationAdmin - - members: - - user:eve@example.com - role: roles/resourcemanager.organizationViewer - condition: - title: expirable access - description: Does not grant access after Sep 2020 - expression: request.time < timestamp('2020-10-01T00:00:00.000Z') - - For a description of IAM and its features, see the `IAM - developer's - guide `__. + google.iam.v1.policy_pb2.Policy: + Defines an Identity and Access Management (IAM) policy. It is used to + specify access control policies for Cloud Platform + resources. + + A Policy is a collection of bindings. A binding binds + one or more members to a single role. Members can be + user accounts, service accounts, Google groups, and + domains (such as G Suite). A role is a named list of + permissions (defined by IAM or configured by users). + A binding can optionally specify a condition, which + is a logic expression that further constrains the + role binding based on attributes about the request + and/or target resource. 
+ + **JSON Example** + + { + "bindings": [ + { + "role": + "roles/resourcemanager.organizationAdmin", + "members": [ "user:mike@example.com", + "group:admins@example.com", + "domain:google.com", + "serviceAccount:my-project-id@appspot.gserviceaccount.com" + ] + + }, { "role": + "roles/resourcemanager.organizationViewer", + "members": ["user:eve@example.com"], + "condition": { "title": "expirable access", + "description": "Does not grant access after + Sep 2020", "expression": "request.time < + timestamp('2020-10-01T00:00:00.000Z')", } } + + ] + + } + + **YAML Example** + + bindings: - members: - user:\ mike@example.com - + group:\ admins@example.com - domain:google.com - + serviceAccount:\ my-project-id@appspot.gserviceaccount.com + role: roles/resourcemanager.organizationAdmin - + members: - user:\ eve@example.com role: + roles/resourcemanager.organizationViewer + condition: title: expirable access description: + Does not grant access after Sep 2020 expression: + request.time < + timestamp('2020-10-01T00:00:00.000Z') + + For a description of IAM and its features, see the + [IAM developer's + guide](\ https://cloud.google.com/iam/docs). """ # Create or coerce a protobuf request object. @@ -1347,22 +1351,24 @@ def test_iam_permissions( Cloud Project. Otherwise returns an empty set of permissions. Args: - request (:class:`~.iam_policy.TestIamPermissionsRequest`): + request (google.iam.v1.iam_policy_pb2.TestIamPermissionsRequest): The request object. Request message for `TestIamPermissions` method. - resource (:class:`str`): + resource (str): REQUIRED: The resource for which the policy detail is being requested. See the operation documentation for the appropriate value for this field. + This corresponds to the ``resource`` field on the ``request`` instance; if ``request`` is provided, this should not be set. - permissions (:class:`Sequence[str]`): + permissions (Sequence[str]): The set of permissions to check for the ``resource``. 
Permissions with wildcards (such as '*' or 'storage.*') are not allowed. For more information see `IAM Overview `__. + This corresponds to the ``permissions`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1374,8 +1380,8 @@ def test_iam_permissions( sent along with the request as metadata. Returns: - ~.iam_policy.TestIamPermissionsResponse: - Response message for ``TestIamPermissions`` method. + google.iam.v1.iam_policy_pb2.TestIamPermissionsResponse: + Response message for TestIamPermissions method. """ # Create or coerce a protobuf request object. # Sanity check: If we got a request object, we should *not* have diff --git a/google/cloud/spanner_admin_instance_v1/services/instance_admin/pagers.py b/google/cloud/spanner_admin_instance_v1/services/instance_admin/pagers.py index 0cb1ea3643..85e1823da5 100644 --- a/google/cloud/spanner_admin_instance_v1/services/instance_admin/pagers.py +++ b/google/cloud/spanner_admin_instance_v1/services/instance_admin/pagers.py @@ -24,7 +24,7 @@ class ListInstanceConfigsPager: """A pager for iterating through ``list_instance_configs`` requests. This class thinly wraps an initial - :class:`~.spanner_instance_admin.ListInstanceConfigsResponse` object, and + :class:`google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigsResponse` object, and provides an ``__iter__`` method to iterate through its ``instance_configs`` field. @@ -33,7 +33,7 @@ class ListInstanceConfigsPager: through the ``instance_configs`` field on the corresponding responses. - All the usual :class:`~.spanner_instance_admin.ListInstanceConfigsResponse` + All the usual :class:`google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigsResponse` attributes are available on the pager. If multiple requests are made, only the most recent response is retained, and thus used for attribute lookup. 
""" @@ -51,9 +51,9 @@ def __init__( Args: method (Callable): The method that was originally called, and which instantiated this pager. - request (:class:`~.spanner_instance_admin.ListInstanceConfigsRequest`): + request (google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigsRequest): The initial request object. - response (:class:`~.spanner_instance_admin.ListInstanceConfigsResponse`): + response (google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigsResponse): The initial response object. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. @@ -86,7 +86,7 @@ class ListInstanceConfigsAsyncPager: """A pager for iterating through ``list_instance_configs`` requests. This class thinly wraps an initial - :class:`~.spanner_instance_admin.ListInstanceConfigsResponse` object, and + :class:`google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigsResponse` object, and provides an ``__aiter__`` method to iterate through its ``instance_configs`` field. @@ -95,7 +95,7 @@ class ListInstanceConfigsAsyncPager: through the ``instance_configs`` field on the corresponding responses. - All the usual :class:`~.spanner_instance_admin.ListInstanceConfigsResponse` + All the usual :class:`google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigsResponse` attributes are available on the pager. If multiple requests are made, only the most recent response is retained, and thus used for attribute lookup. """ @@ -115,9 +115,9 @@ def __init__( Args: method (Callable): The method that was originally called, and which instantiated this pager. - request (:class:`~.spanner_instance_admin.ListInstanceConfigsRequest`): + request (google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigsRequest): The initial request object. - response (:class:`~.spanner_instance_admin.ListInstanceConfigsResponse`): + response (google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigsResponse): The initial response object. 
metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. @@ -156,7 +156,7 @@ class ListInstancesPager: """A pager for iterating through ``list_instances`` requests. This class thinly wraps an initial - :class:`~.spanner_instance_admin.ListInstancesResponse` object, and + :class:`google.cloud.spanner_admin_instance_v1.types.ListInstancesResponse` object, and provides an ``__iter__`` method to iterate through its ``instances`` field. @@ -165,7 +165,7 @@ class ListInstancesPager: through the ``instances`` field on the corresponding responses. - All the usual :class:`~.spanner_instance_admin.ListInstancesResponse` + All the usual :class:`google.cloud.spanner_admin_instance_v1.types.ListInstancesResponse` attributes are available on the pager. If multiple requests are made, only the most recent response is retained, and thus used for attribute lookup. """ @@ -183,9 +183,9 @@ def __init__( Args: method (Callable): The method that was originally called, and which instantiated this pager. - request (:class:`~.spanner_instance_admin.ListInstancesRequest`): + request (google.cloud.spanner_admin_instance_v1.types.ListInstancesRequest): The initial request object. - response (:class:`~.spanner_instance_admin.ListInstancesResponse`): + response (google.cloud.spanner_admin_instance_v1.types.ListInstancesResponse): The initial response object. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. @@ -218,7 +218,7 @@ class ListInstancesAsyncPager: """A pager for iterating through ``list_instances`` requests. This class thinly wraps an initial - :class:`~.spanner_instance_admin.ListInstancesResponse` object, and + :class:`google.cloud.spanner_admin_instance_v1.types.ListInstancesResponse` object, and provides an ``__aiter__`` method to iterate through its ``instances`` field. @@ -227,7 +227,7 @@ class ListInstancesAsyncPager: through the ``instances`` field on the corresponding responses. 
- All the usual :class:`~.spanner_instance_admin.ListInstancesResponse` + All the usual :class:`google.cloud.spanner_admin_instance_v1.types.ListInstancesResponse` attributes are available on the pager. If multiple requests are made, only the most recent response is retained, and thus used for attribute lookup. """ @@ -245,9 +245,9 @@ def __init__( Args: method (Callable): The method that was originally called, and which instantiated this pager. - request (:class:`~.spanner_instance_admin.ListInstancesRequest`): + request (google.cloud.spanner_admin_instance_v1.types.ListInstancesRequest): The initial request object. - response (:class:`~.spanner_instance_admin.ListInstancesResponse`): + response (google.cloud.spanner_admin_instance_v1.types.ListInstancesResponse): The initial response object. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. diff --git a/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/grpc.py b/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/grpc.py index aa827a3b75..e896249468 100644 --- a/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/grpc.py +++ b/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/grpc.py @@ -82,6 +82,7 @@ def __init__( api_mtls_endpoint: str = None, client_cert_source: Callable[[], Tuple[bytes, bytes]] = None, ssl_channel_credentials: grpc.ChannelCredentials = None, + client_cert_source_for_mtls: Callable[[], Tuple[bytes, bytes]] = None, quota_project_id: Optional[str] = None, client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, ) -> None: @@ -112,6 +113,10 @@ def __init__( ``api_mtls_endpoint`` is None. ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials for grpc channel. It is ignored if ``channel`` is provided. 
+ client_cert_source_for_mtls (Optional[Callable[[], Tuple[bytes, bytes]]]): + A callback to provide client certificate bytes and private key bytes, + both in PEM format. It is used to configure mutual TLS channel. It is + ignored if ``channel`` or ``ssl_channel_credentials`` is provided. quota_project_id (Optional[str]): An optional project to use for billing and quota. client_info (google.api_core.gapic_v1.client_info.ClientInfo): @@ -128,6 +133,11 @@ def __init__( """ self._ssl_channel_credentials = ssl_channel_credentials + if api_mtls_endpoint: + warnings.warn("api_mtls_endpoint is deprecated", DeprecationWarning) + if client_cert_source: + warnings.warn("client_cert_source is deprecated", DeprecationWarning) + if channel: # Sanity check: Ensure that channel and credentials are not both # provided. @@ -137,11 +147,6 @@ def __init__( self._grpc_channel = channel self._ssl_channel_credentials = None elif api_mtls_endpoint: - warnings.warn( - "api_mtls_endpoint and client_cert_source are deprecated", - DeprecationWarning, - ) - host = ( api_mtls_endpoint if ":" in api_mtls_endpoint @@ -185,12 +190,18 @@ def __init__( scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id ) + if client_cert_source_for_mtls and not ssl_channel_credentials: + cert, key = client_cert_source_for_mtls() + self._ssl_channel_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + # create a new channel. The provided one is ignored. 
self._grpc_channel = type(self).create_channel( host, credentials=credentials, credentials_file=credentials_file, - ssl_credentials=ssl_channel_credentials, + ssl_credentials=self._ssl_channel_credentials, scopes=scopes or self.AUTH_SCOPES, quota_project_id=quota_project_id, options=[ diff --git a/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/grpc_asyncio.py b/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/grpc_asyncio.py index a2d22c56f6..ca7f009071 100644 --- a/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/grpc_asyncio.py +++ b/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/grpc_asyncio.py @@ -126,6 +126,7 @@ def __init__( api_mtls_endpoint: str = None, client_cert_source: Callable[[], Tuple[bytes, bytes]] = None, ssl_channel_credentials: grpc.ChannelCredentials = None, + client_cert_source_for_mtls: Callable[[], Tuple[bytes, bytes]] = None, quota_project_id=None, client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, ) -> None: @@ -157,6 +158,10 @@ def __init__( ``api_mtls_endpoint`` is None. ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials for grpc channel. It is ignored if ``channel`` is provided. + client_cert_source_for_mtls (Optional[Callable[[], Tuple[bytes, bytes]]]): + A callback to provide client certificate bytes and private key bytes, + both in PEM format. It is used to configure mutual TLS channel. It is + ignored if ``channel`` or ``ssl_channel_credentials`` is provided. quota_project_id (Optional[str]): An optional project to use for billing and quota. 
client_info (google.api_core.gapic_v1.client_info.ClientInfo): @@ -173,6 +178,11 @@ def __init__( """ self._ssl_channel_credentials = ssl_channel_credentials + if api_mtls_endpoint: + warnings.warn("api_mtls_endpoint is deprecated", DeprecationWarning) + if client_cert_source: + warnings.warn("client_cert_source is deprecated", DeprecationWarning) + if channel: # Sanity check: Ensure that channel and credentials are not both # provided. @@ -182,11 +192,6 @@ def __init__( self._grpc_channel = channel self._ssl_channel_credentials = None elif api_mtls_endpoint: - warnings.warn( - "api_mtls_endpoint and client_cert_source are deprecated", - DeprecationWarning, - ) - host = ( api_mtls_endpoint if ":" in api_mtls_endpoint @@ -230,12 +235,18 @@ def __init__( scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id ) + if client_cert_source_for_mtls and not ssl_channel_credentials: + cert, key = client_cert_source_for_mtls() + self._ssl_channel_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + # create a new channel. The provided one is ignored. self._grpc_channel = type(self).create_channel( host, credentials=credentials, credentials_file=credentials_file, - ssl_credentials=ssl_channel_credentials, + ssl_credentials=self._ssl_channel_credentials, scopes=scopes or self.AUTH_SCOPES, quota_project_id=quota_project_id, options=[ diff --git a/google/cloud/spanner_admin_instance_v1/types/spanner_instance_admin.py b/google/cloud/spanner_admin_instance_v1/types/spanner_instance_admin.py index cf2dc11a33..c5ffa63447 100644 --- a/google/cloud/spanner_admin_instance_v1/types/spanner_instance_admin.py +++ b/google/cloud/spanner_admin_instance_v1/types/spanner_instance_admin.py @@ -50,7 +50,7 @@ class ReplicaInfo(proto.Message): location (str): The location of the serving resources, e.g. "us-central1". 
- type_ (~.spanner_instance_admin.ReplicaInfo.ReplicaType): + type_ (google.cloud.spanner_admin_instance_v1.types.ReplicaInfo.ReplicaType): The type of replica. default_leader_location (bool): If true, this location is designated as the default leader @@ -90,7 +90,7 @@ class InstanceConfig(proto.Message): display_name (str): The name of this instance configuration as it appears in UIs. - replicas (Sequence[~.spanner_instance_admin.ReplicaInfo]): + replicas (Sequence[google.cloud.spanner_admin_instance_v1.types.ReplicaInfo]): The geographic placement of nodes in this instance configuration and their replication properties. @@ -136,13 +136,13 @@ class Instance(proto.Message): See `the documentation `__ for more information about nodes. - state (~.spanner_instance_admin.Instance.State): + state (google.cloud.spanner_admin_instance_v1.types.Instance.State): Output only. The current instance state. For [CreateInstance][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance], the state must be either omitted or set to ``CREATING``. For [UpdateInstance][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance], the state must be either omitted or set to ``READY``. - labels (Sequence[~.spanner_instance_admin.Instance.LabelsEntry]): + labels (Sequence[google.cloud.spanner_admin_instance_v1.types.Instance.LabelsEntry]): Cloud Labels are a flexible and lightweight mechanism for organizing cloud resources into groups that reflect a customer's organizational needs and deployment strategies. @@ -228,7 +228,7 @@ class ListInstanceConfigsResponse(proto.Message): [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs]. Attributes: - instance_configs (Sequence[~.spanner_instance_admin.InstanceConfig]): + instance_configs (Sequence[google.cloud.spanner_admin_instance_v1.types.InstanceConfig]): The list of requested instance configurations. 
next_page_token (str): @@ -270,7 +270,7 @@ class GetInstanceRequest(proto.Message): name (str): Required. The name of the requested instance. Values are of the form ``projects//instances/``. - field_mask (~.gp_field_mask.FieldMask): + field_mask (google.protobuf.field_mask_pb2.FieldMask): If field_mask is present, specifies the subset of [Instance][google.spanner.admin.instance.v1.Instance] fields that should be returned. If absent, all @@ -295,7 +295,7 @@ class CreateInstanceRequest(proto.Message): Required. The ID of the instance to create. Valid identifiers are of the form ``[a-z][-a-z0-9]*[a-z0-9]`` and must be between 2 and 64 characters in length. - instance (~.spanner_instance_admin.Instance): + instance (google.cloud.spanner_admin_instance_v1.types.Instance): Required. The instance to create. The name may be omitted, but if specified must be ``/instances/``. @@ -364,7 +364,7 @@ class ListInstancesResponse(proto.Message): [ListInstances][google.spanner.admin.instance.v1.InstanceAdmin.ListInstances]. Attributes: - instances (Sequence[~.spanner_instance_admin.Instance]): + instances (Sequence[google.cloud.spanner_admin_instance_v1.types.Instance]): The list of requested instances. next_page_token (str): ``next_page_token`` can be sent in a subsequent @@ -386,12 +386,12 @@ class UpdateInstanceRequest(proto.Message): [UpdateInstance][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance]. Attributes: - instance (~.spanner_instance_admin.Instance): + instance (google.cloud.spanner_admin_instance_v1.types.Instance): Required. The instance to update, which must always include the instance name. Otherwise, only fields mentioned in [field_mask][google.spanner.admin.instance.v1.UpdateInstanceRequest.field_mask] need be included. - field_mask (~.gp_field_mask.FieldMask): + field_mask (google.protobuf.field_mask_pb2.FieldMask): Required. A mask specifying which fields in [Instance][google.spanner.admin.instance.v1.Instance] should be updated. 
The field mask must always be specified; this @@ -424,18 +424,18 @@ class CreateInstanceMetadata(proto.Message): [CreateInstance][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance]. Attributes: - instance (~.spanner_instance_admin.Instance): + instance (google.cloud.spanner_admin_instance_v1.types.Instance): The instance being created. - start_time (~.timestamp.Timestamp): + start_time (google.protobuf.timestamp_pb2.Timestamp): The time at which the [CreateInstance][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance] request was received. - cancel_time (~.timestamp.Timestamp): + cancel_time (google.protobuf.timestamp_pb2.Timestamp): The time at which this operation was cancelled. If set, this operation is in the process of undoing itself (which is guaranteed to succeed) and cannot be cancelled again. - end_time (~.timestamp.Timestamp): + end_time (google.protobuf.timestamp_pb2.Timestamp): The time at which this operation failed or was completed successfully. """ @@ -454,18 +454,18 @@ class UpdateInstanceMetadata(proto.Message): [UpdateInstance][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance]. Attributes: - instance (~.spanner_instance_admin.Instance): + instance (google.cloud.spanner_admin_instance_v1.types.Instance): The desired end state of the update. - start_time (~.timestamp.Timestamp): + start_time (google.protobuf.timestamp_pb2.Timestamp): The time at which [UpdateInstance][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance] request was received. - cancel_time (~.timestamp.Timestamp): + cancel_time (google.protobuf.timestamp_pb2.Timestamp): The time at which this operation was cancelled. If set, this operation is in the process of undoing itself (which is guaranteed to succeed) and cannot be cancelled again. - end_time (~.timestamp.Timestamp): + end_time (google.protobuf.timestamp_pb2.Timestamp): The time at which this operation failed or was completed successfully. 
""" diff --git a/google/cloud/spanner_v1/proto/spanner.proto b/google/cloud/spanner_v1/proto/spanner.proto index 93e4987ed1..8f579e333d 100644 --- a/google/cloud/spanner_v1/proto/spanner.proto +++ b/google/cloud/spanner_v1/proto/spanner.proto @@ -20,6 +20,7 @@ import "google/api/annotations.proto"; import "google/api/client.proto"; import "google/api/field_behavior.proto"; import "google/api/resource.proto"; +import "google/protobuf/duration.proto"; import "google/protobuf/empty.proto"; import "google/protobuf/struct.proto"; import "google/protobuf/timestamp.proto"; @@ -219,6 +220,12 @@ service Spanner { // transactions. However, it can also happen for a variety of other // reasons. If `Commit` returns `ABORTED`, the caller should re-attempt // the transaction from the beginning, re-using the same session. + // + // On very rare occasions, `Commit` might return `UNKNOWN`. This can happen, + // for example, if the client job experiences a 1+ hour networking failure. + // At that point, Cloud Spanner has lost track of the transaction outcome and + // we recommend that you perform another read from the database to see the + // state of things as they are now. rpc Commit(CommitRequest) returns (CommitResponse) { option (google.api.http) = { post: "/v1/{session=projects/*/instances/*/databases/*/sessions/*}:commit" @@ -331,9 +338,8 @@ message Session { pattern: "projects/{project}/instances/{instance}/databases/{database}/sessions/{session}" }; - // The name of the session. This is always system-assigned; values provided - // when creating a session are ignored. - string name = 1; + // Output only. The name of the session. This is always system-assigned. + string name = 1 [(google.api.field_behavior) = OUTPUT_ONLY]; // The labels for the session. // @@ -347,11 +353,11 @@ message Session { map labels = 2; // Output only. The timestamp when the session is created. 
- google.protobuf.Timestamp create_time = 3; + google.protobuf.Timestamp create_time = 3 [(google.api.field_behavior) = OUTPUT_ONLY]; // Output only. The approximate timestamp when the session is last used. It is // typically earlier than the actual last use time. - google.protobuf.Timestamp approximate_last_use_time = 4; + google.protobuf.Timestamp approximate_last_use_time = 4 [(google.api.field_behavior) = OUTPUT_ONLY]; } // The request for [GetSession][google.spanner.v1.Spanner.GetSession]. @@ -438,6 +444,9 @@ message ExecuteSqlRequest { // SPANNER_SYS.SUPPORTED_OPTIMIZER_VERSIONS. Executing a SQL statement // with an invalid optimizer version will fail with a syntax error // (`INVALID_ARGUMENT`) status. + // See + // https://cloud.google.com/spanner/docs/query-optimizer/manage-query-optimizer + // for more information on managing the query optimizer. // // The `optimizer_version` statement hint has precedence over this setting. string optimizer_version = 1; @@ -483,8 +492,9 @@ message ExecuteSqlRequest { // Parameter names and values that bind to placeholders in the SQL string. // // A parameter placeholder consists of the `@` character followed by the - // parameter name (for example, `@firstName`). Parameter names can contain - // letters, numbers, and underscores. + // parameter name (for example, `@firstName`). Parameter names must conform + // to the naming requirements of identifiers as specified at + // https://cloud.google.com/spanner/docs/lexical#identifiers. // // Parameters can appear anywhere that a literal value is expected. The same // parameter name can be used more than once, for example: @@ -884,12 +894,34 @@ message CommitRequest { // mutations are applied atomically, in the order they appear in // this list. repeated Mutation mutations = 4; + + // If `true`, then statistics related to the transaction will be included in + // the [CommitResponse][google.spanner.v1.CommitResponse.commit_stats]. Default value is + // `false`. 
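The tightened `ExecuteSqlRequest` docstring above replaces the loose "letters, numbers, and underscores" rule with a pointer to the Spanner identifier requirements. A rough client-side validator sketch; the regex below is an assumption drawn from typical identifier grammars (leading letter or underscore), not code taken from the library, and the server remains the authority:

```python
import re

# Assumed identifier shape: letter or underscore, then letters/digits/underscores.
_IDENTIFIER = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def check_params(sql: str, params: dict) -> None:
    """Reject parameter names that would not be valid identifiers, and
    flag @placeholders in the SQL that have no bound value."""
    for name in params:
        if not _IDENTIFIER.match(name):
            raise ValueError(f"invalid parameter name: @{name}")
    for placeholder in re.findall(r"@([A-Za-z_][A-Za-z0-9_]*)", sql):
        if placeholder not in params:
            raise ValueError(f"unbound parameter: @{placeholder}")

check_params(
    "SELECT * FROM Singers WHERE FirstName = @firstName",
    {"firstName": "Alice"},
)
```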
+ bool return_commit_stats = 5; } // The response for [Commit][google.spanner.v1.Spanner.Commit]. message CommitResponse { + // Additional statistics about a commit. + message CommitStats { + // The total number of mutations for the transaction. Knowing the + // `mutation_count` value can help you maximize the number of mutations + // in a transaction and minimize the number of API round trips. You can + // also monitor this value to prevent transactions from exceeding the system + // [limit](http://cloud.google.com/spanner/quotas#limits_for_creating_reading_updating_and_deleting_data). + // If the number of mutations exceeds the limit, the server returns + // [INVALID_ARGUMENT](http://cloud.google.com/spanner/docs/reference/rest/v1/Code#ENUM_VALUES.INVALID_ARGUMENT). + int64 mutation_count = 1; + } + // The Cloud Spanner timestamp at which the transaction committed. google.protobuf.Timestamp commit_timestamp = 1; + + // The statistics about this Commit. Not returned by default. + // For more information, see + // [CommitRequest.return_commit_stats][google.spanner.v1.CommitRequest.return_commit_stats]. + CommitStats commit_stats = 2; } // The request for [Rollback][google.spanner.v1.Spanner.Rollback]. diff --git a/google/cloud/spanner_v1/services/spanner/async_client.py b/google/cloud/spanner_v1/services/spanner/async_client.py index ab84b7d885..a4a188bc97 100644 --- a/google/cloud/spanner_v1/services/spanner/async_client.py +++ b/google/cloud/spanner_v1/services/spanner/async_client.py @@ -79,6 +79,7 @@ class SpannerAsyncClient: common_location_path = staticmethod(SpannerClient.common_location_path) parse_common_location_path = staticmethod(SpannerClient.parse_common_location_path) + from_service_account_info = SpannerClient.from_service_account_info from_service_account_file = SpannerClient.from_service_account_file from_service_account_json = from_service_account_file @@ -173,12 +174,13 @@ async def create_session( periodically, e.g., ``"SELECT 1"``. 
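The hunk above adds `return_commit_stats` (field 5, default `false`) to `CommitRequest` and a `CommitStats.mutation_count` to `CommitResponse`, intended for monitoring the per-transaction mutation limit. A dict-based sketch of how the two fields relate (plain dicts standing in for the protobuf messages; the limit value is deployment documentation, not hard-coded in the client):

```python
def commit_request(session: str, mutations: list, want_stats: bool = False) -> dict:
    """Shape of the patched CommitRequest: stats are only returned
    when return_commit_stats is explicitly set."""
    return {
        "session": session,
        "mutations": mutations,
        "return_commit_stats": want_stats,
    }

def within_mutation_budget(commit_response: dict, limit: int) -> bool:
    """Use CommitStats.mutation_count to watch the system limit; a
    transaction over the limit fails with INVALID_ARGUMENT server-side."""
    stats = commit_response.get("commit_stats")
    if stats is None:
        return True  # stats were not requested, nothing to check
    return stats["mutation_count"] <= limit
```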
Args: - request (:class:`~.spanner.CreateSessionRequest`): + request (:class:`google.cloud.spanner_v1.types.CreateSessionRequest`): The request object. The request for [CreateSession][google.spanner.v1.Spanner.CreateSession]. database (:class:`str`): Required. The database in which the new session is created. + This corresponds to the ``database`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -190,7 +192,7 @@ async def create_session( sent along with the request as metadata. Returns: - ~.spanner.Session: + google.cloud.spanner_v1.types.Session: A session in the Cloud Spanner API. """ # Create or coerce a protobuf request object. @@ -253,12 +255,13 @@ async def batch_create_sessions( practices on session cache management. Args: - request (:class:`~.spanner.BatchCreateSessionsRequest`): + request (:class:`google.cloud.spanner_v1.types.BatchCreateSessionsRequest`): The request object. The request for [BatchCreateSessions][google.spanner.v1.Spanner.BatchCreateSessions]. database (:class:`str`): Required. The database in which the new sessions are created. + This corresponds to the ``database`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -270,6 +273,7 @@ async def batch_create_sessions( BatchCreateSessions (adjusting [session_count][google.spanner.v1.BatchCreateSessionsRequest.session_count] as necessary). + This corresponds to the ``session_count`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -281,7 +285,7 @@ async def batch_create_sessions( sent along with the request as metadata. Returns: - ~.spanner.BatchCreateSessionsResponse: + google.cloud.spanner_v1.types.BatchCreateSessionsResponse: The response for [BatchCreateSessions][google.spanner.v1.Spanner.BatchCreateSessions]. @@ -346,12 +350,13 @@ async def get_session( is still alive. 
Args: - request (:class:`~.spanner.GetSessionRequest`): + request (:class:`google.cloud.spanner_v1.types.GetSessionRequest`): The request object. The request for [GetSession][google.spanner.v1.Spanner.GetSession]. name (:class:`str`): Required. The name of the session to retrieve. + This corresponds to the ``name`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -363,7 +368,7 @@ async def get_session( sent along with the request as metadata. Returns: - ~.spanner.Session: + google.cloud.spanner_v1.types.Session: A session in the Cloud Spanner API. """ # Create or coerce a protobuf request object. @@ -422,12 +427,13 @@ async def list_sessions( r"""Lists all sessions in a given database. Args: - request (:class:`~.spanner.ListSessionsRequest`): + request (:class:`google.cloud.spanner_v1.types.ListSessionsRequest`): The request object. The request for [ListSessions][google.spanner.v1.Spanner.ListSessions]. database (:class:`str`): Required. The database in which to list sessions. + This corresponds to the ``database`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -439,7 +445,7 @@ async def list_sessions( sent along with the request as metadata. Returns: - ~.pagers.ListSessionsAsyncPager: + google.cloud.spanner_v1.services.spanner.pagers.ListSessionsAsyncPager: The response for [ListSessions][google.spanner.v1.Spanner.ListSessions]. @@ -511,12 +517,13 @@ async def delete_session( of any operations that are running with this session. Args: - request (:class:`~.spanner.DeleteSessionRequest`): + request (:class:`google.cloud.spanner_v1.types.DeleteSessionRequest`): The request object. The request for [DeleteSession][google.spanner.v1.Spanner.DeleteSession]. name (:class:`str`): Required. The name of the session to delete. + This corresponds to the ``name`` field on the ``request`` instance; if ``request`` is provided, this should not be set. 
@@ -594,7 +601,7 @@ async def execute_sql( instead. Args: - request (:class:`~.spanner.ExecuteSqlRequest`): + request (:class:`google.cloud.spanner_v1.types.ExecuteSqlRequest`): The request object. The request for [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] and [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql]. @@ -606,9 +613,9 @@ async def execute_sql( sent along with the request as metadata. Returns: - ~.result_set.ResultSet: + google.cloud.spanner_v1.types.ResultSet: Results from [Read][google.spanner.v1.Spanner.Read] or - [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. """ # Create or coerce a protobuf request object. @@ -657,7 +664,7 @@ def execute_streaming_sql( column value can exceed 10 MiB. Args: - request (:class:`~.spanner.ExecuteSqlRequest`): + request (:class:`google.cloud.spanner_v1.types.ExecuteSqlRequest`): The request object. The request for [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] and [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql]. @@ -669,7 +676,7 @@ def execute_streaming_sql( sent along with the request as metadata. Returns: - AsyncIterable[~.result_set.PartialResultSet]: + AsyncIterable[google.cloud.spanner_v1.types.PartialResultSet]: Partial results from a streaming read or SQL query. Streaming reads and SQL queries better tolerate large result @@ -725,7 +732,7 @@ async def execute_batch_dml( statements are not executed. Args: - request (:class:`~.spanner.ExecuteBatchDmlRequest`): + request (:class:`google.cloud.spanner_v1.types.ExecuteBatchDmlRequest`): The request object. The request for [ExecuteBatchDml][google.spanner.v1.Spanner.ExecuteBatchDml]. @@ -736,45 +743,46 @@ async def execute_batch_dml( sent along with the request as metadata. Returns: - ~.spanner.ExecuteBatchDmlResponse: - The response for - [ExecuteBatchDml][google.spanner.v1.Spanner.ExecuteBatchDml]. 
- Contains a list of - [ResultSet][google.spanner.v1.ResultSet] messages, one - for each DML statement that has successfully executed, - in the same order as the statements in the request. If a - statement fails, the status in the response body - identifies the cause of the failure. - - To check for DML statements that failed, use the - following approach: - - 1. Check the status in the response message. The - [google.rpc.Code][google.rpc.Code] enum value ``OK`` - indicates that all statements were executed - successfully. - 2. If the status was not ``OK``, check the number of - result sets in the response. If the response contains - ``N`` [ResultSet][google.spanner.v1.ResultSet] - messages, then statement ``N+1`` in the request - failed. - - Example 1: - - - Request: 5 DML statements, all executed successfully. - - Response: 5 [ResultSet][google.spanner.v1.ResultSet] - messages, with the status ``OK``. - - Example 2: - - - Request: 5 DML statements. The third statement has a - syntax error. - - Response: 2 [ResultSet][google.spanner.v1.ResultSet] - messages, and a syntax error (``INVALID_ARGUMENT``) - status. The number of - [ResultSet][google.spanner.v1.ResultSet] messages - indicates that the third statement failed, and the - fourth and fifth statements were not executed. + google.cloud.spanner_v1.types.ExecuteBatchDmlResponse: + The response for [ExecuteBatchDml][google.spanner.v1.Spanner.ExecuteBatchDml]. Contains a list + of [ResultSet][google.spanner.v1.ResultSet] messages, + one for each DML statement that has successfully + executed, in the same order as the statements in the + request. If a statement fails, the status in the + response body identifies the cause of the failure. + + To check for DML statements that failed, use the + following approach: + + 1. Check the status in the response message. The + [google.rpc.Code][google.rpc.Code] enum value OK + indicates that all statements were executed + successfully. + 2. 
If the status was not OK, check the number of + result sets in the response. If the response + contains N + [ResultSet][google.spanner.v1.ResultSet] messages, + then statement N+1 in the request failed. + + Example 1: + + - Request: 5 DML statements, all executed + successfully. + - Response: 5 + [ResultSet][google.spanner.v1.ResultSet] messages, + with the status OK. + + Example 2: + + - Request: 5 DML statements. The third statement has + a syntax error. + - Response: 2 + [ResultSet][google.spanner.v1.ResultSet] messages, + and a syntax error (INVALID_ARGUMENT) status. The + number of [ResultSet][google.spanner.v1.ResultSet] + messages indicates that the third statement + failed, and the fourth and fifth statements were + not executed. """ # Create or coerce a protobuf request object. @@ -832,7 +840,7 @@ async def read( instead. Args: - request (:class:`~.spanner.ReadRequest`): + request (:class:`google.cloud.spanner_v1.types.ReadRequest`): The request object. The request for [Read][google.spanner.v1.Spanner.Read] and [StreamingRead][google.spanner.v1.Spanner.StreamingRead]. @@ -844,9 +852,9 @@ async def read( sent along with the request as metadata. Returns: - ~.result_set.ResultSet: + google.cloud.spanner_v1.types.ResultSet: Results from [Read][google.spanner.v1.Spanner.Read] or - [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. """ # Create or coerce a protobuf request object. @@ -895,7 +903,7 @@ def streaming_read( exceed 10 MiB. Args: - request (:class:`~.spanner.ReadRequest`): + request (:class:`google.cloud.spanner_v1.types.ReadRequest`): The request object. The request for [Read][google.spanner.v1.Spanner.Read] and [StreamingRead][google.spanner.v1.Spanner.StreamingRead]. @@ -907,7 +915,7 @@ def streaming_read( sent along with the request as metadata. 
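The reflowed `ExecuteBatchDml` docstring above preserves the two-step failure check: if the response status is `OK` every statement ran; otherwise a response carrying N `ResultSet` messages means statement N+1 failed and later statements never ran. The rule as a sketch (statements are 1-indexed here for readability):

```python
from typing import Optional

def first_failed_statement(status: str, num_result_sets: int) -> Optional[int]:
    """Apply the documented ExecuteBatchDml rule: OK means all statements
    executed; otherwise the ResultSet count pinpoints the failing one."""
    if status == "OK":
        return None
    return num_result_sets + 1

# Example 1 from the docstring: 5 statements, all executed successfully.
assert first_failed_statement("OK", 5) is None
# Example 2: syntax error in the third statement -> only 2 ResultSets returned.
assert first_failed_statement("INVALID_ARGUMENT", 2) == 3
```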
Returns: - AsyncIterable[~.result_set.PartialResultSet]: + AsyncIterable[google.cloud.spanner_v1.types.PartialResultSet]: Partial results from a streaming read or SQL query. Streaming reads and SQL queries better tolerate large result @@ -956,18 +964,20 @@ async def begin_transaction( transaction as a side-effect. Args: - request (:class:`~.spanner.BeginTransactionRequest`): + request (:class:`google.cloud.spanner_v1.types.BeginTransactionRequest`): The request object. The request for [BeginTransaction][google.spanner.v1.Spanner.BeginTransaction]. session (:class:`str`): Required. The session in which the transaction runs. + This corresponds to the ``session`` field on the ``request`` instance; if ``request`` is provided, this should not be set. - options (:class:`~.transaction.TransactionOptions`): + options (:class:`google.cloud.spanner_v1.types.TransactionOptions`): Required. Options for the new transaction. + This corresponds to the ``options`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -979,7 +989,7 @@ async def begin_transaction( sent along with the request as metadata. Returns: - ~.transaction.Transaction: + google.cloud.spanner_v1.types.Transaction: A transaction. """ # Create or coerce a protobuf request object. @@ -1050,31 +1060,41 @@ async def commit( re-attempt the transaction from the beginning, re-using the same session. + On very rare occasions, ``Commit`` might return ``UNKNOWN``. + This can happen, for example, if the client job experiences a 1+ + hour networking failure. At that point, Cloud Spanner has lost + track of the transaction outcome and we recommend that you + perform another read from the database to see the state of + things as they are now. + Args: - request (:class:`~.spanner.CommitRequest`): + request (:class:`google.cloud.spanner_v1.types.CommitRequest`): The request object. The request for [Commit][google.spanner.v1.Spanner.Commit]. session (:class:`str`): Required. 
The session in which the transaction to be committed is running. + This corresponds to the ``session`` field on the ``request`` instance; if ``request`` is provided, this should not be set. transaction_id (:class:`bytes`): Commit a previously-started transaction. + This corresponds to the ``transaction_id`` field on the ``request`` instance; if ``request`` is provided, this should not be set. - mutations (:class:`Sequence[~.mutation.Mutation]`): + mutations (:class:`Sequence[google.cloud.spanner_v1.types.Mutation]`): The mutations to be executed when this transaction commits. All mutations are applied atomically, in the order they appear in this list. + This corresponds to the ``mutations`` field on the ``request`` instance; if ``request`` is provided, this should not be set. - single_use_transaction (:class:`~.transaction.TransactionOptions`): + single_use_transaction (:class:`google.cloud.spanner_v1.types.TransactionOptions`): Execute mutations in a temporary transaction. Note that unlike commit of a previously-started transaction, commit with a temporary transaction is non-idempotent. @@ -1085,6 +1105,7 @@ async def commit( If this is undesirable, use [BeginTransaction][google.spanner.v1.Spanner.BeginTransaction] and [Commit][google.spanner.v1.Spanner.Commit] instead. + This corresponds to the ``single_use_transaction`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1096,7 +1117,7 @@ async def commit( sent along with the request as metadata. Returns: - ~.spanner.CommitResponse: + google.cloud.spanner_v1.types.CommitResponse: The response for [Commit][google.spanner.v1.Spanner.Commit]. @@ -1176,18 +1197,20 @@ async def rollback( ``ABORTED``. Args: - request (:class:`~.spanner.RollbackRequest`): + request (:class:`google.cloud.spanner_v1.types.RollbackRequest`): The request object. The request for [Rollback][google.spanner.v1.Spanner.Rollback]. session (:class:`str`): Required. 
The session in which the transaction to roll back is running. + This corresponds to the ``session`` field on the ``request`` instance; if ``request`` is provided, this should not be set. transaction_id (:class:`bytes`): Required. The transaction to roll back. + This corresponds to the ``transaction_id`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1267,7 +1290,7 @@ async def partition_query( from the beginning. Args: - request (:class:`~.spanner.PartitionQueryRequest`): + request (:class:`google.cloud.spanner_v1.types.PartitionQueryRequest`): The request object. The request for [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] @@ -1278,11 +1301,10 @@ async def partition_query( sent along with the request as metadata. Returns: - ~.spanner.PartitionResponse: - The response for - [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] - or - [PartitionRead][google.spanner.v1.Spanner.PartitionRead] + google.cloud.spanner_v1.types.PartitionResponse: + The response for [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] + or + [PartitionRead][google.spanner.v1.Spanner.PartitionRead] """ # Create or coerce a protobuf request object. @@ -1342,7 +1364,7 @@ async def partition_read( from the beginning. Args: - request (:class:`~.spanner.PartitionReadRequest`): + request (:class:`google.cloud.spanner_v1.types.PartitionReadRequest`): The request object. The request for [PartitionRead][google.spanner.v1.Spanner.PartitionRead] @@ -1353,11 +1375,10 @@ async def partition_read( sent along with the request as metadata. 
Returns: - ~.spanner.PartitionResponse: - The response for - [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] - or - [PartitionRead][google.spanner.v1.Spanner.PartitionRead] + google.cloud.spanner_v1.types.PartitionResponse: + The response for [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] + or + [PartitionRead][google.spanner.v1.Spanner.PartitionRead] """ # Create or coerce a protobuf request object. diff --git a/google/cloud/spanner_v1/services/spanner/client.py b/google/cloud/spanner_v1/services/spanner/client.py index 50e4792b76..691543a984 100644 --- a/google/cloud/spanner_v1/services/spanner/client.py +++ b/google/cloud/spanner_v1/services/spanner/client.py @@ -117,6 +117,22 @@ def _get_default_mtls_endpoint(api_endpoint): DEFAULT_ENDPOINT ) + @classmethod + def from_service_account_info(cls, info: dict, *args, **kwargs): + """Creates an instance of this client using the provided credentials info. + + Args: + info (dict): The service account private key info. + args: Additional arguments to pass to the constructor. + kwargs: Additional arguments to pass to the constructor. + + Returns: + SpannerClient: The constructed client. + """ + credentials = service_account.Credentials.from_service_account_info(info) + kwargs["credentials"] = credentials + return cls(*args, **kwargs) + @classmethod def from_service_account_file(cls, filename: str, *args, **kwargs): """Creates an instance of this client using the provided credentials @@ -129,7 +145,7 @@ def from_service_account_file(cls, filename: str, *args, **kwargs): kwargs: Additional arguments to pass to the constructor. Returns: - {@api.name}: The constructed client. + SpannerClient: The constructed client. 
""" credentials = service_account.Credentials.from_service_account_file(filename) kwargs["credentials"] = credentials @@ -253,10 +269,10 @@ def __init__( credentials identify the application to the service; if none are specified, the client will attempt to ascertain the credentials from the environment. - transport (Union[str, ~.SpannerTransport]): The + transport (Union[str, SpannerTransport]): The transport to use. If set to None, a transport is chosen automatically. - client_options (client_options_lib.ClientOptions): Custom options for the + client_options (google.api_core.client_options.ClientOptions): Custom options for the client. It won't take effect if a ``transport`` instance is provided. (1) The ``api_endpoint`` property can be used to override the default endpoint provided by the client. GOOGLE_API_USE_MTLS_ENDPOINT @@ -292,21 +308,17 @@ def __init__( util.strtobool(os.getenv("GOOGLE_API_USE_CLIENT_CERTIFICATE", "false")) ) - ssl_credentials = None + client_cert_source_func = None is_mtls = False if use_client_cert: if client_options.client_cert_source: - import grpc # type: ignore - - cert, key = client_options.client_cert_source() - ssl_credentials = grpc.ssl_channel_credentials( - certificate_chain=cert, private_key=key - ) is_mtls = True + client_cert_source_func = client_options.client_cert_source else: - creds = SslCredentials() - is_mtls = creds.is_mtls - ssl_credentials = creds.ssl_credentials if is_mtls else None + is_mtls = mtls.has_default_client_cert_source() + client_cert_source_func = ( + mtls.default_client_cert_source() if is_mtls else None + ) # Figure out which api endpoint to use. 
if client_options.api_endpoint is not None: @@ -349,7 +361,7 @@ def __init__( credentials_file=client_options.credentials_file, host=api_endpoint, scopes=client_options.scopes, - ssl_channel_credentials=ssl_credentials, + client_cert_source_for_mtls=client_cert_source_func, quota_project_id=client_options.quota_project_id, client_info=client_info, ) @@ -384,12 +396,13 @@ def create_session( periodically, e.g., ``"SELECT 1"``. Args: - request (:class:`~.spanner.CreateSessionRequest`): + request (google.cloud.spanner_v1.types.CreateSessionRequest): The request object. The request for [CreateSession][google.spanner.v1.Spanner.CreateSession]. - database (:class:`str`): + database (str): Required. The database in which the new session is created. + This corresponds to the ``database`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -401,7 +414,7 @@ def create_session( sent along with the request as metadata. Returns: - ~.spanner.Session: + google.cloud.spanner_v1.types.Session: A session in the Cloud Spanner API. """ # Create or coerce a protobuf request object. @@ -459,16 +472,17 @@ def batch_create_sessions( practices on session cache management. Args: - request (:class:`~.spanner.BatchCreateSessionsRequest`): + request (google.cloud.spanner_v1.types.BatchCreateSessionsRequest): The request object. The request for [BatchCreateSessions][google.spanner.v1.Spanner.BatchCreateSessions]. - database (:class:`str`): + database (str): Required. The database in which the new sessions are created. + This corresponds to the ``database`` field on the ``request`` instance; if ``request`` is provided, this should not be set. - session_count (:class:`int`): + session_count (int): Required. The number of sessions to be created in this batch call. The API may return fewer than the requested number of sessions. 
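The hunk above replaces inline gRPC SSL-credential construction with a deferred `client_cert_source_func`: prefer an explicitly supplied `client_cert_source`, otherwise fall back to the default client certificate when the environment opts in. A toy sketch of that decision logic (the real code reads `GOOGLE_API_USE_CLIENT_CERTIFICATE` and calls `google.auth.transport.mtls` helpers):

```python
def choose_cert_source(use_env="false", explicit_source=None,
                       default_available=False, default_source=None):
    """Return (is_mtls, cert_source_func) per the selection logic in
    the diff: explicit source wins, else the default source is used
    only when the env flag enables client certificates."""
    use_client_cert = use_env.lower() in ("true", "1")
    if not use_client_cert:
        return False, None
    if explicit_source is not None:
        return True, explicit_source
    if default_available:
        return True, default_source
    return False, None


explicit = lambda: (b"cert-pem", b"key-pem")
is_mtls, source = choose_cert_source("true", explicit)
print(is_mtls)  # True
```

Note the key behavioral point of the change: the certificate callback is now passed down to the transport and only materialized into `grpc.ssl_channel_credentials` there, instead of eagerly in the client.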
If a specific number of sessions are @@ -476,6 +490,7 @@ def batch_create_sessions( BatchCreateSessions (adjusting [session_count][google.spanner.v1.BatchCreateSessionsRequest.session_count] as necessary). + This corresponds to the ``session_count`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -487,7 +502,7 @@ def batch_create_sessions( sent along with the request as metadata. Returns: - ~.spanner.BatchCreateSessionsResponse: + google.cloud.spanner_v1.types.BatchCreateSessionsResponse: The response for [BatchCreateSessions][google.spanner.v1.Spanner.BatchCreateSessions]. @@ -547,12 +562,13 @@ def get_session( is still alive. Args: - request (:class:`~.spanner.GetSessionRequest`): + request (google.cloud.spanner_v1.types.GetSessionRequest): The request object. The request for [GetSession][google.spanner.v1.Spanner.GetSession]. - name (:class:`str`): + name (str): Required. The name of the session to retrieve. + This corresponds to the ``name`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -564,7 +580,7 @@ def get_session( sent along with the request as metadata. Returns: - ~.spanner.Session: + google.cloud.spanner_v1.types.Session: A session in the Cloud Spanner API. """ # Create or coerce a protobuf request object. @@ -618,12 +634,13 @@ def list_sessions( r"""Lists all sessions in a given database. Args: - request (:class:`~.spanner.ListSessionsRequest`): + request (google.cloud.spanner_v1.types.ListSessionsRequest): The request object. The request for [ListSessions][google.spanner.v1.Spanner.ListSessions]. - database (:class:`str`): + database (str): Required. The database in which to list sessions. + This corresponds to the ``database`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -635,7 +652,7 @@ def list_sessions( sent along with the request as metadata. 
Returns: - ~.pagers.ListSessionsPager: + google.cloud.spanner_v1.services.spanner.pagers.ListSessionsPager: The response for [ListSessions][google.spanner.v1.Spanner.ListSessions]. @@ -702,12 +719,13 @@ def delete_session( of any operations that are running with this session. Args: - request (:class:`~.spanner.DeleteSessionRequest`): + request (google.cloud.spanner_v1.types.DeleteSessionRequest): The request object. The request for [DeleteSession][google.spanner.v1.Spanner.DeleteSession]. - name (:class:`str`): + name (str): Required. The name of the session to delete. + This corresponds to the ``name`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -780,7 +798,7 @@ def execute_sql( instead. Args: - request (:class:`~.spanner.ExecuteSqlRequest`): + request (google.cloud.spanner_v1.types.ExecuteSqlRequest): The request object. The request for [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] and [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql]. @@ -792,9 +810,9 @@ def execute_sql( sent along with the request as metadata. Returns: - ~.result_set.ResultSet: + google.cloud.spanner_v1.types.ResultSet: Results from [Read][google.spanner.v1.Spanner.Read] or - [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. """ # Create or coerce a protobuf request object. @@ -838,7 +856,7 @@ def execute_streaming_sql( column value can exceed 10 MiB. Args: - request (:class:`~.spanner.ExecuteSqlRequest`): + request (google.cloud.spanner_v1.types.ExecuteSqlRequest): The request object. The request for [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] and [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql]. @@ -850,7 +868,7 @@ def execute_streaming_sql( sent along with the request as metadata. Returns: - Iterable[~.result_set.PartialResultSet]: + Iterable[google.cloud.spanner_v1.types.PartialResultSet]: Partial results from a streaming read or SQL query. 
Streaming reads and SQL queries better tolerate large result @@ -907,7 +925,7 @@ def execute_batch_dml( statements are not executed. Args: - request (:class:`~.spanner.ExecuteBatchDmlRequest`): + request (google.cloud.spanner_v1.types.ExecuteBatchDmlRequest): The request object. The request for [ExecuteBatchDml][google.spanner.v1.Spanner.ExecuteBatchDml]. @@ -918,45 +936,46 @@ def execute_batch_dml( sent along with the request as metadata. Returns: - ~.spanner.ExecuteBatchDmlResponse: - The response for - [ExecuteBatchDml][google.spanner.v1.Spanner.ExecuteBatchDml]. - Contains a list of - [ResultSet][google.spanner.v1.ResultSet] messages, one - for each DML statement that has successfully executed, - in the same order as the statements in the request. If a - statement fails, the status in the response body - identifies the cause of the failure. - - To check for DML statements that failed, use the - following approach: - - 1. Check the status in the response message. The - [google.rpc.Code][google.rpc.Code] enum value ``OK`` - indicates that all statements were executed - successfully. - 2. If the status was not ``OK``, check the number of - result sets in the response. If the response contains - ``N`` [ResultSet][google.spanner.v1.ResultSet] - messages, then statement ``N+1`` in the request - failed. - - Example 1: - - - Request: 5 DML statements, all executed successfully. - - Response: 5 [ResultSet][google.spanner.v1.ResultSet] - messages, with the status ``OK``. - - Example 2: - - - Request: 5 DML statements. The third statement has a - syntax error. - - Response: 2 [ResultSet][google.spanner.v1.ResultSet] - messages, and a syntax error (``INVALID_ARGUMENT``) - status. The number of - [ResultSet][google.spanner.v1.ResultSet] messages - indicates that the third statement failed, and the - fourth and fifth statements were not executed. 
+ google.cloud.spanner_v1.types.ExecuteBatchDmlResponse: + The response for [ExecuteBatchDml][google.spanner.v1.Spanner.ExecuteBatchDml]. Contains a list + of [ResultSet][google.spanner.v1.ResultSet] messages, + one for each DML statement that has successfully + executed, in the same order as the statements in the + request. If a statement fails, the status in the + response body identifies the cause of the failure. + + To check for DML statements that failed, use the + following approach: + + 1. Check the status in the response message. The + [google.rpc.Code][google.rpc.Code] enum value OK + indicates that all statements were executed + successfully. + 2. If the status was not OK, check the number of + result sets in the response. If the response + contains N + [ResultSet][google.spanner.v1.ResultSet] messages, + then statement N+1 in the request failed. + + Example 1: + + - Request: 5 DML statements, all executed + successfully. + - Response: 5 + [ResultSet][google.spanner.v1.ResultSet] messages, + with the status OK. + + Example 2: + + - Request: 5 DML statements. The third statement has + a syntax error. + - Response: 2 + [ResultSet][google.spanner.v1.ResultSet] messages, + and a syntax error (INVALID_ARGUMENT) status. The + number of [ResultSet][google.spanner.v1.ResultSet] + messages indicates that the third statement + failed, and the fourth and fifth statements were + not executed. """ # Create or coerce a protobuf request object. @@ -1009,7 +1028,7 @@ def read( instead. Args: - request (:class:`~.spanner.ReadRequest`): + request (google.cloud.spanner_v1.types.ReadRequest): The request object. The request for [Read][google.spanner.v1.Spanner.Read] and [StreamingRead][google.spanner.v1.Spanner.StreamingRead]. @@ -1021,9 +1040,9 @@ def read( sent along with the request as metadata. 
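The `ExecuteBatchDml` docstring above prescribes a concrete failure-detection recipe: check the response status, and if it is not `OK`, the count of returned `ResultSet` messages tells you which statement failed. A small sketch of that recipe (status codes follow `google.rpc.Code`, where `OK` is 0 and `INVALID_ARGUMENT` is 3):

```python
OK = 0  # google.rpc.Code.OK


def first_failed_statement(status_code, result_sets):
    """Return the 1-based index of the first failed DML statement in a
    batch, or None if all statements succeeded. Per the docstring: a
    non-OK status with N result sets means statements 1..N succeeded
    and statement N+1 failed (later statements were not executed)."""
    if status_code == OK:
        return None
    return len(result_sets) + 1


# Example 2 from the docstring: 5 statements, the third has a syntax
# error, so the response carries 2 result sets and INVALID_ARGUMENT.
INVALID_ARGUMENT = 3
print(first_failed_statement(INVALID_ARGUMENT, ["rs1", "rs2"]))  # 3
```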
Returns: - ~.result_set.ResultSet: + google.cloud.spanner_v1.types.ResultSet: Results from [Read][google.spanner.v1.Spanner.Read] or - [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. """ # Create or coerce a protobuf request object. @@ -1067,7 +1086,7 @@ def streaming_read( exceed 10 MiB. Args: - request (:class:`~.spanner.ReadRequest`): + request (google.cloud.spanner_v1.types.ReadRequest): The request object. The request for [Read][google.spanner.v1.Spanner.Read] and [StreamingRead][google.spanner.v1.Spanner.StreamingRead]. @@ -1079,7 +1098,7 @@ def streaming_read( sent along with the request as metadata. Returns: - Iterable[~.result_set.PartialResultSet]: + Iterable[google.cloud.spanner_v1.types.PartialResultSet]: Partial results from a streaming read or SQL query. Streaming reads and SQL queries better tolerate large result @@ -1129,18 +1148,20 @@ def begin_transaction( transaction as a side-effect. Args: - request (:class:`~.spanner.BeginTransactionRequest`): + request (google.cloud.spanner_v1.types.BeginTransactionRequest): The request object. The request for [BeginTransaction][google.spanner.v1.Spanner.BeginTransaction]. - session (:class:`str`): + session (str): Required. The session in which the transaction runs. + This corresponds to the ``session`` field on the ``request`` instance; if ``request`` is provided, this should not be set. - options (:class:`~.transaction.TransactionOptions`): + options (google.cloud.spanner_v1.types.TransactionOptions): Required. Options for the new transaction. + This corresponds to the ``options`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1152,7 +1173,7 @@ def begin_transaction( sent along with the request as metadata. Returns: - ~.transaction.Transaction: + google.cloud.spanner_v1.types.Transaction: A transaction. """ # Create or coerce a protobuf request object. 
@@ -1218,31 +1239,41 @@ def commit( re-attempt the transaction from the beginning, re-using the same session. + On very rare occasions, ``Commit`` might return ``UNKNOWN``. + This can happen, for example, if the client job experiences a 1+ + hour networking failure. At that point, Cloud Spanner has lost + track of the transaction outcome and we recommend that you + perform another read from the database to see the state of + things as they are now. + Args: - request (:class:`~.spanner.CommitRequest`): + request (google.cloud.spanner_v1.types.CommitRequest): The request object. The request for [Commit][google.spanner.v1.Spanner.Commit]. - session (:class:`str`): + session (str): Required. The session in which the transaction to be committed is running. + This corresponds to the ``session`` field on the ``request`` instance; if ``request`` is provided, this should not be set. - transaction_id (:class:`bytes`): + transaction_id (bytes): Commit a previously-started transaction. + This corresponds to the ``transaction_id`` field on the ``request`` instance; if ``request`` is provided, this should not be set. - mutations (:class:`Sequence[~.mutation.Mutation]`): + mutations (Sequence[google.cloud.spanner_v1.types.Mutation]): The mutations to be executed when this transaction commits. All mutations are applied atomically, in the order they appear in this list. + This corresponds to the ``mutations`` field on the ``request`` instance; if ``request`` is provided, this should not be set. - single_use_transaction (:class:`~.transaction.TransactionOptions`): + single_use_transaction (google.cloud.spanner_v1.types.TransactionOptions): Execute mutations in a temporary transaction. Note that unlike commit of a previously-started transaction, commit with a temporary transaction is non-idempotent. @@ -1253,6 +1284,7 @@ def commit( If this is undesirable, use [BeginTransaction][google.spanner.v1.Spanner.BeginTransaction] and [Commit][google.spanner.v1.Spanner.Commit] instead. 
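The new `Commit` docstring paragraph distinguishes two failure modes: `ABORTED` (safe to re-run the transaction in the same session) and the rare `UNKNOWN` (the outcome is lost, so read the database back to learn what happened). A toy control-flow sketch of that guidance, with `commit_fn` and `read_state_fn` as hypothetical caller-supplied callbacks:

```python
def commit_with_retry(commit_fn, read_state_fn, max_attempts=3):
    """Illustrative retry loop for the Commit semantics above: retry
    the whole transaction on ABORTED; on UNKNOWN, consult the database
    state rather than blindly retrying (a retry could double-apply)."""
    for _ in range(max_attempts):
        status = commit_fn()
        if status == "OK":
            return "committed"
        if status == "ABORTED":
            continue  # re-attempt from the beginning, same session
        if status == "UNKNOWN":
            # Outcome lost (e.g. a long network partition): check
            # whether the writes actually landed before deciding.
            return "committed" if read_state_fn() else "not committed"
        return status
    return "aborted"
```

This is a sketch of the documented contract, not the library's retry machinery; the real client layers its own ABORTED handling inside `run_in_transaction`.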
+ This corresponds to the ``single_use_transaction`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1264,7 +1296,7 @@ def commit( sent along with the request as metadata. Returns: - ~.spanner.CommitResponse: + google.cloud.spanner_v1.types.CommitResponse: The response for [Commit][google.spanner.v1.Spanner.Commit]. @@ -1339,18 +1371,20 @@ def rollback( ``ABORTED``. Args: - request (:class:`~.spanner.RollbackRequest`): + request (google.cloud.spanner_v1.types.RollbackRequest): The request object. The request for [Rollback][google.spanner.v1.Spanner.Rollback]. - session (:class:`str`): + session (str): Required. The session in which the transaction to roll back is running. + This corresponds to the ``session`` field on the ``request`` instance; if ``request`` is provided, this should not be set. - transaction_id (:class:`bytes`): + transaction_id (bytes): Required. The transaction to roll back. + This corresponds to the ``transaction_id`` field on the ``request`` instance; if ``request`` is provided, this should not be set. @@ -1425,7 +1459,7 @@ def partition_query( from the beginning. Args: - request (:class:`~.spanner.PartitionQueryRequest`): + request (google.cloud.spanner_v1.types.PartitionQueryRequest): The request object. The request for [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] @@ -1436,11 +1470,10 @@ def partition_query( sent along with the request as metadata. Returns: - ~.spanner.PartitionResponse: - The response for - [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] - or - [PartitionRead][google.spanner.v1.Spanner.PartitionRead] + google.cloud.spanner_v1.types.PartitionResponse: + The response for [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] + or + [PartitionRead][google.spanner.v1.Spanner.PartitionRead] """ # Create or coerce a protobuf request object. @@ -1495,7 +1528,7 @@ def partition_read( from the beginning. 
Args: - request (:class:`~.spanner.PartitionReadRequest`): + request (google.cloud.spanner_v1.types.PartitionReadRequest): The request object. The request for [PartitionRead][google.spanner.v1.Spanner.PartitionRead] @@ -1506,11 +1539,10 @@ def partition_read( sent along with the request as metadata. Returns: - ~.spanner.PartitionResponse: - The response for - [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] - or - [PartitionRead][google.spanner.v1.Spanner.PartitionRead] + google.cloud.spanner_v1.types.PartitionResponse: + The response for [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] + or + [PartitionRead][google.spanner.v1.Spanner.PartitionRead] """ # Create or coerce a protobuf request object. diff --git a/google/cloud/spanner_v1/services/spanner/pagers.py b/google/cloud/spanner_v1/services/spanner/pagers.py index aff1cf533e..e98fda11c7 100644 --- a/google/cloud/spanner_v1/services/spanner/pagers.py +++ b/google/cloud/spanner_v1/services/spanner/pagers.py @@ -24,7 +24,7 @@ class ListSessionsPager: """A pager for iterating through ``list_sessions`` requests. This class thinly wraps an initial - :class:`~.spanner.ListSessionsResponse` object, and + :class:`google.cloud.spanner_v1.types.ListSessionsResponse` object, and provides an ``__iter__`` method to iterate through its ``sessions`` field. @@ -33,7 +33,7 @@ class ListSessionsPager: through the ``sessions`` field on the corresponding responses. - All the usual :class:`~.spanner.ListSessionsResponse` + All the usual :class:`google.cloud.spanner_v1.types.ListSessionsResponse` attributes are available on the pager. If multiple requests are made, only the most recent response is retained, and thus used for attribute lookup. """ @@ -51,9 +51,9 @@ def __init__( Args: method (Callable): The method that was originally called, and which instantiated this pager. 
- request (:class:`~.spanner.ListSessionsRequest`): + request (google.cloud.spanner_v1.types.ListSessionsRequest): The initial request object. - response (:class:`~.spanner.ListSessionsResponse`): + response (google.cloud.spanner_v1.types.ListSessionsResponse): The initial response object. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. @@ -86,7 +86,7 @@ class ListSessionsAsyncPager: """A pager for iterating through ``list_sessions`` requests. This class thinly wraps an initial - :class:`~.spanner.ListSessionsResponse` object, and + :class:`google.cloud.spanner_v1.types.ListSessionsResponse` object, and provides an ``__aiter__`` method to iterate through its ``sessions`` field. @@ -95,7 +95,7 @@ class ListSessionsAsyncPager: through the ``sessions`` field on the corresponding responses. - All the usual :class:`~.spanner.ListSessionsResponse` + All the usual :class:`google.cloud.spanner_v1.types.ListSessionsResponse` attributes are available on the pager. If multiple requests are made, only the most recent response is retained, and thus used for attribute lookup. """ @@ -113,9 +113,9 @@ def __init__( Args: method (Callable): The method that was originally called, and which instantiated this pager. - request (:class:`~.spanner.ListSessionsRequest`): + request (google.cloud.spanner_v1.types.ListSessionsRequest): The initial request object. - response (:class:`~.spanner.ListSessionsResponse`): + response (google.cloud.spanner_v1.types.ListSessionsResponse): The initial response object. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. 
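The pager docstrings above describe the pattern these classes implement: wrap the initial response, expose `__iter__` over the `sessions` field, and transparently re-invoke the underlying method while a next-page token remains, retaining only the most recent response for attribute lookup. A minimal dict-based sketch of that pattern (toy shapes, not the real protobuf messages):

```python
class ToyPager:
    """Minimal ListSessionsPager-style pager: flattens per-page
    ``sessions`` lists into one iterator, fetching pages lazily."""

    def __init__(self, method, request, response):
        self._method = method        # the originally-called method
        self._request = request      # mutated with each page_token
        self._response = response    # most recent page only

    def __iter__(self):
        while True:
            for session in self._response["sessions"]:
                yield session
            token = self._response.get("next_page_token")
            if not token:
                return
            self._request["page_token"] = token
            self._response = self._method(self._request)


pages = {
    "": {"sessions": ["s1", "s2"], "next_page_token": "p2"},
    "p2": {"sessions": ["s3"]},
}
method = lambda req: pages[req.get("page_token", "")]
pager = ToyPager(method, {}, pages[""])
print(list(pager))  # ['s1', 's2', 's3']
```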
diff --git a/google/cloud/spanner_v1/services/spanner/transports/grpc.py b/google/cloud/spanner_v1/services/spanner/transports/grpc.py index d1688acb92..2ac10fc5b3 100644 --- a/google/cloud/spanner_v1/services/spanner/transports/grpc.py +++ b/google/cloud/spanner_v1/services/spanner/transports/grpc.py @@ -62,6 +62,7 @@ def __init__( api_mtls_endpoint: str = None, client_cert_source: Callable[[], Tuple[bytes, bytes]] = None, ssl_channel_credentials: grpc.ChannelCredentials = None, + client_cert_source_for_mtls: Callable[[], Tuple[bytes, bytes]] = None, quota_project_id: Optional[str] = None, client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, ) -> None: @@ -92,6 +93,10 @@ def __init__( ``api_mtls_endpoint`` is None. ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials for grpc channel. It is ignored if ``channel`` is provided. + client_cert_source_for_mtls (Optional[Callable[[], Tuple[bytes, bytes]]]): + A callback to provide client certificate bytes and private key bytes, + both in PEM format. It is used to configure mutual TLS channel. It is + ignored if ``channel`` or ``ssl_channel_credentials`` is provided. quota_project_id (Optional[str]): An optional project to use for billing and quota. client_info (google.api_core.gapic_v1.client_info.ClientInfo): @@ -108,6 +113,11 @@ def __init__( """ self._ssl_channel_credentials = ssl_channel_credentials + if api_mtls_endpoint: + warnings.warn("api_mtls_endpoint is deprecated", DeprecationWarning) + if client_cert_source: + warnings.warn("client_cert_source is deprecated", DeprecationWarning) + if channel: # Sanity check: Ensure that channel and credentials are not both # provided. 
@@ -117,11 +127,6 @@ def __init__( self._grpc_channel = channel self._ssl_channel_credentials = None elif api_mtls_endpoint: - warnings.warn( - "api_mtls_endpoint and client_cert_source are deprecated", - DeprecationWarning, - ) - host = ( api_mtls_endpoint if ":" in api_mtls_endpoint @@ -165,12 +170,18 @@ def __init__( scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id ) + if client_cert_source_for_mtls and not ssl_channel_credentials: + cert, key = client_cert_source_for_mtls() + self._ssl_channel_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + # create a new channel. The provided one is ignored. self._grpc_channel = type(self).create_channel( host, credentials=credentials, credentials_file=credentials_file, - ssl_credentials=ssl_channel_credentials, + ssl_credentials=self._ssl_channel_credentials, scopes=scopes or self.AUTH_SCOPES, quota_project_id=quota_project_id, options=[ @@ -617,6 +628,13 @@ def commit(self) -> Callable[[spanner.CommitRequest], spanner.CommitResponse]: re-attempt the transaction from the beginning, re-using the same session. + On very rare occasions, ``Commit`` might return ``UNKNOWN``. + This can happen, for example, if the client job experiences a 1+ + hour networking failure. At that point, Cloud Spanner has lost + track of the transaction outcome and we recommend that you + perform another read from the database to see the state of + things as they are now. 
+ Returns: Callable[[~.CommitRequest], ~.CommitResponse]: diff --git a/google/cloud/spanner_v1/services/spanner/transports/grpc_asyncio.py b/google/cloud/spanner_v1/services/spanner/transports/grpc_asyncio.py index 422c51ef6f..265f4bb30a 100644 --- a/google/cloud/spanner_v1/services/spanner/transports/grpc_asyncio.py +++ b/google/cloud/spanner_v1/services/spanner/transports/grpc_asyncio.py @@ -106,6 +106,7 @@ def __init__( api_mtls_endpoint: str = None, client_cert_source: Callable[[], Tuple[bytes, bytes]] = None, ssl_channel_credentials: grpc.ChannelCredentials = None, + client_cert_source_for_mtls: Callable[[], Tuple[bytes, bytes]] = None, quota_project_id=None, client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, ) -> None: @@ -137,6 +138,10 @@ def __init__( ``api_mtls_endpoint`` is None. ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials for grpc channel. It is ignored if ``channel`` is provided. + client_cert_source_for_mtls (Optional[Callable[[], Tuple[bytes, bytes]]]): + A callback to provide client certificate bytes and private key bytes, + both in PEM format. It is used to configure mutual TLS channel. It is + ignored if ``channel`` or ``ssl_channel_credentials`` is provided. quota_project_id (Optional[str]): An optional project to use for billing and quota. client_info (google.api_core.gapic_v1.client_info.ClientInfo): @@ -153,6 +158,11 @@ def __init__( """ self._ssl_channel_credentials = ssl_channel_credentials + if api_mtls_endpoint: + warnings.warn("api_mtls_endpoint is deprecated", DeprecationWarning) + if client_cert_source: + warnings.warn("client_cert_source is deprecated", DeprecationWarning) + if channel: # Sanity check: Ensure that channel and credentials are not both # provided. 
@@ -162,11 +172,6 @@ def __init__( self._grpc_channel = channel self._ssl_channel_credentials = None elif api_mtls_endpoint: - warnings.warn( - "api_mtls_endpoint and client_cert_source are deprecated", - DeprecationWarning, - ) - host = ( api_mtls_endpoint if ":" in api_mtls_endpoint @@ -210,12 +215,18 @@ def __init__( scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id ) + if client_cert_source_for_mtls and not ssl_channel_credentials: + cert, key = client_cert_source_for_mtls() + self._ssl_channel_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + # create a new channel. The provided one is ignored. self._grpc_channel = type(self).create_channel( host, credentials=credentials, credentials_file=credentials_file, - ssl_credentials=ssl_channel_credentials, + ssl_credentials=self._ssl_channel_credentials, scopes=scopes or self.AUTH_SCOPES, quota_project_id=quota_project_id, options=[ @@ -634,6 +645,13 @@ def commit( re-attempt the transaction from the beginning, re-using the same session. + On very rare occasions, ``Commit`` might return ``UNKNOWN``. + This can happen, for example, if the client job experiences a 1+ + hour networking failure. At that point, Cloud Spanner has lost + track of the transaction outcome and we recommend that you + perform another read from the database to see the state of + things as they are now. + Returns: Callable[[~.CommitRequest], Awaitable[~.CommitResponse]]: diff --git a/google/cloud/spanner_v1/types/keys.py b/google/cloud/spanner_v1/types/keys.py index 342d14829c..fc5e20315b 100644 --- a/google/cloud/spanner_v1/types/keys.py +++ b/google/cloud/spanner_v1/types/keys.py @@ -139,19 +139,19 @@ class KeyRange(proto.Message): because ``Key`` is a descending column in the schema. 
Attributes: - start_closed (~.struct.ListValue): + start_closed (google.protobuf.struct_pb2.ListValue): If the start is closed, then the range includes all rows whose first ``len(start_closed)`` key columns exactly match ``start_closed``. - start_open (~.struct.ListValue): + start_open (google.protobuf.struct_pb2.ListValue): If the start is open, then the range excludes rows whose first ``len(start_open)`` key columns exactly match ``start_open``. - end_closed (~.struct.ListValue): + end_closed (google.protobuf.struct_pb2.ListValue): If the end is closed, then the range includes all rows whose first ``len(end_closed)`` key columns exactly match ``end_closed``. - end_open (~.struct.ListValue): + end_open (google.protobuf.struct_pb2.ListValue): If the end is open, then the range excludes rows whose first ``len(end_open)`` key columns exactly match ``end_open``. """ @@ -183,13 +183,13 @@ class KeySet(proto.Message): Spanner behaves as if the key were only specified once. Attributes: - keys (Sequence[~.struct.ListValue]): + keys (Sequence[google.protobuf.struct_pb2.ListValue]): A list of specific keys. Entries in ``keys`` should have exactly as many elements as there are columns in the primary or index key with which this ``KeySet`` is used. Individual key values are encoded as described [here][google.spanner.v1.TypeCode]. - ranges (Sequence[~.gs_keys.KeyRange]): + ranges (Sequence[google.cloud.spanner_v1.types.KeyRange]): A list of key ranges. See [KeyRange][google.spanner.v1.KeyRange] for more information about key range specifications. diff --git a/google/cloud/spanner_v1/types/mutation.py b/google/cloud/spanner_v1/types/mutation.py index 5c22aae7ee..f2204942be 100644 --- a/google/cloud/spanner_v1/types/mutation.py +++ b/google/cloud/spanner_v1/types/mutation.py @@ -31,15 +31,15 @@ class Mutation(proto.Message): [Commit][google.spanner.v1.Spanner.Commit] call. 
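The `KeyRange` attributes above encode four bound combinations: closed bounds include rows whose key matches the bound, open bounds exclude them. A toy predicate for the simplest case, a single-column ascending key (the real semantics are prefix-based over multi-column keys, which this sketch deliberately omits):

```python
def in_key_range(key, start, end, start_closed=True, end_closed=True):
    """Illustrative single-column KeyRange membership check:
    closed bounds are inclusive, open bounds exclusive."""
    if start_closed:
        if key < start:
            return False
    elif key <= start:
        return False
    if end_closed:
        return key <= end
    return key < end


# The half-open range [10, 20): start_closed=10, end_open=20.
print(in_key_range(10, 10, 20, end_closed=False))  # True
print(in_key_range(20, 10, 20, end_closed=False))  # False
```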
Attributes: - insert (~.mutation.Mutation.Write): + insert (google.cloud.spanner_v1.types.Mutation.Write): Insert new rows in a table. If any of the rows already exist, the write or transaction fails with error ``ALREADY_EXISTS``. - update (~.mutation.Mutation.Write): + update (google.cloud.spanner_v1.types.Mutation.Write): Update existing rows in a table. If any of the rows does not already exist, the transaction fails with error ``NOT_FOUND``. - insert_or_update (~.mutation.Mutation.Write): + insert_or_update (google.cloud.spanner_v1.types.Mutation.Write): Like [insert][google.spanner.v1.Mutation.insert], except that if the row already exists, then its column values are overwritten with the ones provided. Any column values not @@ -52,7 +52,7 @@ class Mutation(proto.Message): ``NOT NULL`` columns in the table must be given a value. This holds true even when the row already exists and will therefore actually be updated. - replace (~.mutation.Mutation.Write): + replace (google.cloud.spanner_v1.types.Mutation.Write): Like [insert][google.spanner.v1.Mutation.insert], except that if the row already exists, it is deleted, and the column values provided are inserted instead. Unlike @@ -64,7 +64,7 @@ class Mutation(proto.Message): the ``ON DELETE CASCADE`` annotation, then replacing a parent row also deletes the child rows. Otherwise, you must delete the child rows before you replace the parent row. - delete (~.mutation.Mutation.Delete): + delete (google.cloud.spanner_v1.types.Mutation.Delete): Delete rows from a table. Succeeds whether or not the named rows were present. """ @@ -87,7 +87,7 @@ class Write(proto.Message): The list of columns must contain enough columns to allow Cloud Spanner to derive values for all primary key columns in the row(s) to be modified. - values (Sequence[~.struct.ListValue]): + values (Sequence[google.protobuf.struct_pb2.ListValue]): The values to be written. ``values`` can contain more than one list of values. 
If it does, then multiple rows are written, one for each entry in ``values``. Each list in @@ -115,7 +115,7 @@ class Delete(proto.Message): table (str): Required. The table whose rows will be deleted. - key_set (~.keys.KeySet): + key_set (google.cloud.spanner_v1.types.KeySet): Required. The primary keys of the rows within [table][google.spanner.v1.Mutation.Delete.table] to delete. The primary keys must be specified in the order in which diff --git a/google/cloud/spanner_v1/types/query_plan.py b/google/cloud/spanner_v1/types/query_plan.py index 5a0f8b5fbb..c3c3a536d6 100644 --- a/google/cloud/spanner_v1/types/query_plan.py +++ b/google/cloud/spanner_v1/types/query_plan.py @@ -34,7 +34,7 @@ class PlanNode(proto.Message): index (int): The ``PlanNode``'s index in [node list][google.spanner.v1.QueryPlan.plan_nodes]. - kind (~.query_plan.PlanNode.Kind): + kind (google.cloud.spanner_v1.types.PlanNode.Kind): Used to determine the type of node. May be needed for visualizing different kinds of nodes differently. For example, If the node is a @@ -43,13 +43,13 @@ class PlanNode(proto.Message): directly embed a description of the node in its parent. display_name (str): The display name for the node. - child_links (Sequence[~.query_plan.PlanNode.ChildLink]): + child_links (Sequence[google.cloud.spanner_v1.types.PlanNode.ChildLink]): List of child node ``index``\ es and their relationship to this parent. - short_representation (~.query_plan.PlanNode.ShortRepresentation): + short_representation (google.cloud.spanner_v1.types.PlanNode.ShortRepresentation): Condensed representation for [SCALAR][google.spanner.v1.PlanNode.Kind.SCALAR] nodes. - metadata (~.struct.Struct): + metadata (google.protobuf.struct_pb2.Struct): Attributes relevant to the node contained in a group of key-value pairs. 
For example, a Parameter Reference node could have the following information in its metadata: @@ -60,7 +60,7 @@ class PlanNode(proto.Message): "parameter_reference": "param1", "parameter_type": "array" } - execution_stats (~.struct.Struct): + execution_stats (google.protobuf.struct_pb2.Struct): The execution statistics associated with the node, contained in a group of key-value pairs. Only present if the plan was returned as a @@ -118,7 +118,7 @@ class ShortRepresentation(proto.Message): description (str): A string representation of the expression subtree rooted at this node. - subqueries (Sequence[~.query_plan.PlanNode.ShortRepresentation.SubqueriesEntry]): + subqueries (Sequence[google.cloud.spanner_v1.types.PlanNode.ShortRepresentation.SubqueriesEntry]): A mapping of (subquery variable name) -> (subquery node id) for cases where the ``description`` string of this node references a ``SCALAR`` subquery contained in the expression @@ -152,7 +152,7 @@ class QueryPlan(proto.Message): plan. Attributes: - plan_nodes (Sequence[~.query_plan.PlanNode]): + plan_nodes (Sequence[google.cloud.spanner_v1.types.PlanNode]): The nodes in the query plan. Plan nodes are returned in pre-order starting with the plan root. Each [PlanNode][google.spanner.v1.PlanNode]'s ``id`` corresponds diff --git a/google/cloud/spanner_v1/types/result_set.py b/google/cloud/spanner_v1/types/result_set.py index 71b4dceac2..9112ae63a0 100644 --- a/google/cloud/spanner_v1/types/result_set.py +++ b/google/cloud/spanner_v1/types/result_set.py @@ -35,17 +35,17 @@ class ResultSet(proto.Message): [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. Attributes: - metadata (~.result_set.ResultSetMetadata): + metadata (google.cloud.spanner_v1.types.ResultSetMetadata): Metadata about the result set, such as row type information. 
- rows (Sequence[~.struct.ListValue]): + rows (Sequence[google.protobuf.struct_pb2.ListValue]): Each element in ``rows`` is a row whose format is defined by [metadata.row_type][google.spanner.v1.ResultSetMetadata.row_type]. The ith element in each row matches the ith field in [metadata.row_type][google.spanner.v1.ResultSetMetadata.row_type]. Elements are encoded based on type as described [here][google.spanner.v1.TypeCode]. - stats (~.result_set.ResultSetStats): + stats (google.cloud.spanner_v1.types.ResultSetStats): Query plan and execution statistics for the SQL statement that produced this result set. These can be requested by setting @@ -71,11 +71,11 @@ class PartialResultSet(proto.Message): rows, and large values, but are a little trickier to consume. Attributes: - metadata (~.result_set.ResultSetMetadata): + metadata (google.cloud.spanner_v1.types.ResultSetMetadata): Metadata about the result set, such as row type information. Only present in the first response. - values (Sequence[~.struct.Value]): + values (Sequence[google.protobuf.struct_pb2.Value]): A streamed result set consists of a stream of values, which might be split into many ``PartialResultSet`` messages to accommodate large rows and/or large values. Every N complete @@ -170,7 +170,7 @@ class PartialResultSet(proto.Message): request and including ``resume_token``. Note that executing any other transaction in the same session invalidates the token. - stats (~.result_set.ResultSetStats): + stats (google.cloud.spanner_v1.types.ResultSetStats): Query plan and execution statistics for the statement that produced this streaming result set. These can be requested by setting @@ -196,7 +196,7 @@ class ResultSetMetadata(proto.Message): [PartialResultSet][google.spanner.v1.PartialResultSet]. Attributes: - row_type (~.gs_type.StructType): + row_type (google.cloud.spanner_v1.types.StructType): Indicates the field names and types for the rows in the result set. 
For example, a SQL query like ``"SELECT UserId, UserName FROM Users"`` could return a @@ -208,7 +208,7 @@ class ResultSetMetadata(proto.Message): { "name": "UserId", "type": { "code": "INT64" } }, { "name": "UserName", "type": { "code": "STRING" } }, ] - transaction (~.gs_transaction.Transaction): + transaction (google.cloud.spanner_v1.types.Transaction): If the read or SQL query began a transaction as a side-effect, the information about the new transaction is yielded here. @@ -227,10 +227,10 @@ class ResultSetStats(proto.Message): [PartialResultSet][google.spanner.v1.PartialResultSet]. Attributes: - query_plan (~.gs_query_plan.QueryPlan): + query_plan (google.cloud.spanner_v1.types.QueryPlan): [QueryPlan][google.spanner.v1.QueryPlan] for the query associated with this result. - query_stats (~.struct.Struct): + query_stats (google.protobuf.struct_pb2.Struct): Aggregated statistics from the execution of the query. Only present when the query is profiled. For example, a query could return the statistics as follows: diff --git a/google/cloud/spanner_v1/types/spanner.py b/google/cloud/spanner_v1/types/spanner.py index eeffd2bde5..1dfd8451fe 100644 --- a/google/cloud/spanner_v1/types/spanner.py +++ b/google/cloud/spanner_v1/types/spanner.py @@ -64,7 +64,7 @@ class CreateSessionRequest(proto.Message): database (str): Required. The database in which the new session is created. - session (~.spanner.Session): + session (google.cloud.spanner_v1.types.Session): The session to create. """ @@ -81,7 +81,7 @@ class BatchCreateSessionsRequest(proto.Message): database (str): Required. The database in which the new sessions are created. - session_template (~.spanner.Session): + session_template (google.cloud.spanner_v1.types.Session): Parameters to be applied to each created session. session_count (int): @@ -106,7 +106,7 @@ class BatchCreateSessionsResponse(proto.Message): [BatchCreateSessions][google.spanner.v1.Spanner.BatchCreateSessions]. 
Attributes: - session (Sequence[~.spanner.Session]): + session (Sequence[google.cloud.spanner_v1.types.Session]): The freshly created sessions. """ @@ -118,10 +118,9 @@ class Session(proto.Message): Attributes: name (str): - The name of the session. This is always - system-assigned; values provided when creating a - session are ignored. - labels (Sequence[~.spanner.Session.LabelsEntry]): + Output only. The name of the session. This is + always system-assigned. + labels (Sequence[google.cloud.spanner_v1.types.Session.LabelsEntry]): The labels for the session. - Label keys must be between 1 and 63 characters long and @@ -135,10 +134,10 @@ class Session(proto.Message): See https://goo.gl/xmQnxf for more information on and examples of labels. - create_time (~.timestamp.Timestamp): + create_time (google.protobuf.timestamp_pb2.Timestamp): Output only. The timestamp when the session is created. - approximate_last_use_time (~.timestamp.Timestamp): + approximate_last_use_time (google.protobuf.timestamp_pb2.Timestamp): Output only. The approximate timestamp when the session is last used. It is typically earlier than the actual last use time. @@ -212,7 +211,7 @@ class ListSessionsResponse(proto.Message): [ListSessions][google.spanner.v1.Spanner.ListSessions]. Attributes: - sessions (Sequence[~.spanner.Session]): + sessions (Sequence[google.cloud.spanner_v1.types.Session]): The list of requested sessions. next_page_token (str): ``next_page_token`` can be sent in a subsequent @@ -250,7 +249,7 @@ class ExecuteSqlRequest(proto.Message): session (str): Required. The session in which the SQL query should be performed. - transaction (~.gs_transaction.TransactionSelector): + transaction (google.cloud.spanner_v1.types.TransactionSelector): The transaction to use. For queries, if none is provided, the default is a temporary read-only transaction with strong @@ -265,14 +264,15 @@ class ExecuteSqlRequest(proto.Message): DML transaction ID. sql (str): Required. The SQL string. 
- params (~.struct.Struct): + params (google.protobuf.struct_pb2.Struct): Parameter names and values that bind to placeholders in the SQL string. A parameter placeholder consists of the ``@`` character followed by the parameter name (for example, - ``@firstName``). Parameter names can contain letters, - numbers, and underscores. + ``@firstName``). Parameter names must conform to the naming + requirements of identifiers as specified at + https://cloud.google.com/spanner/docs/lexical#identifiers. Parameters can appear anywhere that a literal value is expected. The same parameter name can be used more than @@ -282,7 +282,7 @@ class ExecuteSqlRequest(proto.Message): It is an error to execute a SQL statement with unbound parameters. - param_types (Sequence[~.spanner.ExecuteSqlRequest.ParamTypesEntry]): + param_types (Sequence[google.cloud.spanner_v1.types.ExecuteSqlRequest.ParamTypesEntry]): It is not always possible for Cloud Spanner to infer the right SQL type from a JSON value. For example, values of type ``BYTES`` and values of type ``STRING`` both appear in @@ -303,7 +303,7 @@ class ExecuteSqlRequest(proto.Message): SQL statement execution to resume where the last one left off. The rest of the request parameters must exactly match the request that yielded this token. - query_mode (~.spanner.ExecuteSqlRequest.QueryMode): + query_mode (google.cloud.spanner_v1.types.ExecuteSqlRequest.QueryMode): Used to control the amount of debugging information returned in [ResultSetStats][google.spanner.v1.ResultSetStats]. If [partition_token][google.spanner.v1.ExecuteSqlRequest.partition_token] @@ -332,7 +332,7 @@ class ExecuteSqlRequest(proto.Message): yield the same response as the first execution. Required for DML statements. Ignored for queries. - query_options (~.spanner.ExecuteSqlRequest.QueryOptions): + query_options (google.cloud.spanner_v1.types.ExecuteSqlRequest.QueryOptions): Query optimizer configuration to use for the given query. 
""" @@ -362,7 +362,9 @@ class QueryOptions(proto.Message): optimizer versions can be queried from SPANNER_SYS.SUPPORTED_OPTIMIZER_VERSIONS. Executing a SQL statement with an invalid optimizer version will fail with a - syntax error (``INVALID_ARGUMENT``) status. + syntax error (``INVALID_ARGUMENT``) status. See + https://cloud.google.com/spanner/docs/query-optimizer/manage-query-optimizer + for more information on managing the query optimizer. The ``optimizer_version`` statement hint has precedence over this setting. @@ -403,14 +405,14 @@ class ExecuteBatchDmlRequest(proto.Message): session (str): Required. The session in which the DML statements should be performed. - transaction (~.gs_transaction.TransactionSelector): + transaction (google.cloud.spanner_v1.types.TransactionSelector): Required. The transaction to use. Must be a read-write transaction. To protect against replays, single-use transactions are not supported. The caller must either supply an existing transaction ID or begin a new transaction. - statements (Sequence[~.spanner.ExecuteBatchDmlRequest.Statement]): + statements (Sequence[google.cloud.spanner_v1.types.ExecuteBatchDmlRequest.Statement]): Required. The list of statements to execute in this batch. Statements are executed serially, such that the effects of statement ``i`` are visible to statement ``i+1``. Each @@ -440,7 +442,7 @@ class Statement(proto.Message): Attributes: sql (str): Required. The DML string. - params (~.struct.Struct): + params (google.protobuf.struct_pb2.Struct): Parameter names and values that bind to placeholders in the DML string. @@ -457,7 +459,7 @@ class Statement(proto.Message): It is an error to execute a SQL statement with unbound parameters. 
- param_types (Sequence[~.spanner.ExecuteBatchDmlRequest.Statement.ParamTypesEntry]): + param_types (Sequence[google.cloud.spanner_v1.types.ExecuteBatchDmlRequest.Statement.ParamTypesEntry]): It is not always possible for Cloud Spanner to infer the right SQL type from a JSON value. For example, values of type ``BYTES`` and values of type ``STRING`` both appear in @@ -526,7 +528,7 @@ class ExecuteBatchDmlResponse(proto.Message): were not executed. Attributes: - result_sets (Sequence[~.result_set.ResultSet]): + result_sets (Sequence[google.cloud.spanner_v1.types.ResultSet]): One [ResultSet][google.spanner.v1.ResultSet] for each statement in the request that ran successfully, in the same order as the statements in the request. Each @@ -539,7 +541,7 @@ class ExecuteBatchDmlResponse(proto.Message): Only the first [ResultSet][google.spanner.v1.ResultSet] in the response contains valid [ResultSetMetadata][google.spanner.v1.ResultSetMetadata]. - status (~.gr_status.Status): + status (google.rpc.status_pb2.Status): If all DML statements are executed successfully, the status is ``OK``. Otherwise, the error status of the first failed statement. @@ -590,7 +592,7 @@ class PartitionQueryRequest(proto.Message): session (str): Required. The session used to create the partitions. - transaction (~.gs_transaction.TransactionSelector): + transaction (google.cloud.spanner_v1.types.TransactionSelector): Read only snapshot transactions are supported, read/write and single use transactions are not. @@ -608,7 +610,7 @@ class PartitionQueryRequest(proto.Message): [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql] with a PartitionedDml transaction for large, partition-friendly DML operations. - params (~.struct.Struct): + params (google.protobuf.struct_pb2.Struct): Parameter names and values that bind to placeholders in the SQL string. @@ -625,7 +627,7 @@ class PartitionQueryRequest(proto.Message): It is an error to execute a SQL statement with unbound parameters. 
- param_types (Sequence[~.spanner.PartitionQueryRequest.ParamTypesEntry]): + param_types (Sequence[google.cloud.spanner_v1.types.PartitionQueryRequest.ParamTypesEntry]): It is not always possible for Cloud Spanner to infer the right SQL type from a JSON value. For example, values of type ``BYTES`` and values of type ``STRING`` both appear in @@ -636,7 +638,7 @@ class PartitionQueryRequest(proto.Message): exact SQL type for some or all of the SQL query parameters. See the definition of [Type][google.spanner.v1.Type] for more information about SQL types. - partition_options (~.spanner.PartitionOptions): + partition_options (google.cloud.spanner_v1.types.PartitionOptions): Additional options that affect how many partitions are created. """ @@ -668,7 +670,7 @@ class PartitionReadRequest(proto.Message): session (str): Required. The session used to create the partitions. - transaction (~.gs_transaction.TransactionSelector): + transaction (google.cloud.spanner_v1.types.TransactionSelector): Read only snapshot transactions are supported, read/write and single use transactions are not. @@ -688,7 +690,7 @@ class PartitionReadRequest(proto.Message): The columns of [table][google.spanner.v1.PartitionReadRequest.table] to be returned for each row matching this request. - key_set (~.keys.KeySet): + key_set (google.cloud.spanner_v1.types.KeySet): Required. ``key_set`` identifies the rows to be yielded. ``key_set`` names the primary keys of the rows in [table][google.spanner.v1.PartitionReadRequest.table] to be @@ -704,7 +706,7 @@ class PartitionReadRequest(proto.Message): It is not an error for the ``key_set`` to name rows that do not exist in the database. Read yields nothing for nonexistent rows. - partition_options (~.spanner.PartitionOptions): + partition_options (google.cloud.spanner_v1.types.PartitionOptions): Additional options that affect how many partitions are created. 
""" @@ -750,9 +752,9 @@ class PartitionResponse(proto.Message): [PartitionRead][google.spanner.v1.Spanner.PartitionRead] Attributes: - partitions (Sequence[~.spanner.Partition]): + partitions (Sequence[google.cloud.spanner_v1.types.Partition]): Partitions created by this request. - transaction (~.gs_transaction.Transaction): + transaction (google.cloud.spanner_v1.types.Transaction): Transaction created by this request. """ @@ -771,7 +773,7 @@ class ReadRequest(proto.Message): session (str): Required. The session in which the read should be performed. - transaction (~.gs_transaction.TransactionSelector): + transaction (google.cloud.spanner_v1.types.TransactionSelector): The transaction to use. If none is provided, the default is a temporary read-only transaction with strong concurrency. @@ -790,7 +792,7 @@ class ReadRequest(proto.Message): Required. The columns of [table][google.spanner.v1.ReadRequest.table] to be returned for each row matching this request. - key_set (~.keys.KeySet): + key_set (google.cloud.spanner_v1.types.KeySet): Required. ``key_set`` identifies the rows to be yielded. ``key_set`` names the primary keys of the rows in [table][google.spanner.v1.ReadRequest.table] to be yielded, @@ -864,7 +866,7 @@ class BeginTransactionRequest(proto.Message): session (str): Required. The session in which the transaction runs. - options (~.gs_transaction.TransactionOptions): + options (google.cloud.spanner_v1.types.TransactionOptions): Required. Options for the new transaction. """ @@ -884,7 +886,7 @@ class CommitRequest(proto.Message): transaction to be committed is running. transaction_id (bytes): Commit a previously-started transaction. - single_use_transaction (~.gs_transaction.TransactionOptions): + single_use_transaction (google.cloud.spanner_v1.types.TransactionOptions): Execute mutations in a temporary transaction. Note that unlike commit of a previously-started transaction, commit with a temporary transaction is non-idempotent. 
That is, if @@ -894,11 +896,16 @@ class CommitRequest(proto.Message): are executed more than once. If this is undesirable, use [BeginTransaction][google.spanner.v1.Spanner.BeginTransaction] and [Commit][google.spanner.v1.Spanner.Commit] instead. - mutations (Sequence[~.mutation.Mutation]): + mutations (Sequence[google.cloud.spanner_v1.types.Mutation]): The mutations to be executed when this transaction commits. All mutations are applied atomically, in the order they appear in this list. + return_commit_stats (bool): + If ``true``, then statistics related to the transaction will + be included in the + [CommitResponse][google.spanner.v1.CommitResponse.commit_stats]. + Default value is ``false``. """ session = proto.Field(proto.STRING, number=1) @@ -914,20 +921,46 @@ class CommitRequest(proto.Message): mutations = proto.RepeatedField(proto.MESSAGE, number=4, message=mutation.Mutation,) + return_commit_stats = proto.Field(proto.BOOL, number=5) + class CommitResponse(proto.Message): r"""The response for [Commit][google.spanner.v1.Spanner.Commit]. Attributes: - commit_timestamp (~.timestamp.Timestamp): + commit_timestamp (google.protobuf.timestamp_pb2.Timestamp): The Cloud Spanner timestamp at which the transaction committed. + commit_stats (google.cloud.spanner_v1.types.CommitResponse.CommitStats): + The statistics about this Commit. Not returned by default. + For more information, see + [CommitRequest.return_commit_stats][google.spanner.v1.CommitRequest.return_commit_stats]. """ + class CommitStats(proto.Message): + r"""Additional statistics about a commit. + + Attributes: + mutation_count (int): + The total number of mutations for the transaction. Knowing + the ``mutation_count`` value can help you maximize the + number of mutations in a transaction and minimize the number + of API round trips. You can also monitor this value to + prevent transactions from exceeding the system + `limit `__. 
+ If the number of mutations exceeds the limit, the server + returns + `INVALID_ARGUMENT `__. + """ + + mutation_count = proto.Field(proto.INT64, number=1) + commit_timestamp = proto.Field( proto.MESSAGE, number=1, message=timestamp.Timestamp, ) + commit_stats = proto.Field(proto.MESSAGE, number=2, message=CommitStats,) + class RollbackRequest(proto.Message): r"""The request for [Rollback][google.spanner.v1.Spanner.Rollback]. diff --git a/google/cloud/spanner_v1/types/transaction.py b/google/cloud/spanner_v1/types/transaction.py index 7b50f228e5..bcbbddd72c 100644 --- a/google/cloud/spanner_v1/types/transaction.py +++ b/google/cloud/spanner_v1/types/transaction.py @@ -40,14 +40,14 @@ class TransactionOptions(proto.Message): Authorization to begin a read-write transaction requires ``spanner.databases.beginOrRollbackReadWriteTransaction`` permission on the ``session`` resource. - partitioned_dml (~.transaction.TransactionOptions.PartitionedDml): + partitioned_dml (google.cloud.spanner_v1.types.TransactionOptions.PartitionedDml): Partitioned DML transaction. Authorization to begin a Partitioned DML transaction requires ``spanner.databases.beginPartitionedDmlTransaction`` permission on the ``session`` resource. - read_only (~.transaction.TransactionOptions.ReadOnly): + read_only (google.cloud.spanner_v1.types.TransactionOptions.ReadOnly): Transaction will not write. Authorization to begin a read-only transaction requires @@ -70,7 +70,7 @@ class ReadOnly(proto.Message): strong (bool): Read at a timestamp where all previously committed transactions are visible. - min_read_timestamp (~.timestamp.Timestamp): + min_read_timestamp (google.protobuf.timestamp_pb2.Timestamp): Executes all reads at a timestamp >= ``min_read_timestamp``. This is useful for requesting fresher data than some @@ -83,7 +83,7 @@ class ReadOnly(proto.Message): A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: ``"2014-10-02T15:01:23.045123456Z"``. 
- max_staleness (~.duration.Duration): + max_staleness (google.protobuf.duration_pb2.Duration): Read data at a timestamp >= ``NOW - max_staleness`` seconds. Guarantees that all writes that have committed more than the specified number of seconds ago are visible. Because Cloud @@ -97,7 +97,7 @@ class ReadOnly(proto.Message): Note that this option can only be used in single-use transactions. - read_timestamp (~.timestamp.Timestamp): + read_timestamp (google.protobuf.timestamp_pb2.Timestamp): Executes all reads at the given timestamp. Unlike other modes, reads at a specific timestamp are repeatable; the same read at the same timestamp always returns the same @@ -110,7 +110,7 @@ class ReadOnly(proto.Message): A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: ``"2014-10-02T15:01:23.045123456Z"``. - exact_staleness (~.duration.Duration): + exact_staleness (google.protobuf.duration_pb2.Duration): Executes all reads at a timestamp that is ``exact_staleness`` old. The timestamp is chosen soon after the read is started. @@ -178,7 +178,7 @@ class Transaction(proto.Message): Single-use read-only transactions do not have IDs, because single-use transactions do not support multiple requests. - read_timestamp (~.timestamp.Timestamp): + read_timestamp (google.protobuf.timestamp_pb2.Timestamp): For snapshot read-only transactions, the read timestamp chosen for the transaction. Not returned by default: see [TransactionOptions.ReadOnly.return_read_timestamp][google.spanner.v1.TransactionOptions.ReadOnly.return_read_timestamp]. @@ -201,7 +201,7 @@ class TransactionSelector(proto.Message): more information about transactions. Attributes: - single_use (~.transaction.TransactionOptions): + single_use (google.cloud.spanner_v1.types.TransactionOptions): Execute the read or SQL query in a temporary transaction. 
This is the most efficient way to execute a transaction that consists of a single @@ -209,7 +209,7 @@ class TransactionSelector(proto.Message): id (bytes): Execute the read or SQL query in a previously-started transaction. - begin (~.transaction.TransactionOptions): + begin (google.cloud.spanner_v1.types.TransactionOptions): Begin a new transaction and execute this read or SQL query in it. The transaction ID of the new transaction is returned in diff --git a/google/cloud/spanner_v1/types/type.py b/google/cloud/spanner_v1/types/type.py index 19a0ffe5be..0fd8d2f6a4 100644 --- a/google/cloud/spanner_v1/types/type.py +++ b/google/cloud/spanner_v1/types/type.py @@ -50,14 +50,14 @@ class Type(proto.Message): stored in a table cell or returned from an SQL query. Attributes: - code (~.gs_type.TypeCode): + code (google.cloud.spanner_v1.types.TypeCode): Required. The [TypeCode][google.spanner.v1.TypeCode] for this type. - array_element_type (~.gs_type.Type): + array_element_type (google.cloud.spanner_v1.types.Type): If [code][google.spanner.v1.Type.code] == [ARRAY][google.spanner.v1.TypeCode.ARRAY], then ``array_element_type`` is the type of the array elements. - struct_type (~.gs_type.StructType): + struct_type (google.cloud.spanner_v1.types.StructType): If [code][google.spanner.v1.Type.code] == [STRUCT][google.spanner.v1.TypeCode.STRUCT], then ``struct_type`` provides type information for the struct's @@ -76,7 +76,7 @@ class StructType(proto.Message): [STRUCT][google.spanner.v1.TypeCode.STRUCT] type. Attributes: - fields (Sequence[~.gs_type.StructType.Field]): + fields (Sequence[google.cloud.spanner_v1.types.StructType.Field]): The list of fields that make up this struct. Order is significant, because values of this struct type are represented as lists, where the order of field values @@ -97,9 +97,9 @@ class Field(proto.Message): the query ``"SELECT 'hello' AS Word"``), or the column name (e.g., ``"ColName"`` in the query ``"SELECT ColName FROM Table"``). 
Some columns might have an - empty name (e.g., `"SELECT UPPER(ColName)"`). Note that a + empty name (e.g., ``"SELECT UPPER(ColName)"``). Note that a query result can contain multiple fields with the same name. - type_ (~.gs_type.Type): + type_ (google.cloud.spanner_v1.types.Type): The type of the field. """ diff --git a/scripts/fixup_spanner_v1_keywords.py b/scripts/fixup_spanner_v1_keywords.py index bb76ae0e8c..19e3c0185b 100644 --- a/scripts/fixup_spanner_v1_keywords.py +++ b/scripts/fixup_spanner_v1_keywords.py @@ -43,7 +43,7 @@ class spannerCallTransformer(cst.CSTTransformer): METHOD_TO_PARAMS: Dict[str, Tuple[str]] = { 'batch_create_sessions': ('database', 'session_count', 'session_template', ), 'begin_transaction': ('session', 'options', ), - 'commit': ('session', 'transaction_id', 'single_use_transaction', 'mutations', ), + 'commit': ('session', 'transaction_id', 'single_use_transaction', 'mutations', 'return_commit_stats', ), 'create_session': ('database', 'session', ), 'delete_session': ('name', ), 'execute_batch_dml': ('session', 'transaction', 'statements', 'seqno', ), diff --git a/synth.metadata b/synth.metadata index 99b49c42da..8e7ae4d697 100644 --- a/synth.metadata +++ b/synth.metadata @@ -4,15 +4,15 @@ "git": { "name": ".", "remote": "https://github.com/googleapis/python-spanner.git", - "sha": "2faf01b135360586ef27c66976646593fd85fd1e" + "sha": "be27507c51998e5a4aec54cab57515c4912f5ed5" } }, { "git": { "name": "googleapis", "remote": "https://github.com/googleapis/googleapis.git", - "sha": "dd372aa22ded7a8ba6f0e03a80e06358a3fa0907", - "internalRef": "347055288" + "sha": "20712b8fe95001b312f62c6c5f33e3e3ec92cfaf", + "internalRef": "354996675" } }, { @@ -112,11 +112,14 @@ "docs/_templates/layout.html", "docs/conf.py", "docs/multiprocessing.rst", + "docs/spanner_admin_database_v1/database_admin.rst", "docs/spanner_admin_database_v1/services.rst", "docs/spanner_admin_database_v1/types.rst", + "docs/spanner_admin_instance_v1/instance_admin.rst", 
"docs/spanner_admin_instance_v1/services.rst", "docs/spanner_admin_instance_v1/types.rst", "docs/spanner_v1/services.rst", + "docs/spanner_v1/spanner.rst", "docs/spanner_v1/types.rst", "google/cloud/spanner_admin_database_v1/__init__.py", "google/cloud/spanner_admin_database_v1/proto/backup.proto", diff --git a/tests/unit/gapic/spanner_admin_database_v1/test_database_admin.py b/tests/unit/gapic/spanner_admin_database_v1/test_database_admin.py index 7779e49659..ebe241df35 100644 --- a/tests/unit/gapic/spanner_admin_database_v1/test_database_admin.py +++ b/tests/unit/gapic/spanner_admin_database_v1/test_database_admin.py @@ -101,8 +101,21 @@ def test__get_default_mtls_endpoint(): ) +def test_database_admin_client_from_service_account_info(): + creds = credentials.AnonymousCredentials() + with mock.patch.object( + service_account.Credentials, "from_service_account_info" + ) as factory: + factory.return_value = creds + info = {"valid": True} + client = DatabaseAdminClient.from_service_account_info(info) + assert client.transport._credentials == creds + + assert client.transport._host == "spanner.googleapis.com:443" + + @pytest.mark.parametrize( - "client_class", [DatabaseAdminClient, DatabaseAdminAsyncClient] + "client_class", [DatabaseAdminClient, DatabaseAdminAsyncClient,] ) def test_database_admin_client_from_service_account_file(client_class): creds = credentials.AnonymousCredentials() @@ -121,7 +134,10 @@ def test_database_admin_client_from_service_account_file(client_class): def test_database_admin_client_get_transport_class(): transport = DatabaseAdminClient.get_transport_class() - assert transport == transports.DatabaseAdminGrpcTransport + available_transports = [ + transports.DatabaseAdminGrpcTransport, + ] + assert transport in available_transports transport = DatabaseAdminClient.get_transport_class("grpc") assert transport == transports.DatabaseAdminGrpcTransport @@ -172,7 +188,7 @@ def test_database_admin_client_client_options( credentials_file=None, 
host="squid.clam.whelk", scopes=None, - ssl_channel_credentials=None, + client_cert_source_for_mtls=None, quota_project_id=None, client_info=transports.base.DEFAULT_CLIENT_INFO, ) @@ -188,7 +204,7 @@ def test_database_admin_client_client_options( credentials_file=None, host=client.DEFAULT_ENDPOINT, scopes=None, - ssl_channel_credentials=None, + client_cert_source_for_mtls=None, quota_project_id=None, client_info=transports.base.DEFAULT_CLIENT_INFO, ) @@ -204,7 +220,7 @@ def test_database_admin_client_client_options( credentials_file=None, host=client.DEFAULT_MTLS_ENDPOINT, scopes=None, - ssl_channel_credentials=None, + client_cert_source_for_mtls=None, quota_project_id=None, client_info=transports.base.DEFAULT_CLIENT_INFO, ) @@ -232,7 +248,7 @@ def test_database_admin_client_client_options( credentials_file=None, host=client.DEFAULT_ENDPOINT, scopes=None, - ssl_channel_credentials=None, + client_cert_source_for_mtls=None, quota_project_id="octopus", client_info=transports.base.DEFAULT_CLIENT_INFO, ) @@ -283,29 +299,25 @@ def test_database_admin_client_mtls_env_auto( client_cert_source=client_cert_source_callback ) with mock.patch.object(transport_class, "__init__") as patched: - ssl_channel_creds = mock.Mock() - with mock.patch( - "grpc.ssl_channel_credentials", return_value=ssl_channel_creds - ): - patched.return_value = None - client = client_class(client_options=options) + patched.return_value = None + client = client_class(client_options=options) - if use_client_cert_env == "false": - expected_ssl_channel_creds = None - expected_host = client.DEFAULT_ENDPOINT - else: - expected_ssl_channel_creds = ssl_channel_creds - expected_host = client.DEFAULT_MTLS_ENDPOINT + if use_client_cert_env == "false": + expected_client_cert_source = None + expected_host = client.DEFAULT_ENDPOINT + else: + expected_client_cert_source = client_cert_source_callback + expected_host = client.DEFAULT_MTLS_ENDPOINT - patched.assert_called_once_with( - credentials=None, - 
credentials_file=None, - host=expected_host, - scopes=None, - ssl_channel_credentials=expected_ssl_channel_creds, - quota_project_id=None, - client_info=transports.base.DEFAULT_CLIENT_INFO, - ) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=expected_host, + scopes=None, + client_cert_source_for_mtls=expected_client_cert_source, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) # Check the case ADC client cert is provided. Whether client cert is used depends on # GOOGLE_API_USE_CLIENT_CERTIFICATE value. @@ -314,66 +326,53 @@ def test_database_admin_client_mtls_env_auto( ): with mock.patch.object(transport_class, "__init__") as patched: with mock.patch( - "google.auth.transport.grpc.SslCredentials.__init__", return_value=None + "google.auth.transport.mtls.has_default_client_cert_source", + return_value=True, ): with mock.patch( - "google.auth.transport.grpc.SslCredentials.is_mtls", - new_callable=mock.PropertyMock, - ) as is_mtls_mock: - with mock.patch( - "google.auth.transport.grpc.SslCredentials.ssl_credentials", - new_callable=mock.PropertyMock, - ) as ssl_credentials_mock: - if use_client_cert_env == "false": - is_mtls_mock.return_value = False - ssl_credentials_mock.return_value = None - expected_host = client.DEFAULT_ENDPOINT - expected_ssl_channel_creds = None - else: - is_mtls_mock.return_value = True - ssl_credentials_mock.return_value = mock.Mock() - expected_host = client.DEFAULT_MTLS_ENDPOINT - expected_ssl_channel_creds = ( - ssl_credentials_mock.return_value - ) - - patched.return_value = None - client = client_class() - patched.assert_called_once_with( - credentials=None, - credentials_file=None, - host=expected_host, - scopes=None, - ssl_channel_credentials=expected_ssl_channel_creds, - quota_project_id=None, - client_info=transports.base.DEFAULT_CLIENT_INFO, - ) + "google.auth.transport.mtls.default_client_cert_source", + return_value=client_cert_source_callback, + ): + if 
use_client_cert_env == "false": + expected_host = client.DEFAULT_ENDPOINT + expected_client_cert_source = None + else: + expected_host = client.DEFAULT_MTLS_ENDPOINT + expected_client_cert_source = client_cert_source_callback - # Check the case client_cert_source and ADC client cert are not provided. - with mock.patch.dict( - os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} - ): - with mock.patch.object(transport_class, "__init__") as patched: - with mock.patch( - "google.auth.transport.grpc.SslCredentials.__init__", return_value=None - ): - with mock.patch( - "google.auth.transport.grpc.SslCredentials.is_mtls", - new_callable=mock.PropertyMock, - ) as is_mtls_mock: - is_mtls_mock.return_value = False patched.return_value = None client = client_class() patched.assert_called_once_with( credentials=None, credentials_file=None, - host=client.DEFAULT_ENDPOINT, + host=expected_host, scopes=None, - ssl_channel_credentials=None, + client_cert_source_for_mtls=expected_client_cert_source, quota_project_id=None, client_info=transports.base.DEFAULT_CLIENT_INFO, ) + # Check the case client_cert_source and ADC client cert are not provided. 
+ with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + with mock.patch.object(transport_class, "__init__") as patched: + with mock.patch( + "google.auth.transport.mtls.has_default_client_cert_source", + return_value=False, + ): + patched.return_value = None + client = client_class() + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_ENDPOINT, + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + @pytest.mark.parametrize( "client_class,transport_class,transport_name", @@ -399,7 +398,7 @@ def test_database_admin_client_client_options_scopes( credentials_file=None, host=client.DEFAULT_ENDPOINT, scopes=["1", "2"], - ssl_channel_credentials=None, + client_cert_source_for_mtls=None, quota_project_id=None, client_info=transports.base.DEFAULT_CLIENT_INFO, ) @@ -429,7 +428,7 @@ def test_database_admin_client_client_options_credentials_file( credentials_file="credentials.json", host=client.DEFAULT_ENDPOINT, scopes=None, - ssl_channel_credentials=None, + client_cert_source_for_mtls=None, quota_project_id=None, client_info=transports.base.DEFAULT_CLIENT_INFO, ) @@ -448,7 +447,7 @@ def test_database_admin_client_client_options_from_dict(): credentials_file=None, host="squid.clam.whelk", scopes=None, - ssl_channel_credentials=None, + client_cert_source_for_mtls=None, quota_project_id=None, client_info=transports.base.DEFAULT_CLIENT_INFO, ) @@ -1022,7 +1021,9 @@ def test_get_database( with mock.patch.object(type(client.transport.get_database), "__call__") as call: # Designate an appropriate return value for the call. 
call.return_value = spanner_database_admin.Database( - name="name_value", state=spanner_database_admin.Database.State.CREATING, + name="name_value", + state=spanner_database_admin.Database.State.CREATING, + version_retention_period="version_retention_period_value", ) response = client.get_database(request) @@ -1041,6 +1042,8 @@ def test_get_database( assert response.state == spanner_database_admin.Database.State.CREATING + assert response.version_retention_period == "version_retention_period_value" + def test_get_database_from_dict(): test_get_database(request_type=dict) @@ -1064,7 +1067,9 @@ async def test_get_database_async( # Designate an appropriate return value for the call. call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( spanner_database_admin.Database( - name="name_value", state=spanner_database_admin.Database.State.CREATING, + name="name_value", + state=spanner_database_admin.Database.State.CREATING, + version_retention_period="version_retention_period_value", ) ) @@ -1083,6 +1088,8 @@ async def test_get_database_async( assert response.state == spanner_database_admin.Database.State.CREATING + assert response.version_retention_period == "version_retention_period_value" + @pytest.mark.asyncio async def test_get_database_async_from_dict(): @@ -4696,6 +4703,54 @@ def test_database_admin_transport_auth_adc(): ) +@pytest.mark.parametrize( + "transport_class", + [ + transports.DatabaseAdminGrpcTransport, + transports.DatabaseAdminGrpcAsyncIOTransport, + ], +) +def test_database_admin_grpc_transport_client_cert_source_for_mtls(transport_class): + cred = credentials.AnonymousCredentials() + + # Check ssl_channel_credentials is used if provided. 
+ with mock.patch.object(transport_class, "create_channel") as mock_create_channel: + mock_ssl_channel_creds = mock.Mock() + transport_class( + host="squid.clam.whelk", + credentials=cred, + ssl_channel_credentials=mock_ssl_channel_creds, + ) + mock_create_channel.assert_called_once_with( + "squid.clam.whelk:443", + credentials=cred, + credentials_file=None, + scopes=( + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/spanner.admin", + ), + ssl_credentials=mock_ssl_channel_creds, + quota_project_id=None, + options=[ + ("grpc.max_send_message_length", -1), + ("grpc.max_receive_message_length", -1), + ], + ) + + # Check if ssl_channel_credentials is not provided, then client_cert_source_for_mtls + # is used. + with mock.patch.object(transport_class, "create_channel", return_value=mock.Mock()): + with mock.patch("grpc.ssl_channel_credentials") as mock_ssl_cred: + transport_class( + credentials=cred, + client_cert_source_for_mtls=client_cert_source_callback, + ) + expected_cert, expected_key = client_cert_source_callback() + mock_ssl_cred.assert_called_once_with( + certificate_chain=expected_cert, private_key=expected_key + ) + + def test_database_admin_host_no_port(): client = DatabaseAdminClient( credentials=credentials.AnonymousCredentials(), @@ -4717,7 +4772,7 @@ def test_database_admin_host_with_port(): def test_database_admin_grpc_transport_channel(): - channel = grpc.insecure_channel("http://localhost/") + channel = grpc.secure_channel("http://localhost/", grpc.local_channel_credentials()) # Check that channel is used if provided. transport = transports.DatabaseAdminGrpcTransport( @@ -4729,7 +4784,7 @@ def test_database_admin_grpc_transport_channel(): def test_database_admin_grpc_asyncio_transport_channel(): - channel = aio.insecure_channel("http://localhost/") + channel = aio.secure_channel("http://localhost/", grpc.local_channel_credentials()) # Check that channel is used if provided. 
transport = transports.DatabaseAdminGrpcAsyncIOTransport( @@ -4740,6 +4795,8 @@ def test_database_admin_grpc_asyncio_transport_channel(): assert transport._ssl_channel_credentials == None +# Remove this test when deprecated arguments (api_mtls_endpoint, client_cert_source) are +# removed from grpc/grpc_asyncio transport constructor. @pytest.mark.parametrize( "transport_class", [ @@ -4752,7 +4809,7 @@ def test_database_admin_transport_channel_mtls_with_client_cert_source(transport "grpc.ssl_channel_credentials", autospec=True ) as grpc_ssl_channel_cred: with mock.patch.object( - transport_class, "create_channel", autospec=True + transport_class, "create_channel" ) as grpc_create_channel: mock_ssl_cred = mock.Mock() grpc_ssl_channel_cred.return_value = mock_ssl_cred @@ -4793,6 +4850,8 @@ def test_database_admin_transport_channel_mtls_with_client_cert_source(transport assert transport._ssl_channel_credentials == mock_ssl_cred +# Remove this test when deprecated arguments (api_mtls_endpoint, client_cert_source) are +# removed from grpc/grpc_asyncio transport constructor. 
@pytest.mark.parametrize( "transport_class", [ @@ -4808,7 +4867,7 @@ def test_database_admin_transport_channel_mtls_with_adc(transport_class): ssl_credentials=mock.PropertyMock(return_value=mock_ssl_cred), ): with mock.patch.object( - transport_class, "create_channel", autospec=True + transport_class, "create_channel" ) as grpc_create_channel: mock_grpc_channel = mock.Mock() grpc_create_channel.return_value = mock_grpc_channel diff --git a/tests/unit/gapic/spanner_admin_instance_v1/test_instance_admin.py b/tests/unit/gapic/spanner_admin_instance_v1/test_instance_admin.py index bb4e98d401..e2caceee98 100644 --- a/tests/unit/gapic/spanner_admin_instance_v1/test_instance_admin.py +++ b/tests/unit/gapic/spanner_admin_instance_v1/test_instance_admin.py @@ -98,8 +98,21 @@ def test__get_default_mtls_endpoint(): ) +def test_instance_admin_client_from_service_account_info(): + creds = credentials.AnonymousCredentials() + with mock.patch.object( + service_account.Credentials, "from_service_account_info" + ) as factory: + factory.return_value = creds + info = {"valid": True} + client = InstanceAdminClient.from_service_account_info(info) + assert client.transport._credentials == creds + + assert client.transport._host == "spanner.googleapis.com:443" + + @pytest.mark.parametrize( - "client_class", [InstanceAdminClient, InstanceAdminAsyncClient] + "client_class", [InstanceAdminClient, InstanceAdminAsyncClient,] ) def test_instance_admin_client_from_service_account_file(client_class): creds = credentials.AnonymousCredentials() @@ -118,7 +131,10 @@ def test_instance_admin_client_from_service_account_file(client_class): def test_instance_admin_client_get_transport_class(): transport = InstanceAdminClient.get_transport_class() - assert transport == transports.InstanceAdminGrpcTransport + available_transports = [ + transports.InstanceAdminGrpcTransport, + ] + assert transport in available_transports transport = InstanceAdminClient.get_transport_class("grpc") assert transport == 
transports.InstanceAdminGrpcTransport @@ -169,7 +185,7 @@ def test_instance_admin_client_client_options( credentials_file=None, host="squid.clam.whelk", scopes=None, - ssl_channel_credentials=None, + client_cert_source_for_mtls=None, quota_project_id=None, client_info=transports.base.DEFAULT_CLIENT_INFO, ) @@ -185,7 +201,7 @@ def test_instance_admin_client_client_options( credentials_file=None, host=client.DEFAULT_ENDPOINT, scopes=None, - ssl_channel_credentials=None, + client_cert_source_for_mtls=None, quota_project_id=None, client_info=transports.base.DEFAULT_CLIENT_INFO, ) @@ -201,7 +217,7 @@ def test_instance_admin_client_client_options( credentials_file=None, host=client.DEFAULT_MTLS_ENDPOINT, scopes=None, - ssl_channel_credentials=None, + client_cert_source_for_mtls=None, quota_project_id=None, client_info=transports.base.DEFAULT_CLIENT_INFO, ) @@ -229,7 +245,7 @@ def test_instance_admin_client_client_options( credentials_file=None, host=client.DEFAULT_ENDPOINT, scopes=None, - ssl_channel_credentials=None, + client_cert_source_for_mtls=None, quota_project_id="octopus", client_info=transports.base.DEFAULT_CLIENT_INFO, ) @@ -280,29 +296,25 @@ def test_instance_admin_client_mtls_env_auto( client_cert_source=client_cert_source_callback ) with mock.patch.object(transport_class, "__init__") as patched: - ssl_channel_creds = mock.Mock() - with mock.patch( - "grpc.ssl_channel_credentials", return_value=ssl_channel_creds - ): - patched.return_value = None - client = client_class(client_options=options) + patched.return_value = None + client = client_class(client_options=options) - if use_client_cert_env == "false": - expected_ssl_channel_creds = None - expected_host = client.DEFAULT_ENDPOINT - else: - expected_ssl_channel_creds = ssl_channel_creds - expected_host = client.DEFAULT_MTLS_ENDPOINT + if use_client_cert_env == "false": + expected_client_cert_source = None + expected_host = client.DEFAULT_ENDPOINT + else: + expected_client_cert_source = 
client_cert_source_callback + expected_host = client.DEFAULT_MTLS_ENDPOINT - patched.assert_called_once_with( - credentials=None, - credentials_file=None, - host=expected_host, - scopes=None, - ssl_channel_credentials=expected_ssl_channel_creds, - quota_project_id=None, - client_info=transports.base.DEFAULT_CLIENT_INFO, - ) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=expected_host, + scopes=None, + client_cert_source_for_mtls=expected_client_cert_source, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) # Check the case ADC client cert is provided. Whether client cert is used depends on # GOOGLE_API_USE_CLIENT_CERTIFICATE value. @@ -311,66 +323,53 @@ def test_instance_admin_client_mtls_env_auto( ): with mock.patch.object(transport_class, "__init__") as patched: with mock.patch( - "google.auth.transport.grpc.SslCredentials.__init__", return_value=None + "google.auth.transport.mtls.has_default_client_cert_source", + return_value=True, ): with mock.patch( - "google.auth.transport.grpc.SslCredentials.is_mtls", - new_callable=mock.PropertyMock, - ) as is_mtls_mock: - with mock.patch( - "google.auth.transport.grpc.SslCredentials.ssl_credentials", - new_callable=mock.PropertyMock, - ) as ssl_credentials_mock: - if use_client_cert_env == "false": - is_mtls_mock.return_value = False - ssl_credentials_mock.return_value = None - expected_host = client.DEFAULT_ENDPOINT - expected_ssl_channel_creds = None - else: - is_mtls_mock.return_value = True - ssl_credentials_mock.return_value = mock.Mock() - expected_host = client.DEFAULT_MTLS_ENDPOINT - expected_ssl_channel_creds = ( - ssl_credentials_mock.return_value - ) - - patched.return_value = None - client = client_class() - patched.assert_called_once_with( - credentials=None, - credentials_file=None, - host=expected_host, - scopes=None, - ssl_channel_credentials=expected_ssl_channel_creds, - quota_project_id=None, - 
client_info=transports.base.DEFAULT_CLIENT_INFO, - ) + "google.auth.transport.mtls.default_client_cert_source", + return_value=client_cert_source_callback, + ): + if use_client_cert_env == "false": + expected_host = client.DEFAULT_ENDPOINT + expected_client_cert_source = None + else: + expected_host = client.DEFAULT_MTLS_ENDPOINT + expected_client_cert_source = client_cert_source_callback - # Check the case client_cert_source and ADC client cert are not provided. - with mock.patch.dict( - os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} - ): - with mock.patch.object(transport_class, "__init__") as patched: - with mock.patch( - "google.auth.transport.grpc.SslCredentials.__init__", return_value=None - ): - with mock.patch( - "google.auth.transport.grpc.SslCredentials.is_mtls", - new_callable=mock.PropertyMock, - ) as is_mtls_mock: - is_mtls_mock.return_value = False patched.return_value = None client = client_class() patched.assert_called_once_with( credentials=None, credentials_file=None, - host=client.DEFAULT_ENDPOINT, + host=expected_host, scopes=None, - ssl_channel_credentials=None, + client_cert_source_for_mtls=expected_client_cert_source, quota_project_id=None, client_info=transports.base.DEFAULT_CLIENT_INFO, ) + # Check the case client_cert_source and ADC client cert are not provided. 
+ with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + with mock.patch.object(transport_class, "__init__") as patched: + with mock.patch( + "google.auth.transport.mtls.has_default_client_cert_source", + return_value=False, + ): + patched.return_value = None + client = client_class() + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_ENDPOINT, + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + @pytest.mark.parametrize( "client_class,transport_class,transport_name", @@ -396,7 +395,7 @@ def test_instance_admin_client_client_options_scopes( credentials_file=None, host=client.DEFAULT_ENDPOINT, scopes=["1", "2"], - ssl_channel_credentials=None, + client_cert_source_for_mtls=None, quota_project_id=None, client_info=transports.base.DEFAULT_CLIENT_INFO, ) @@ -426,7 +425,7 @@ def test_instance_admin_client_client_options_credentials_file( credentials_file="credentials.json", host=client.DEFAULT_ENDPOINT, scopes=None, - ssl_channel_credentials=None, + client_cert_source_for_mtls=None, quota_project_id=None, client_info=transports.base.DEFAULT_CLIENT_INFO, ) @@ -445,7 +444,7 @@ def test_instance_admin_client_client_options_from_dict(): credentials_file=None, host="squid.clam.whelk", scopes=None, - ssl_channel_credentials=None, + client_cert_source_for_mtls=None, quota_project_id=None, client_info=transports.base.DEFAULT_CLIENT_INFO, ) @@ -3053,6 +3052,54 @@ def test_instance_admin_transport_auth_adc(): ) +@pytest.mark.parametrize( + "transport_class", + [ + transports.InstanceAdminGrpcTransport, + transports.InstanceAdminGrpcAsyncIOTransport, + ], +) +def test_instance_admin_grpc_transport_client_cert_source_for_mtls(transport_class): + cred = credentials.AnonymousCredentials() + + # Check ssl_channel_credentials is used if provided. 
+ with mock.patch.object(transport_class, "create_channel") as mock_create_channel: + mock_ssl_channel_creds = mock.Mock() + transport_class( + host="squid.clam.whelk", + credentials=cred, + ssl_channel_credentials=mock_ssl_channel_creds, + ) + mock_create_channel.assert_called_once_with( + "squid.clam.whelk:443", + credentials=cred, + credentials_file=None, + scopes=( + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/spanner.admin", + ), + ssl_credentials=mock_ssl_channel_creds, + quota_project_id=None, + options=[ + ("grpc.max_send_message_length", -1), + ("grpc.max_receive_message_length", -1), + ], + ) + + # Check if ssl_channel_credentials is not provided, then client_cert_source_for_mtls + # is used. + with mock.patch.object(transport_class, "create_channel", return_value=mock.Mock()): + with mock.patch("grpc.ssl_channel_credentials") as mock_ssl_cred: + transport_class( + credentials=cred, + client_cert_source_for_mtls=client_cert_source_callback, + ) + expected_cert, expected_key = client_cert_source_callback() + mock_ssl_cred.assert_called_once_with( + certificate_chain=expected_cert, private_key=expected_key + ) + + def test_instance_admin_host_no_port(): client = InstanceAdminClient( credentials=credentials.AnonymousCredentials(), @@ -3074,7 +3121,7 @@ def test_instance_admin_host_with_port(): def test_instance_admin_grpc_transport_channel(): - channel = grpc.insecure_channel("http://localhost/") + channel = grpc.secure_channel("http://localhost/", grpc.local_channel_credentials()) # Check that channel is used if provided. transport = transports.InstanceAdminGrpcTransport( @@ -3086,7 +3133,7 @@ def test_instance_admin_grpc_transport_channel(): def test_instance_admin_grpc_asyncio_transport_channel(): - channel = aio.insecure_channel("http://localhost/") + channel = aio.secure_channel("http://localhost/", grpc.local_channel_credentials()) # Check that channel is used if provided. 
transport = transports.InstanceAdminGrpcAsyncIOTransport( @@ -3097,6 +3144,8 @@ def test_instance_admin_grpc_asyncio_transport_channel(): assert transport._ssl_channel_credentials == None +# Remove this test when deprecated arguments (api_mtls_endpoint, client_cert_source) are +# removed from grpc/grpc_asyncio transport constructor. @pytest.mark.parametrize( "transport_class", [ @@ -3109,7 +3158,7 @@ def test_instance_admin_transport_channel_mtls_with_client_cert_source(transport "grpc.ssl_channel_credentials", autospec=True ) as grpc_ssl_channel_cred: with mock.patch.object( - transport_class, "create_channel", autospec=True + transport_class, "create_channel" ) as grpc_create_channel: mock_ssl_cred = mock.Mock() grpc_ssl_channel_cred.return_value = mock_ssl_cred @@ -3150,6 +3199,8 @@ def test_instance_admin_transport_channel_mtls_with_client_cert_source(transport assert transport._ssl_channel_credentials == mock_ssl_cred +# Remove this test when deprecated arguments (api_mtls_endpoint, client_cert_source) are +# removed from grpc/grpc_asyncio transport constructor. 
@pytest.mark.parametrize( "transport_class", [ @@ -3165,7 +3216,7 @@ def test_instance_admin_transport_channel_mtls_with_adc(transport_class): ssl_credentials=mock.PropertyMock(return_value=mock_ssl_cred), ): with mock.patch.object( - transport_class, "create_channel", autospec=True + transport_class, "create_channel" ) as grpc_create_channel: mock_grpc_channel = mock.Mock() grpc_create_channel.return_value = mock_grpc_channel diff --git a/tests/unit/gapic/spanner_v1/test_spanner.py b/tests/unit/gapic/spanner_v1/test_spanner.py index 2bb2324fac..56d3818009 100644 --- a/tests/unit/gapic/spanner_v1/test_spanner.py +++ b/tests/unit/gapic/spanner_v1/test_spanner.py @@ -87,7 +87,20 @@ def test__get_default_mtls_endpoint(): assert SpannerClient._get_default_mtls_endpoint(non_googleapi) == non_googleapi -@pytest.mark.parametrize("client_class", [SpannerClient, SpannerAsyncClient]) +def test_spanner_client_from_service_account_info(): + creds = credentials.AnonymousCredentials() + with mock.patch.object( + service_account.Credentials, "from_service_account_info" + ) as factory: + factory.return_value = creds + info = {"valid": True} + client = SpannerClient.from_service_account_info(info) + assert client.transport._credentials == creds + + assert client.transport._host == "spanner.googleapis.com:443" + + +@pytest.mark.parametrize("client_class", [SpannerClient, SpannerAsyncClient,]) def test_spanner_client_from_service_account_file(client_class): creds = credentials.AnonymousCredentials() with mock.patch.object( @@ -105,7 +118,10 @@ def test_spanner_client_from_service_account_file(client_class): def test_spanner_client_get_transport_class(): transport = SpannerClient.get_transport_class() - assert transport == transports.SpannerGrpcTransport + available_transports = [ + transports.SpannerGrpcTransport, + ] + assert transport in available_transports transport = SpannerClient.get_transport_class("grpc") assert transport == transports.SpannerGrpcTransport @@ -146,7 +162,7 @@ 
def test_spanner_client_client_options(client_class, transport_class, transport_ credentials_file=None, host="squid.clam.whelk", scopes=None, - ssl_channel_credentials=None, + client_cert_source_for_mtls=None, quota_project_id=None, client_info=transports.base.DEFAULT_CLIENT_INFO, ) @@ -162,7 +178,7 @@ def test_spanner_client_client_options(client_class, transport_class, transport_ credentials_file=None, host=client.DEFAULT_ENDPOINT, scopes=None, - ssl_channel_credentials=None, + client_cert_source_for_mtls=None, quota_project_id=None, client_info=transports.base.DEFAULT_CLIENT_INFO, ) @@ -178,7 +194,7 @@ def test_spanner_client_client_options(client_class, transport_class, transport_ credentials_file=None, host=client.DEFAULT_MTLS_ENDPOINT, scopes=None, - ssl_channel_credentials=None, + client_cert_source_for_mtls=None, quota_project_id=None, client_info=transports.base.DEFAULT_CLIENT_INFO, ) @@ -206,7 +222,7 @@ def test_spanner_client_client_options(client_class, transport_class, transport_ credentials_file=None, host=client.DEFAULT_ENDPOINT, scopes=None, - ssl_channel_credentials=None, + client_cert_source_for_mtls=None, quota_project_id="octopus", client_info=transports.base.DEFAULT_CLIENT_INFO, ) @@ -253,29 +269,25 @@ def test_spanner_client_mtls_env_auto( client_cert_source=client_cert_source_callback ) with mock.patch.object(transport_class, "__init__") as patched: - ssl_channel_creds = mock.Mock() - with mock.patch( - "grpc.ssl_channel_credentials", return_value=ssl_channel_creds - ): - patched.return_value = None - client = client_class(client_options=options) + patched.return_value = None + client = client_class(client_options=options) - if use_client_cert_env == "false": - expected_ssl_channel_creds = None - expected_host = client.DEFAULT_ENDPOINT - else: - expected_ssl_channel_creds = ssl_channel_creds - expected_host = client.DEFAULT_MTLS_ENDPOINT + if use_client_cert_env == "false": + expected_client_cert_source = None + expected_host = 
client.DEFAULT_ENDPOINT + else: + expected_client_cert_source = client_cert_source_callback + expected_host = client.DEFAULT_MTLS_ENDPOINT - patched.assert_called_once_with( - credentials=None, - credentials_file=None, - host=expected_host, - scopes=None, - ssl_channel_credentials=expected_ssl_channel_creds, - quota_project_id=None, - client_info=transports.base.DEFAULT_CLIENT_INFO, - ) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=expected_host, + scopes=None, + client_cert_source_for_mtls=expected_client_cert_source, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) # Check the case ADC client cert is provided. Whether client cert is used depends on # GOOGLE_API_USE_CLIENT_CERTIFICATE value. @@ -284,66 +296,53 @@ def test_spanner_client_mtls_env_auto( ): with mock.patch.object(transport_class, "__init__") as patched: with mock.patch( - "google.auth.transport.grpc.SslCredentials.__init__", return_value=None + "google.auth.transport.mtls.has_default_client_cert_source", + return_value=True, ): with mock.patch( - "google.auth.transport.grpc.SslCredentials.is_mtls", - new_callable=mock.PropertyMock, - ) as is_mtls_mock: - with mock.patch( - "google.auth.transport.grpc.SslCredentials.ssl_credentials", - new_callable=mock.PropertyMock, - ) as ssl_credentials_mock: - if use_client_cert_env == "false": - is_mtls_mock.return_value = False - ssl_credentials_mock.return_value = None - expected_host = client.DEFAULT_ENDPOINT - expected_ssl_channel_creds = None - else: - is_mtls_mock.return_value = True - ssl_credentials_mock.return_value = mock.Mock() - expected_host = client.DEFAULT_MTLS_ENDPOINT - expected_ssl_channel_creds = ( - ssl_credentials_mock.return_value - ) - - patched.return_value = None - client = client_class() - patched.assert_called_once_with( - credentials=None, - credentials_file=None, - host=expected_host, - scopes=None, - ssl_channel_credentials=expected_ssl_channel_creds, - 
quota_project_id=None, - client_info=transports.base.DEFAULT_CLIENT_INFO, - ) + "google.auth.transport.mtls.default_client_cert_source", + return_value=client_cert_source_callback, + ): + if use_client_cert_env == "false": + expected_host = client.DEFAULT_ENDPOINT + expected_client_cert_source = None + else: + expected_host = client.DEFAULT_MTLS_ENDPOINT + expected_client_cert_source = client_cert_source_callback - # Check the case client_cert_source and ADC client cert are not provided. - with mock.patch.dict( - os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} - ): - with mock.patch.object(transport_class, "__init__") as patched: - with mock.patch( - "google.auth.transport.grpc.SslCredentials.__init__", return_value=None - ): - with mock.patch( - "google.auth.transport.grpc.SslCredentials.is_mtls", - new_callable=mock.PropertyMock, - ) as is_mtls_mock: - is_mtls_mock.return_value = False patched.return_value = None client = client_class() patched.assert_called_once_with( credentials=None, credentials_file=None, - host=client.DEFAULT_ENDPOINT, + host=expected_host, scopes=None, - ssl_channel_credentials=None, + client_cert_source_for_mtls=expected_client_cert_source, quota_project_id=None, client_info=transports.base.DEFAULT_CLIENT_INFO, ) + # Check the case client_cert_source and ADC client cert are not provided. 
+ with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + with mock.patch.object(transport_class, "__init__") as patched: + with mock.patch( + "google.auth.transport.mtls.has_default_client_cert_source", + return_value=False, + ): + patched.return_value = None + client = client_class() + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_ENDPOINT, + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + @pytest.mark.parametrize( "client_class,transport_class,transport_name", @@ -365,7 +364,7 @@ def test_spanner_client_client_options_scopes( credentials_file=None, host=client.DEFAULT_ENDPOINT, scopes=["1", "2"], - ssl_channel_credentials=None, + client_cert_source_for_mtls=None, quota_project_id=None, client_info=transports.base.DEFAULT_CLIENT_INFO, ) @@ -391,7 +390,7 @@ def test_spanner_client_client_options_credentials_file( credentials_file="credentials.json", host=client.DEFAULT_ENDPOINT, scopes=None, - ssl_channel_credentials=None, + client_cert_source_for_mtls=None, quota_project_id=None, client_info=transports.base.DEFAULT_CLIENT_INFO, ) @@ -408,7 +407,7 @@ def test_spanner_client_client_options_from_dict(): credentials_file=None, host="squid.clam.whelk", scopes=None, - ssl_channel_credentials=None, + client_cert_source_for_mtls=None, quota_project_id=None, client_info=transports.base.DEFAULT_CLIENT_INFO, ) @@ -3038,7 +3037,7 @@ def test_transport_get_channel(): @pytest.mark.parametrize( "transport_class", - [transports.SpannerGrpcTransport, transports.SpannerGrpcAsyncIOTransport], + [transports.SpannerGrpcTransport, transports.SpannerGrpcAsyncIOTransport,], ) def test_transport_adc(transport_class): # Test default credentials are used if not provided. 
@@ -3161,6 +3160,51 @@ def test_spanner_transport_auth_adc(): ) +@pytest.mark.parametrize( + "transport_class", + [transports.SpannerGrpcTransport, transports.SpannerGrpcAsyncIOTransport], +) +def test_spanner_grpc_transport_client_cert_source_for_mtls(transport_class): + cred = credentials.AnonymousCredentials() + + # Check ssl_channel_credentials is used if provided. + with mock.patch.object(transport_class, "create_channel") as mock_create_channel: + mock_ssl_channel_creds = mock.Mock() + transport_class( + host="squid.clam.whelk", + credentials=cred, + ssl_channel_credentials=mock_ssl_channel_creds, + ) + mock_create_channel.assert_called_once_with( + "squid.clam.whelk:443", + credentials=cred, + credentials_file=None, + scopes=( + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/spanner.data", + ), + ssl_credentials=mock_ssl_channel_creds, + quota_project_id=None, + options=[ + ("grpc.max_send_message_length", -1), + ("grpc.max_receive_message_length", -1), + ], + ) + + # Check if ssl_channel_credentials is not provided, then client_cert_source_for_mtls + # is used. + with mock.patch.object(transport_class, "create_channel", return_value=mock.Mock()): + with mock.patch("grpc.ssl_channel_credentials") as mock_ssl_cred: + transport_class( + credentials=cred, + client_cert_source_for_mtls=client_cert_source_callback, + ) + expected_cert, expected_key = client_cert_source_callback() + mock_ssl_cred.assert_called_once_with( + certificate_chain=expected_cert, private_key=expected_key + ) + + def test_spanner_host_no_port(): client = SpannerClient( credentials=credentials.AnonymousCredentials(), @@ -3182,7 +3226,7 @@ def test_spanner_host_with_port(): def test_spanner_grpc_transport_channel(): - channel = grpc.insecure_channel("http://localhost/") + channel = grpc.secure_channel("http://localhost/", grpc.local_channel_credentials()) # Check that channel is used if provided. 
transport = transports.SpannerGrpcTransport( @@ -3194,7 +3238,7 @@ def test_spanner_grpc_transport_channel(): def test_spanner_grpc_asyncio_transport_channel(): - channel = aio.insecure_channel("http://localhost/") + channel = aio.secure_channel("http://localhost/", grpc.local_channel_credentials()) # Check that channel is used if provided. transport = transports.SpannerGrpcAsyncIOTransport( @@ -3205,6 +3249,8 @@ def test_spanner_grpc_asyncio_transport_channel(): assert transport._ssl_channel_credentials == None +# Remove this test when deprecated arguments (api_mtls_endpoint, client_cert_source) are +# removed from grpc/grpc_asyncio transport constructor. @pytest.mark.parametrize( "transport_class", [transports.SpannerGrpcTransport, transports.SpannerGrpcAsyncIOTransport], @@ -3214,7 +3260,7 @@ def test_spanner_transport_channel_mtls_with_client_cert_source(transport_class) "grpc.ssl_channel_credentials", autospec=True ) as grpc_ssl_channel_cred: with mock.patch.object( - transport_class, "create_channel", autospec=True + transport_class, "create_channel" ) as grpc_create_channel: mock_ssl_cred = mock.Mock() grpc_ssl_channel_cred.return_value = mock_ssl_cred @@ -3255,6 +3301,8 @@ def test_spanner_transport_channel_mtls_with_client_cert_source(transport_class) assert transport._ssl_channel_credentials == mock_ssl_cred +# Remove this test when deprecated arguments (api_mtls_endpoint, client_cert_source) are +# removed from grpc/grpc_asyncio transport constructor. 
@pytest.mark.parametrize( "transport_class", [transports.SpannerGrpcTransport, transports.SpannerGrpcAsyncIOTransport], @@ -3267,7 +3315,7 @@ def test_spanner_transport_channel_mtls_with_adc(transport_class): ssl_credentials=mock.PropertyMock(return_value=mock_ssl_cred), ): with mock.patch.object( - transport_class, "create_channel", autospec=True + transport_class, "create_channel" ) as grpc_create_channel: mock_grpc_channel = mock.Mock() grpc_create_channel.return_value = mock_grpc_channel From 4afea77812e021859377216cd950e1d9fc965ba8 Mon Sep 17 00:00:00 2001 From: HemangChothani <50404902+HemangChothani@users.noreply.github.com> Date: Mon, 15 Feb 2021 05:54:13 -0500 Subject: [PATCH 08/16] fix: connection attribute of connection class and include related unit tests (#228) --- google/cloud/spanner_dbapi/cursor.py | 4 +- tests/unit/spanner_dbapi/test_cursor.py | 251 ++++++++++++++++++++++++ 2 files changed, 253 insertions(+), 2 deletions(-) diff --git a/google/cloud/spanner_dbapi/cursor.py b/google/cloud/spanner_dbapi/cursor.py index 4b5a0d9652..707bf617af 100644 --- a/google/cloud/spanner_dbapi/cursor.py +++ b/google/cloud/spanner_dbapi/cursor.py @@ -279,7 +279,7 @@ def fetchall(self): self._checksum.consume_result(row) res.append(row) except Aborted: - self._connection.retry_transaction() + self.connection.retry_transaction() return self.fetchall() return res @@ -310,7 +310,7 @@ def fetchmany(self, size=None): except StopIteration: break except Aborted: - self._connection.retry_transaction() + self.connection.retry_transaction() return self.fetchmany(size) return items diff --git a/tests/unit/spanner_dbapi/test_cursor.py b/tests/unit/spanner_dbapi/test_cursor.py index 9f0510c4ab..c83dcb5e10 100644 --- a/tests/unit/spanner_dbapi/test_cursor.py +++ b/tests/unit/spanner_dbapi/test_cursor.py @@ -315,6 +315,22 @@ def test_fetchone(self): self.assertEqual(cursor.fetchone(), lst[i]) self.assertIsNone(cursor.fetchone()) + @unittest.skipIf( + sys.version_info[0] < 3, 
"Python 2 has an outdated iterator definition" + ) + def test_fetchone_w_autocommit(self): + from google.cloud.spanner_dbapi.checksum import ResultsChecksum + + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + connection.autocommit = True + cursor = self._make_one(connection) + cursor._checksum = ResultsChecksum() + lst = [1, 2, 3] + cursor._itr = iter(lst) + for i in range(len(lst)): + self.assertEqual(cursor.fetchone(), lst[i]) + self.assertIsNone(cursor.fetchone()) + def test_fetchmany(self): from google.cloud.spanner_dbapi.checksum import ResultsChecksum @@ -329,6 +345,21 @@ def test_fetchmany(self): result = cursor.fetchmany(len(lst)) self.assertEqual(result, lst[1:]) + def test_fetchmany_w_autocommit(self): + from google.cloud.spanner_dbapi.checksum import ResultsChecksum + + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + connection.autocommit = True + cursor = self._make_one(connection) + cursor._checksum = ResultsChecksum() + lst = [(1,), (2,), (3,)] + cursor._itr = iter(lst) + + self.assertEqual(cursor.fetchmany(), [lst[0]]) + + result = cursor.fetchmany(len(lst)) + self.assertEqual(result, lst[1:]) + def test_fetchall(self): from google.cloud.spanner_dbapi.checksum import ResultsChecksum @@ -339,6 +370,17 @@ def test_fetchall(self): cursor._itr = iter(lst) self.assertEqual(cursor.fetchall(), lst) + def test_fetchall_w_autocommit(self): + from google.cloud.spanner_dbapi.checksum import ResultsChecksum + + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + connection.autocommit = True + cursor = self._make_one(connection) + cursor._checksum = ResultsChecksum() + lst = [(1,), (2,), (3,)] + cursor._itr = iter(lst) + self.assertEqual(cursor.fetchall(), lst) + def test_nextset(self): from google.cloud.spanner_dbapi import exceptions @@ -586,3 +628,212 @@ def test_fetchone_retry_aborted_statements_checksums_mismatch(self): cursor.fetchone() run_mock.assert_called_with(statement, retried=True) + + def 
test_fetchall_retry_aborted(self): + """Check that aborted fetch re-executing transaction.""" + from google.api_core.exceptions import Aborted + from google.cloud.spanner_dbapi.checksum import ResultsChecksum + from google.cloud.spanner_dbapi.connection import connect + + with mock.patch( + "google.cloud.spanner_v1.instance.Instance.exists", return_value=True, + ): + with mock.patch( + "google.cloud.spanner_v1.database.Database.exists", return_value=True, + ): + connection = connect("test-instance", "test-database") + + cursor = connection.cursor() + cursor._checksum = ResultsChecksum() + + with mock.patch( + "google.cloud.spanner_dbapi.cursor.Cursor.__iter__", + side_effect=(Aborted("Aborted"), iter([])), + ): + with mock.patch( + "google.cloud.spanner_dbapi.connection.Connection.retry_transaction" + ) as retry_mock: + + cursor.fetchall() + + retry_mock.assert_called_with() + + def test_fetchall_retry_aborted_statements(self): + """Check that retried transaction executing the same statements.""" + from google.api_core.exceptions import Aborted + from google.cloud.spanner_dbapi.checksum import ResultsChecksum + from google.cloud.spanner_dbapi.connection import connect + from google.cloud.spanner_dbapi.cursor import Statement + + row = ["field1", "field2"] + with mock.patch( + "google.cloud.spanner_v1.instance.Instance.exists", return_value=True, + ): + with mock.patch( + "google.cloud.spanner_v1.database.Database.exists", return_value=True, + ): + connection = connect("test-instance", "test-database") + + cursor = connection.cursor() + cursor._checksum = ResultsChecksum() + cursor._checksum.consume_result(row) + + statement = Statement("SELECT 1", [], {}, cursor._checksum, False) + connection._statements.append(statement) + + with mock.patch( + "google.cloud.spanner_dbapi.cursor.Cursor.__iter__", + side_effect=(Aborted("Aborted"), iter(row)), + ): + with mock.patch( + "google.cloud.spanner_dbapi.connection.Connection.run_statement", + return_value=([row], 
ResultsChecksum()), + ) as run_mock: + cursor.fetchall() + + run_mock.assert_called_with(statement, retried=True) + + def test_fetchall_retry_aborted_statements_checksums_mismatch(self): + """Check transaction retrying with underlying data being changed.""" + from google.api_core.exceptions import Aborted + from google.cloud.spanner_dbapi.exceptions import RetryAborted + from google.cloud.spanner_dbapi.checksum import ResultsChecksum + from google.cloud.spanner_dbapi.connection import connect + from google.cloud.spanner_dbapi.cursor import Statement + + row = ["field1", "field2"] + row2 = ["updated_field1", "field2"] + + with mock.patch( + "google.cloud.spanner_v1.instance.Instance.exists", return_value=True, + ): + with mock.patch( + "google.cloud.spanner_v1.database.Database.exists", return_value=True, + ): + connection = connect("test-instance", "test-database") + + cursor = connection.cursor() + cursor._checksum = ResultsChecksum() + cursor._checksum.consume_result(row) + + statement = Statement("SELECT 1", [], {}, cursor._checksum, False) + connection._statements.append(statement) + + with mock.patch( + "google.cloud.spanner_dbapi.cursor.Cursor.__iter__", + side_effect=(Aborted("Aborted"), iter(row)), + ): + with mock.patch( + "google.cloud.spanner_dbapi.connection.Connection.run_statement", + return_value=([row2], ResultsChecksum()), + ) as run_mock: + + with self.assertRaises(RetryAborted): + cursor.fetchall() + + run_mock.assert_called_with(statement, retried=True) + + def test_fetchmany_retry_aborted(self): + """Check that aborted fetch re-executing transaction.""" + from google.api_core.exceptions import Aborted + from google.cloud.spanner_dbapi.checksum import ResultsChecksum + from google.cloud.spanner_dbapi.connection import connect + + with mock.patch( + "google.cloud.spanner_v1.instance.Instance.exists", return_value=True, + ): + with mock.patch( + "google.cloud.spanner_v1.database.Database.exists", return_value=True, + ): + connection = 
connect("test-instance", "test-database") + + cursor = connection.cursor() + cursor._checksum = ResultsChecksum() + + with mock.patch( + "google.cloud.spanner_dbapi.cursor.Cursor.__next__", + side_effect=(Aborted("Aborted"), None), + ): + with mock.patch( + "google.cloud.spanner_dbapi.connection.Connection.retry_transaction" + ) as retry_mock: + + cursor.fetchmany() + + retry_mock.assert_called_with() + + def test_fetchmany_retry_aborted_statements(self): + """Check that retried transaction executing the same statements.""" + from google.api_core.exceptions import Aborted + from google.cloud.spanner_dbapi.checksum import ResultsChecksum + from google.cloud.spanner_dbapi.connection import connect + from google.cloud.spanner_dbapi.cursor import Statement + + row = ["field1", "field2"] + with mock.patch( + "google.cloud.spanner_v1.instance.Instance.exists", return_value=True, + ): + with mock.patch( + "google.cloud.spanner_v1.database.Database.exists", return_value=True, + ): + connection = connect("test-instance", "test-database") + + cursor = connection.cursor() + cursor._checksum = ResultsChecksum() + cursor._checksum.consume_result(row) + + statement = Statement("SELECT 1", [], {}, cursor._checksum, False) + connection._statements.append(statement) + + with mock.patch( + "google.cloud.spanner_dbapi.cursor.Cursor.__next__", + side_effect=(Aborted("Aborted"), None), + ): + with mock.patch( + "google.cloud.spanner_dbapi.connection.Connection.run_statement", + return_value=([row], ResultsChecksum()), + ) as run_mock: + + cursor.fetchmany(len(row)) + + run_mock.assert_called_with(statement, retried=True) + + def test_fetchmany_retry_aborted_statements_checksums_mismatch(self): + """Check transaction retrying with underlying data being changed.""" + from google.api_core.exceptions import Aborted + from google.cloud.spanner_dbapi.exceptions import RetryAborted + from google.cloud.spanner_dbapi.checksum import ResultsChecksum + from google.cloud.spanner_dbapi.connection 
import connect + from google.cloud.spanner_dbapi.cursor import Statement + + row = ["field1", "field2"] + row2 = ["updated_field1", "field2"] + + with mock.patch( + "google.cloud.spanner_v1.instance.Instance.exists", return_value=True, + ): + with mock.patch( + "google.cloud.spanner_v1.database.Database.exists", return_value=True, + ): + connection = connect("test-instance", "test-database") + + cursor = connection.cursor() + cursor._checksum = ResultsChecksum() + cursor._checksum.consume_result(row) + + statement = Statement("SELECT 1", [], {}, cursor._checksum, False) + connection._statements.append(statement) + + with mock.patch( + "google.cloud.spanner_dbapi.cursor.Cursor.__next__", + side_effect=(Aborted("Aborted"), None), + ): + with mock.patch( + "google.cloud.spanner_dbapi.connection.Connection.run_statement", + return_value=([row2], ResultsChecksum()), + ) as run_mock: + + with self.assertRaises(RetryAborted): + cursor.fetchmany(len(row)) + + run_mock.assert_called_with(statement, retried=True) From 0375914342de98e3903bae2097142325028d18d9 Mon Sep 17 00:00:00 2001 From: Ilya Gurov Date: Mon, 15 Feb 2021 14:19:06 +0300 Subject: [PATCH 09/16] fix(db_api): add dummy lastrowid attribute (#227) --- google/cloud/spanner_dbapi/cursor.py | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/google/cloud/spanner_dbapi/cursor.py b/google/cloud/spanner_dbapi/cursor.py index 707bf617af..dd097d5fc5 100644 --- a/google/cloud/spanner_dbapi/cursor.py +++ b/google/cloud/spanner_dbapi/cursor.py @@ -56,6 +56,7 @@ def __init__(self, connection): self._itr = None self._result_set = None self._row_count = _UNSET_COUNT + self.lastrowid = None self.connection = connection self._is_closed = False # the currently running SQL statement results checksum @@ -89,7 +90,10 @@ def description(self): :rtype: tuple :returns: A tuple of columns' information. 
""" - if not (self._result_set and self._result_set.metadata): + if not self._result_set: + return None + + if not getattr(self._result_set, "metadata", None): return None row_type = self._result_set.metadata.row_type From 539f14533afd348a328716aa511d453ca3bb19f5 Mon Sep 17 00:00:00 2001 From: larkee <31196561+larkee@users.noreply.github.com> Date: Wed, 17 Feb 2021 13:06:26 +1100 Subject: [PATCH 10/16] fix: use datetime timezone info when generating timestamp strings (#236) * fix: use datetime timezone when generating timestamp strings * style: fix lint Co-authored-by: larkee --- google/cloud/spanner_v1/_helpers.py | 2 +- tests/unit/test__helpers.py | 12 ++++++++++++ 2 files changed, 13 insertions(+), 1 deletion(-) diff --git a/google/cloud/spanner_v1/_helpers.py b/google/cloud/spanner_v1/_helpers.py index 4ac13f7c6b..79a387eac6 100644 --- a/google/cloud/spanner_v1/_helpers.py +++ b/google/cloud/spanner_v1/_helpers.py @@ -118,7 +118,7 @@ def _make_value_pb(value): if isinstance(value, datetime_helpers.DatetimeWithNanoseconds): return Value(string_value=value.rfc3339()) if isinstance(value, datetime.datetime): - return Value(string_value=_datetime_to_rfc3339(value)) + return Value(string_value=_datetime_to_rfc3339(value, ignore_zone=False)) if isinstance(value, datetime.date): return Value(string_value=value.isoformat()) if isinstance(value, six.binary_type): diff --git a/tests/unit/test__helpers.py b/tests/unit/test__helpers.py index 5d6b015505..d554f3f717 100644 --- a/tests/unit/test__helpers.py +++ b/tests/unit/test__helpers.py @@ -215,6 +215,18 @@ def test_w_datetime(self): self.assertIsInstance(value_pb, Value) self.assertEqual(value_pb.string_value, datetime_helpers.to_rfc3339(now)) + def test_w_timestamp_w_tz(self): + import datetime + import pytz + from google.protobuf.struct_pb2 import Value + + when = datetime.datetime( + 2021, 2, 8, 0, 0, 0, tzinfo=pytz.timezone("US/Mountain") + ) + value_pb = self._callFUT(when) + self.assertIsInstance(value_pb, Value) + 
self.assertEqual(value_pb.string_value, "2021-02-08T07:00:00.000000Z") + def test_w_numeric(self): import decimal from google.protobuf.struct_pb2 import Value From 36b12a7b53cdbedf543d2b3bb132fb9e13cefb65 Mon Sep 17 00:00:00 2001 From: HemangChothani <50404902+HemangChothani@users.noreply.github.com> Date: Wed, 17 Feb 2021 21:30:52 -0500 Subject: [PATCH 11/16] fix: fix execute insert for homogeneous statement (#233) --- google/cloud/spanner_dbapi/parse_utils.py | 16 +++---- tests/unit/spanner_dbapi/test_connection.py | 19 ++++++++ tests/unit/spanner_dbapi/test_parse_utils.py | 48 +++++++------------- 3 files changed, 42 insertions(+), 41 deletions(-) diff --git a/google/cloud/spanner_dbapi/parse_utils.py b/google/cloud/spanner_dbapi/parse_utils.py index abc36b397c..f76689fdf2 100644 --- a/google/cloud/spanner_dbapi/parse_utils.py +++ b/google/cloud/spanner_dbapi/parse_utils.py @@ -306,19 +306,15 @@ def parse_insert(insert_sql, params): # Case c) columns = [mi.strip(" `") for mi in match.group("columns").split(",")] - sql_params_list = [] - insert_sql_preamble = "INSERT INTO %s (%s) VALUES %s" % ( - match.group("table_name"), - match.group("columns"), - values.argv[0], - ) values_pyformat = [str(arg) for arg in values.argv] rows_list = rows_for_insert_or_update(columns, params, values_pyformat) - insert_sql_preamble = sanitize_literals_for_upload(insert_sql_preamble) - for row in rows_list: - sql_params_list.append((insert_sql_preamble, row)) - return {"sql_params_list": sql_params_list} + return { + "homogenous": True, + "table": match.group("table_name"), + "columns": columns, + "values": rows_list, + } # Case d) # insert_sql is of the form: diff --git a/tests/unit/spanner_dbapi/test_connection.py b/tests/unit/spanner_dbapi/test_connection.py index a338055a2c..f70e7fe669 100644 --- a/tests/unit/spanner_dbapi/test_connection.py +++ b/tests/unit/spanner_dbapi/test_connection.py @@ -379,6 +379,25 @@ def test_run_statement_dont_remember_retried_statements(self): 
self.assertEqual(len(connection._statements), 0) + def test_run_statement_w_homogeneous_insert_statements(self): + """Check that Connection executed homogeneous insert statements.""" + from google.cloud.spanner_dbapi.checksum import ResultsChecksum + from google.cloud.spanner_dbapi.cursor import Statement + + sql = "INSERT INTO T (f1, f2) VALUES (%s, %s), (%s, %s)" + params = ["a", "b", "c", "d"] + param_types = {"f1": str, "f2": str} + + connection = self._make_connection() + + statement = Statement(sql, params, param_types, ResultsChecksum(), True) + with mock.patch( + "google.cloud.spanner_dbapi.connection.Connection.transaction_checkout" + ): + connection.run_statement(statement, retried=True) + + self.assertEqual(len(connection._statements), 0) + def test_clear_statements_on_commit(self): """ Check that all the saved statements are diff --git a/tests/unit/spanner_dbapi/test_parse_utils.py b/tests/unit/spanner_dbapi/test_parse_utils.py index 3713ac11a8..6338f39e5d 100644 --- a/tests/unit/spanner_dbapi/test_parse_utils.py +++ b/tests/unit/spanner_dbapi/test_parse_utils.py @@ -72,32 +72,20 @@ def test_parse_insert(self): "INSERT INTO django_migrations (app, name, applied) VALUES (%s, %s, %s)", [1, 2, 3, 4, 5, 6], { - "sql_params_list": [ - ( - "INSERT INTO django_migrations (app, name, applied) VALUES (%s, %s, %s)", - (1, 2, 3), - ), - ( - "INSERT INTO django_migrations (app, name, applied) VALUES (%s, %s, %s)", - (4, 5, 6), - ), - ] + "homogenous": True, + "table": "django_migrations", + "columns": ["app", "name", "applied"], + "values": [(1, 2, 3), (4, 5, 6)], }, ), ( "INSERT INTO django_migrations(app, name, applied) VALUES (%s, %s, %s)", [1, 2, 3, 4, 5, 6], { - "sql_params_list": [ - ( - "INSERT INTO django_migrations (app, name, applied) VALUES (%s, %s, %s)", - (1, 2, 3), - ), - ( - "INSERT INTO django_migrations (app, name, applied) VALUES (%s, %s, %s)", - (4, 5, 6), - ), - ] + "homogenous": True, + "table": "django_migrations", + "columns": ["app", "name", 
"applied"], + "values": [(1, 2, 3), (4, 5, 6)], }, ), ( @@ -118,25 +106,23 @@ def test_parse_insert(self): ), ( "INSERT INTO ap (n, ct, cn) " - "VALUES (%s, %s, %s), (%s, %s, %s), (%s, %s, %s),(%s, %s, %s)", + "VALUES (%s, %s, %s), (%s, %s, %s), (%s, %s, %s),(%s,%s, %s)", (1, 2, 3, 4, 5, 6, 7, 8, 9), { - "sql_params_list": [ - ("INSERT INTO ap (n, ct, cn) VALUES (%s, %s, %s)", (1, 2, 3)), - ("INSERT INTO ap (n, ct, cn) VALUES (%s, %s, %s)", (4, 5, 6)), - ("INSERT INTO ap (n, ct, cn) VALUES (%s, %s, %s)", (7, 8, 9)), - ] + "homogenous": True, + "table": "ap", + "columns": ["n", "ct", "cn"], + "values": [(1, 2, 3), (4, 5, 6), (7, 8, 9)], }, ), ( "INSERT INTO `no` (`yes`) VALUES (%s)", (1, 4, 5), { - "sql_params_list": [ - ("INSERT INTO `no` (`yes`) VALUES (%s)", (1,)), - ("INSERT INTO `no` (`yes`) VALUES (%s)", (4,)), - ("INSERT INTO `no` (`yes`) VALUES (%s)", (5,)), - ] + "homogenous": True, + "table": "`no`", + "columns": ["yes"], + "values": [(1,), (4,), (5,)], }, ), ( From a082e5d7d2195ab9429a8e0bef4a664b59fdf771 Mon Sep 17 00:00:00 2001 From: larkee <31196561+larkee@users.noreply.github.com> Date: Mon, 22 Feb 2021 12:34:54 +1100 Subject: [PATCH 12/16] feat: add support for Point In Time Recovery (PITR) (#148) * feat: add PITR-lite support * fix: remove unneeded conversion for earliest_version_time * test: fix list_databases list comprehension * feat: add support for PITR-lite backups (#1) * Backup changes * Basic tests * Add system tests * Fix system tests * Add retention period to backup systests * style: fix lint errors * test: fix failing backup system tests (#2) * Remove unnecessary retention period setting * Fix systests * Review changes (#3) * Remove unnecessary retention period setting * Fix systests * Review changes * style: fix lint Co-authored-by: larkee Co-authored-by: Zoe --- google/cloud/spanner_v1/backup.py | 28 +++- google/cloud/spanner_v1/database.py | 23 +++ google/cloud/spanner_v1/instance.py | 22 ++- tests/system/test_system.py | 217 
+++++++++++++++++++++++++++- tests/unit/test_backup.py | 21 ++- tests/unit/test_database.py | 18 +++ 6 files changed, 318 insertions(+), 11 deletions(-) diff --git a/google/cloud/spanner_v1/backup.py b/google/cloud/spanner_v1/backup.py index 405a9e2be2..2277a33fce 100644 --- a/google/cloud/spanner_v1/backup.py +++ b/google/cloud/spanner_v1/backup.py @@ -51,14 +51,23 @@ class Backup(object): :param expire_time: (Optional) The expire time that will be used to create the backup. Required if the create method needs to be called. + + :type version_time: :class:`datetime.datetime` + :param version_time: (Optional) The version time that was specified for + the externally consistent copy of the database. If + not present, it is the same as the `create_time` of + the backup. """ - def __init__(self, backup_id, instance, database="", expire_time=None): + def __init__( + self, backup_id, instance, database="", expire_time=None, version_time=None + ): self.backup_id = backup_id self._instance = instance self._database = database self._expire_time = expire_time self._create_time = None + self._version_time = version_time self._size_bytes = None self._state = None self._referencing_databases = None @@ -109,6 +118,16 @@ def create_time(self): """ return self._create_time + @property + def version_time(self): + """Version time of this backup. + + :rtype: :class:`datetime.datetime` + :returns: a datetime object representing the version time of + this backup + """ + return self._version_time + @property def size_bytes(self): """Size of this backup in bytes. 
@@ -190,7 +209,11 @@ def create(self): raise ValueError("database not set") api = self._instance._client.database_admin_api metadata = _metadata_with_prefix(self.name) - backup = BackupPB(database=self._database, expire_time=self.expire_time,) + backup = BackupPB( + database=self._database, + expire_time=self.expire_time, + version_time=self.version_time, + ) future = api.create_backup( parent=self._instance.name, @@ -228,6 +251,7 @@ def reload(self): self._database = pb.database self._expire_time = pb.expire_time self._create_time = pb.create_time + self._version_time = pb.version_time self._size_bytes = pb.size_bytes self._state = BackupPB.State(pb.state) self._referencing_databases = pb.referencing_databases diff --git a/google/cloud/spanner_v1/database.py b/google/cloud/spanner_v1/database.py index c1c7953648..7a89ccdb3e 100644 --- a/google/cloud/spanner_v1/database.py +++ b/google/cloud/spanner_v1/database.py @@ -107,6 +107,8 @@ def __init__(self, database_id, instance, ddl_statements=(), pool=None): self._state = None self._create_time = None self._restore_info = None + self._version_retention_period = None + self._earliest_version_time = None if pool is None: pool = BurstyPool() @@ -204,6 +206,25 @@ def restore_info(self): """ return self._restore_info + @property + def version_retention_period(self): + """The period in which Cloud Spanner retains all versions of data + for the database. + + :rtype: str + :returns: a string representing the duration of the version retention period + """ + return self._version_retention_period + + @property + def earliest_version_time(self): + """The earliest time at which older versions of the data can be read. + + :rtype: :class:`datetime.datetime` + :returns: a datetime object representing the earliest version time + """ + return self._earliest_version_time + @property def ddl_statements(self): """DDL Statements used to define database schema. 
@@ -313,6 +334,8 @@ def reload(self): self._state = DatabasePB.State(response.state) self._create_time = response.create_time self._restore_info = response.restore_info + self._version_retention_period = response.version_retention_period + self._earliest_version_time = response.earliest_version_time def update_ddl(self, ddl_statements, operation_id=""): """Update DDL for this database. diff --git a/google/cloud/spanner_v1/instance.py b/google/cloud/spanner_v1/instance.py index b422c57afd..ffaed41c91 100644 --- a/google/cloud/spanner_v1/instance.py +++ b/google/cloud/spanner_v1/instance.py @@ -400,7 +400,7 @@ def list_databases(self, page_size=None): ) return page_iter - def backup(self, backup_id, database="", expire_time=None): + def backup(self, backup_id, database="", expire_time=None, version_time=None): """Factory to create a backup within this instance. :type backup_id: str @@ -415,13 +415,29 @@ def backup(self, backup_id, database="", expire_time=None): :param expire_time: Optional. The expire time that will be used when creating the backup. Required if the create method needs to be called. + + :type version_time: :class:`datetime.datetime` + :param version_time: + Optional. The version time that will be used to create the externally + consistent copy of the database. If not present, it is the same as + the `create_time` of the backup. """ try: return Backup( - backup_id, self, database=database.name, expire_time=expire_time + backup_id, + self, + database=database.name, + expire_time=expire_time, + version_time=version_time, ) except AttributeError: - return Backup(backup_id, self, database=database, expire_time=expire_time) + return Backup( + backup_id, + self, + database=database, + expire_time=expire_time, + version_time=version_time, + ) def list_backups(self, filter_="", page_size=None): """List backups for the instance. 
diff --git a/tests/system/test_system.py b/tests/system/test_system.py index 90031a3e3a..86be97d3eb 100644 --- a/tests/system/test_system.py +++ b/tests/system/test_system.py @@ -355,6 +355,62 @@ def test_create_database(self): database_ids = [database.name for database in Config.INSTANCE.list_databases()] self.assertIn(temp_db.name, database_ids) + @unittest.skipIf( + USE_EMULATOR, "PITR-lite features are not supported by the emulator" + ) + def test_create_database_pitr_invalid_retention_period(self): + pool = BurstyPool(labels={"testcase": "create_database_pitr"}) + temp_db_id = "temp_db" + unique_resource_id("_") + retention_period = "0d" + ddl_statements = [ + "ALTER DATABASE {}" + " SET OPTIONS (version_retention_period = '{}')".format( + temp_db_id, retention_period + ) + ] + temp_db = Config.INSTANCE.database( + temp_db_id, pool=pool, ddl_statements=ddl_statements + ) + with self.assertRaises(exceptions.InvalidArgument): + temp_db.create() + + @unittest.skipIf( + USE_EMULATOR, "PITR-lite features are not supported by the emulator" + ) + def test_create_database_pitr_success(self): + pool = BurstyPool(labels={"testcase": "create_database_pitr"}) + temp_db_id = "temp_db" + unique_resource_id("_") + retention_period = "7d" + ddl_statements = [ + "ALTER DATABASE {}" + " SET OPTIONS (version_retention_period = '{}')".format( + temp_db_id, retention_period + ) + ] + temp_db = Config.INSTANCE.database( + temp_db_id, pool=pool, ddl_statements=ddl_statements + ) + operation = temp_db.create() + self.to_delete.append(temp_db) + + # We want to make sure the operation completes. + operation.result(30) # raises on failure / timeout. 
+ + database_ids = [database.name for database in Config.INSTANCE.list_databases()] + self.assertIn(temp_db.name, database_ids) + + temp_db.reload() + self.assertEqual(temp_db.version_retention_period, retention_period) + + with temp_db.snapshot() as snapshot: + results = snapshot.execute_sql( + "SELECT OPTION_VALUE AS version_retention_period " + "FROM INFORMATION_SCHEMA.DATABASE_OPTIONS " + "WHERE SCHEMA_NAME = '' AND OPTION_NAME = 'version_retention_period'" + ) + for result in results: + self.assertEqual(result[0], retention_period) + def test_table_not_found(self): temp_db_id = "temp_db" + unique_resource_id("_") @@ -407,6 +463,62 @@ def test_update_database_ddl_with_operation_id(self): self.assertEqual(len(temp_db.ddl_statements), len(ddl_statements)) + @unittest.skipIf( + USE_EMULATOR, "PITR-lite features are not supported by the emulator" + ) + def test_update_database_ddl_pitr_invalid(self): + pool = BurstyPool(labels={"testcase": "update_database_ddl_pitr"}) + temp_db_id = "temp_db" + unique_resource_id("_") + retention_period = "0d" + temp_db = Config.INSTANCE.database(temp_db_id, pool=pool) + create_op = temp_db.create() + self.to_delete.append(temp_db) + + # We want to make sure the operation completes. + create_op.result(240) # raises on failure / timeout. 
+ + self.assertIsNone(temp_db.version_retention_period) + + ddl_statements = DDL_STATEMENTS + [ + "ALTER DATABASE {}" + " SET OPTIONS (version_retention_period = '{}')".format( + temp_db_id, retention_period + ) + ] + with self.assertRaises(exceptions.InvalidArgument): + temp_db.update_ddl(ddl_statements) + + @unittest.skipIf( + USE_EMULATOR, "PITR-lite features are not supported by the emulator" + ) + def test_update_database_ddl_pitr_success(self): + pool = BurstyPool(labels={"testcase": "update_database_ddl_pitr"}) + temp_db_id = "temp_db" + unique_resource_id("_") + retention_period = "7d" + temp_db = Config.INSTANCE.database(temp_db_id, pool=pool) + create_op = temp_db.create() + self.to_delete.append(temp_db) + + # We want to make sure the operation completes. + create_op.result(240) # raises on failure / timeout. + + self.assertIsNone(temp_db.version_retention_period) + + ddl_statements = DDL_STATEMENTS + [ + "ALTER DATABASE {}" + " SET OPTIONS (version_retention_period = '{}')".format( + temp_db_id, retention_period + ) + ] + operation = temp_db.update_ddl(ddl_statements) + + # We want to make sure the operation completes. + operation.result(240) # raises on failure / timeout. + + temp_db.reload() + self.assertEqual(temp_db.version_retention_period, retention_period) + self.assertEqual(len(temp_db.ddl_statements), len(ddl_statements)) + def test_db_batch_insert_then_db_snapshot_read(self): retry = RetryInstanceState(_has_all_ddl) retry(self._db.reload)() @@ -486,6 +598,8 @@ class TestBackupAPI(unittest.TestCase, _TestData): @classmethod def setUpClass(cls): + from datetime import datetime + pool = BurstyPool(labels={"testcase": "database_api"}) ddl_statements = EMULATOR_DDL_STATEMENTS if USE_EMULATOR else DDL_STATEMENTS db1 = Config.INSTANCE.database( @@ -498,6 +612,7 @@ def setUpClass(cls): op2 = db2.create() op1.result(SPANNER_OPERATION_TIMEOUT_IN_SECONDS) # raises on failure / timeout. 
op2.result(SPANNER_OPERATION_TIMEOUT_IN_SECONDS) # raises on failure / timeout. + cls.database_version_time = datetime.utcnow().replace(tzinfo=UTC) current_config = Config.INSTANCE.configuration_name same_config_instance_id = "same-config" + unique_resource_id("-") @@ -573,7 +688,12 @@ def test_backup_workflow(self): expire_time = expire_time.replace(tzinfo=UTC) # Create backup. - backup = instance.backup(backup_id, database=self._db, expire_time=expire_time) + backup = instance.backup( + backup_id, + database=self._db, + expire_time=expire_time, + version_time=self.database_version_time, + ) operation = backup.create() self.to_delete.append(backup) @@ -588,6 +708,7 @@ def test_backup_workflow(self): self.assertEqual(self._db.name, backup._database) self.assertEqual(expire_time, backup.expire_time) self.assertIsNotNone(backup.create_time) + self.assertEqual(self.database_version_time, backup.version_time) self.assertIsNotNone(backup.size_bytes) self.assertIsNotNone(backup.state) @@ -602,12 +723,92 @@ def test_backup_workflow(self): database = instance.database(restored_id) self.to_drop.append(database) operation = database.restore(source=backup) - operation.result() + restored_db = operation.result() + self.assertEqual( + self.database_version_time, restored_db.restore_info.backup_info.create_time + ) + + metadata = operation.metadata + self.assertEqual(self.database_version_time, metadata.backup_info.create_time) database.drop() backup.delete() self.assertFalse(backup.exists()) + def test_backup_version_time_defaults_to_create_time(self): + from datetime import datetime + from datetime import timedelta + from pytz import UTC + + instance = Config.INSTANCE + backup_id = "backup_id" + unique_resource_id("_") + expire_time = datetime.utcnow() + timedelta(days=3) + expire_time = expire_time.replace(tzinfo=UTC) + + # Create backup. 
+ backup = instance.backup(backup_id, database=self._db, expire_time=expire_time,) + operation = backup.create() + self.to_delete.append(backup) + + # Check metadata. + metadata = operation.metadata + self.assertEqual(backup.name, metadata.name) + self.assertEqual(self._db.name, metadata.database) + operation.result() + + # Check backup object. + backup.reload() + self.assertEqual(self._db.name, backup._database) + self.assertIsNotNone(backup.create_time) + self.assertEqual(backup.create_time, backup.version_time) + + backup.delete() + self.assertFalse(backup.exists()) + + def test_create_backup_invalid_version_time_past(self): + from datetime import datetime + from datetime import timedelta + from pytz import UTC + + backup_id = "backup_id" + unique_resource_id("_") + expire_time = datetime.utcnow() + timedelta(days=3) + expire_time = expire_time.replace(tzinfo=UTC) + version_time = datetime.utcnow() - timedelta(days=10) + version_time = version_time.replace(tzinfo=UTC) + + backup = Config.INSTANCE.backup( + backup_id, + database=self._db, + expire_time=expire_time, + version_time=version_time, + ) + + with self.assertRaises(exceptions.InvalidArgument): + op = backup.create() + op.result() + + def test_create_backup_invalid_version_time_future(self): + from datetime import datetime + from datetime import timedelta + from pytz import UTC + + backup_id = "backup_id" + unique_resource_id("_") + expire_time = datetime.utcnow() + timedelta(days=3) + expire_time = expire_time.replace(tzinfo=UTC) + version_time = datetime.utcnow() + timedelta(days=2) + version_time = version_time.replace(tzinfo=UTC) + + backup = Config.INSTANCE.backup( + backup_id, + database=self._db, + expire_time=expire_time, + version_time=version_time, + ) + + with self.assertRaises(exceptions.InvalidArgument): + op = backup.create() + op.result() + def test_restore_to_diff_instance(self): from datetime import datetime from datetime import timedelta @@ -706,7 +907,10 @@ def test_list_backups(self): 
expire_time_1 = expire_time_1.replace(tzinfo=UTC) backup1 = Config.INSTANCE.backup( - backup_id_1, database=self._dbs[0], expire_time=expire_time_1 + backup_id_1, + database=self._dbs[0], + expire_time=expire_time_1, + version_time=self.database_version_time, ) expire_time_2 = datetime.utcnow() + timedelta(days=1) @@ -746,6 +950,13 @@ def test_list_backups(self): for backup in instance.list_backups(filter_=filter_): self.assertEqual(backup.name, backup2.name) + # List backups filtered by version time. + filter_ = 'version_time > "{0}"'.format( + create_time_compare.strftime("%Y-%m-%dT%H:%M:%S.%fZ") + ) + for backup in instance.list_backups(filter_=filter_): + self.assertEqual(backup.name, backup2.name) + # List backups filtered by expire time. filter_ = 'expire_time > "{0}"'.format( expire_time_1.strftime("%Y-%m-%dT%H:%M:%S.%fZ") diff --git a/tests/unit/test_backup.py b/tests/unit/test_backup.py index 748c460291..bf6ce68a84 100644 --- a/tests/unit/test_backup.py +++ b/tests/unit/test_backup.py @@ -266,6 +266,9 @@ def test_create_database_not_set(self): def test_create_success(self): from google.cloud.spanner_admin_database_v1 import Backup + from datetime import datetime + from datetime import timedelta + from pytz import UTC op_future = object() client = _Client() @@ -273,12 +276,22 @@ def test_create_success(self): api.create_backup.return_value = op_future instance = _Instance(self.INSTANCE_NAME, client=client) - timestamp = self._make_timestamp() + version_timestamp = datetime.utcnow() - timedelta(minutes=5) + version_timestamp = version_timestamp.replace(tzinfo=UTC) + expire_timestamp = self._make_timestamp() backup = self._make_one( - self.BACKUP_ID, instance, database=self.DATABASE_NAME, expire_time=timestamp + self.BACKUP_ID, + instance, + database=self.DATABASE_NAME, + expire_time=expire_timestamp, + version_time=version_timestamp, ) - backup_pb = Backup(database=self.DATABASE_NAME, expire_time=timestamp,) + backup_pb = Backup( + 
database=self.DATABASE_NAME, + expire_time=expire_timestamp, + version_time=version_timestamp, + ) future = backup.create() self.assertIs(future, op_future) @@ -437,6 +450,7 @@ def test_reload_success(self): name=self.BACKUP_NAME, database=self.DATABASE_NAME, expire_time=timestamp, + version_time=timestamp, create_time=timestamp, size_bytes=10, state=1, @@ -452,6 +466,7 @@ def test_reload_success(self): self.assertEqual(backup.database, self.DATABASE_NAME) self.assertEqual(backup.expire_time, timestamp) self.assertEqual(backup.create_time, timestamp) + self.assertEqual(backup.version_time, timestamp) self.assertEqual(backup.size_bytes, 10) self.assertEqual(backup.state, Backup.State.CREATING) self.assertEqual(backup.referencing_databases, []) diff --git a/tests/unit/test_database.py b/tests/unit/test_database.py index 175c269d50..a2a5b84b2f 100644 --- a/tests/unit/test_database.py +++ b/tests/unit/test_database.py @@ -249,6 +249,20 @@ def test_restore_info(self): ) self.assertEqual(database.restore_info, restore_info) + def test_version_retention_period(self): + instance = _Instance(self.INSTANCE_NAME) + pool = _Pool() + database = self._make_one(self.DATABASE_ID, instance, pool=pool) + version_retention_period = database._version_retention_period = "1d" + self.assertEqual(database.version_retention_period, version_retention_period) + + def test_earliest_version_time(self): + instance = _Instance(self.INSTANCE_NAME) + pool = _Pool() + database = self._make_one(self.DATABASE_ID, instance, pool=pool) + earliest_version_time = database._earliest_version_time = self._make_timestamp() + self.assertEqual(database.earliest_version_time, earliest_version_time) + def test_spanner_api_property_w_scopeless_creds(self): client = _Client() @@ -581,6 +595,8 @@ def test_reload_success(self): state=2, create_time=_datetime_to_pb_timestamp(timestamp), restore_info=restore_info, + version_retention_period="1d", + earliest_version_time=_datetime_to_pb_timestamp(timestamp), ) 
api.get_database.return_value = db_pb
         instance = _Instance(self.INSTANCE_NAME, client=client)

@@ -591,6 +607,8 @@ def test_reload_success(self):
         self.assertEqual(database._state, Database.State.READY)
         self.assertEqual(database._create_time, timestamp)
         self.assertEqual(database._restore_info, restore_info)
+        self.assertEqual(database._version_retention_period, "1d")
+        self.assertEqual(database._earliest_version_time, timestamp)
         self.assertEqual(database._ddl_statements, tuple(DDL_STATEMENTS))

         api.get_database_ddl.assert_called_once_with(

From fd14b13c79acbba073b4a0ec6cff799e407c2820 Mon Sep 17 00:00:00 2001
From: larkee <31196561+larkee@users.noreply.github.com>
Date: Tue, 23 Feb 2021 14:24:02 +1100
Subject: [PATCH 13/16] test: fix PITR restored database version time assertion (#238)

This PR fixes the assertion to use `metadata.backup_info.version_time`
instead of `metadata.backup_info.create_time`. It looks like it was passing
before the backend correctly supported it, and I forgot to re-run the tests
before merging #148 (whoops!)
and so it is currently failing and preventing #205 from being merged: https://source.cloud.google.com/results/invocations/8f0f5dab-1b35-4ce3-bb72-0ce9e79ab89d/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-spanner%2Fpresubmit%2Fpresubmit/log --- test.py | 53 +++++++++++++++++++++++++++++++++++++ tests/system/test_system.py | 5 ++-- 2 files changed, 56 insertions(+), 2 deletions(-) create mode 100644 test.py diff --git a/test.py b/test.py new file mode 100644 index 0000000000..7888bbd090 --- /dev/null +++ b/test.py @@ -0,0 +1,53 @@ +import base64 +import time +from google.cloud import spanner +from google.auth.credentials import AnonymousCredentials + +instance_id = 'test-instance' +database_id = 'test-db' + +spanner_client = spanner.Client( + project='test-project', + client_options={"api_endpoint": 'localhost:9010'}, + credentials=AnonymousCredentials() +) + +instance = spanner_client.instance(instance_id) +op = instance.create() +op.result() + +database = instance.database(database_id, ddl_statements=[ + "CREATE TABLE Test (id STRING(36) NOT NULL, megafield BYTES(MAX)) PRIMARY KEY (id)" +]) +op = database.create() +op.result() + +# This must be large enough that the SDK will split the megafield payload across two query chunks +# and try to recombine them, causing the error: +data = base64.standard_b64encode(("a" * 1000000).encode("utf8")) + +try: + with database.batch() as batch: + batch.insert( + table="Test", + columns=("id", "megafield"), + values=[ + (1, data), + ], + ) + + with database.snapshot() as snapshot: + toc = time.time() + results = snapshot.execute_sql( + "SELECT * FROM Test" + ) + tic = time.time() + + print("TIME: ", tic - toc) + + for row in results: + print("Id: ", row[0]) + print("Megafield: ", row[1][:100]) +finally: + database.drop() + instance.delete() \ No newline at end of file diff --git a/tests/system/test_system.py b/tests/system/test_system.py index 86be97d3eb..6d337e96fb 100644 --- 
a/tests/system/test_system.py +++ b/tests/system/test_system.py @@ -725,11 +725,12 @@ def test_backup_workflow(self): operation = database.restore(source=backup) restored_db = operation.result() self.assertEqual( - self.database_version_time, restored_db.restore_info.backup_info.create_time + self.database_version_time, + restored_db.restore_info.backup_info.version_time, ) metadata = operation.metadata - self.assertEqual(self.database_version_time, metadata.backup_info.create_time) + self.assertEqual(self.database_version_time, metadata.backup_info.version_time) database.drop() backup.delete() From 434967e3a433b6516f5792dcbfef7ba950f091c5 Mon Sep 17 00:00:00 2001 From: larkee <31196561+larkee@users.noreply.github.com> Date: Tue, 23 Feb 2021 15:18:49 +1100 Subject: [PATCH 14/16] feat: add support to log commit stats (#205) * feat: add support for logging commit stats * test: add commit stats to CommitResponse * style: fix lint errors * refactor: remove log formatting * test: update info arg assertions * docs: document logger param * refactor: pass CommitStats via extra kwarg * fix: ensure logger is unused if commit fails Co-authored-by: larkee --- google/cloud/spanner_v1/batch.py | 22 ++- google/cloud/spanner_v1/database.py | 39 ++++- google/cloud/spanner_v1/instance.py | 12 +- google/cloud/spanner_v1/session.py | 7 +- google/cloud/spanner_v1/transaction.py | 23 ++- tests/unit/test_batch.py | 16 +-- tests/unit/test_database.py | 125 +++++++++++++++- tests/unit/test_instance.py | 6 +- tests/unit/test_session.py | 188 ++++++++++++++++++++++--- tests/unit/test_transaction.py | 28 ++-- 10 files changed, 410 insertions(+), 56 deletions(-) diff --git a/google/cloud/spanner_v1/batch.py b/google/cloud/spanner_v1/batch.py index 27cd3c8b58..c04fa6e5a4 100644 --- a/google/cloud/spanner_v1/batch.py +++ b/google/cloud/spanner_v1/batch.py @@ -14,6 +14,7 @@ """Context manager for Cloud Spanner batched writes.""" +from google.cloud.spanner_v1 import CommitRequest from 
google.cloud.spanner_v1 import Mutation
 from google.cloud.spanner_v1 import TransactionOptions

@@ -123,6 +124,7 @@ class Batch(_BatchBase):
     """

     committed = None
     """Timestamp at which the batch was successfully committed."""
+    commit_stats = None

     def _check_state(self):
@@ -136,9 +138,13 @@ def _check_state(self):
         if self.committed is not None:
             raise ValueError("Batch already committed")

-    def commit(self):
+    def commit(self, return_commit_stats=False):
         """Commit mutations to the database.

+        :type return_commit_stats: bool
+        :param return_commit_stats:
+            If true, the response will return commit stats which can be accessed through ``commit_stats``.
+
         :rtype: datetime
         :returns: timestamp of the committed changes.
         """
@@ -148,14 +154,16 @@ def commit(self):
         metadata = _metadata_with_prefix(database.name)
         txn_options = TransactionOptions(read_write=TransactionOptions.ReadWrite())
         trace_attributes = {"num_mutations": len(self._mutations)}
+        request = CommitRequest(
+            session=self._session.name,
+            mutations=self._mutations,
+            single_use_transaction=txn_options,
+            return_commit_stats=return_commit_stats,
+        )
         with trace_call("CloudSpanner.Commit", self._session, trace_attributes):
-            response = api.commit(
-                session=self._session.name,
-                mutations=self._mutations,
-                single_use_transaction=txn_options,
-                metadata=metadata,
-            )
+            response = api.commit(request=request, metadata=metadata,)
         self.committed = response.commit_timestamp
+        self.commit_stats = response.commit_stats
         return self.committed

     def __enter__(self):
diff --git a/google/cloud/spanner_v1/database.py b/google/cloud/spanner_v1/database.py
index 7a89ccdb3e..1b3448439c 100644
--- a/google/cloud/spanner_v1/database.py
+++ b/google/cloud/spanner_v1/database.py
@@ -17,6 +17,7 @@
 import copy
 import functools
 import grpc
+import logging
 import re
 import threading

@@ -95,11 +96,19 @@ class Database(object):
     :param pool: (Optional) session pool to be used by database.
If not passed, the database will construct an instance of
         :class:`~google.cloud.spanner_v1.pool.BurstyPool`.
+
+    :type logger: `logging.Logger`
+    :param logger: (Optional) a custom logger that is used if `log_commit_stats`
+                   is `True` to log commit statistics. If not passed, a logger
+                   will be created when needed that will log the commit statistics
+                   to stderr.
     """

     _spanner_api = None

-    def __init__(self, database_id, instance, ddl_statements=(), pool=None):
+    def __init__(
+        self, database_id, instance, ddl_statements=(), pool=None, logger=None
+    ):
         self.database_id = database_id
         self._instance = instance
         self._ddl_statements = _check_ddl_statements(ddl_statements)
@@ -109,6 +118,8 @@ def __init__(self, database_id, instance, ddl_statements=(), pool=None):
         self._restore_info = None
         self._version_retention_period = None
         self._earliest_version_time = None
+        self.log_commit_stats = False
+        self._logger = logger

         if pool is None:
             pool = BurstyPool()
@@ -237,6 +248,25 @@ def ddl_statements(self):
         """
         return self._ddl_statements

+    @property
+    def logger(self):
+        """Logger used by the database.
+
+        The default logger will log commit stats at the log level INFO using
+        `sys.stderr`.
+
+        :rtype: :class:`logging.Logger` or `None`
+        :returns: the logger
+        """
+        if self._logger is None:
+            self._logger = logging.getLogger(self.name)
+            self._logger.setLevel(logging.INFO)
+
+            ch = logging.StreamHandler()
+            ch.setLevel(logging.INFO)
+            self._logger.addHandler(ch)
+        return self._logger
+
     @property
     def spanner_api(self):
         """Helper for session-related API calls."""
@@ -647,8 +677,13 @@ def __exit__(self, exc_type, exc_val, exc_tb):
         """End ``with`` block."""
         try:
             if exc_type is None:
-                self._batch.commit()
+                self._batch.commit(return_commit_stats=self._database.log_commit_stats)
         finally:
+            if self._database.log_commit_stats and self._batch.commit_stats:
+                self._database.logger.info(
+                    "CommitStats: {}".format(self._batch.commit_stats),
+                    extra={"commit_stats": self._batch.commit_stats},
+                )
             self._database._pool.put(self._session)

diff --git a/google/cloud/spanner_v1/instance.py b/google/cloud/spanner_v1/instance.py
index ffaed41c91..de464efe2e 100644
--- a/google/cloud/spanner_v1/instance.py
+++ b/google/cloud/spanner_v1/instance.py
@@ -357,7 +357,7 @@ def delete(self):

         api.delete_instance(name=self.name, metadata=metadata)

-    def database(self, database_id, ddl_statements=(), pool=None):
+    def database(self, database_id, ddl_statements=(), pool=None, logger=None):
         """Factory to create a database within this instance.

         :type database_id: str
@@ -371,10 +371,18 @@ def database(self, database_id, ddl_statements=(), pool=None):
             :class:`~google.cloud.spanner_v1.pool.AbstractSessionPool`.
         :param pool: (Optional) session pool to be used by database.

+        :type logger: `logging.Logger`
+        :param logger: (Optional) a custom logger that is used if `log_commit_stats`
+                       is `True` to log commit statistics. If not passed, a logger
+                       will be created when needed that will log the commit statistics
+                       to stderr.
+
         :rtype: :class:`~google.cloud.spanner_v1.database.Database`
         :returns: a database owned by this instance.
""" - return Database(database_id, self, ddl_statements=ddl_statements, pool=pool) + return Database( + database_id, self, ddl_statements=ddl_statements, pool=pool, logger=logger + ) def list_databases(self, page_size=None): """List databases for the instance. diff --git a/google/cloud/spanner_v1/session.py b/google/cloud/spanner_v1/session.py index 8b33221cf9..4bec436d7d 100644 --- a/google/cloud/spanner_v1/session.py +++ b/google/cloud/spanner_v1/session.py @@ -349,7 +349,7 @@ def run_in_transaction(self, func, *args, **kw): raise try: - txn.commit() + txn.commit(return_commit_stats=self._database.log_commit_stats) except Aborted as exc: del self._transaction _delay_until_retry(exc, deadline, attempts) @@ -357,6 +357,11 @@ def run_in_transaction(self, func, *args, **kw): del self._transaction raise else: + if self._database.log_commit_stats and txn.commit_stats: + self._database.logger.info( + "CommitStats: {}".format(txn.commit_stats), + extra={"commit_stats": txn.commit_stats}, + ) return return_value diff --git a/google/cloud/spanner_v1/transaction.py b/google/cloud/spanner_v1/transaction.py index 51d5826f41..aa2353206f 100644 --- a/google/cloud/spanner_v1/transaction.py +++ b/google/cloud/spanner_v1/transaction.py @@ -21,6 +21,7 @@ _merge_query_options, _metadata_with_prefix, ) +from google.cloud.spanner_v1 import CommitRequest from google.cloud.spanner_v1 import ExecuteBatchDmlRequest from google.cloud.spanner_v1 import ExecuteSqlRequest from google.cloud.spanner_v1 import TransactionSelector @@ -42,6 +43,7 @@ class Transaction(_SnapshotBase, _BatchBase): committed = None """Timestamp at which the transaction was successfully committed.""" rolled_back = False + commit_stats = None _multi_use = True _execute_sql_count = 0 @@ -119,9 +121,13 @@ def rollback(self): self.rolled_back = True del self._session._transaction - def commit(self): + def commit(self, return_commit_stats=False): """Commit mutations to the database. 
+        :type return_commit_stats: bool
+        :param return_commit_stats:
+            If true, the response will return commit stats which can be accessed through ``commit_stats``.
+
         :rtype: datetime
         :returns: timestamp of the committed changes.
         :raises ValueError: if there are no mutations to commit.
@@ -132,14 +138,17 @@ def commit(self):
         api = database.spanner_api
         metadata = _metadata_with_prefix(database.name)
         trace_attributes = {"num_mutations": len(self._mutations)}
+        request = CommitRequest(
+            session=self._session.name,
+            mutations=self._mutations,
+            transaction_id=self._transaction_id,
+            return_commit_stats=return_commit_stats,
+        )
         with trace_call("CloudSpanner.Commit", self._session, trace_attributes):
-            response = api.commit(
-                session=self._session.name,
-                mutations=self._mutations,
-                transaction_id=self._transaction_id,
-                metadata=metadata,
-            )
+            response = api.commit(request=request, metadata=metadata,)
         self.committed = response.commit_timestamp
+        if return_commit_stats:
+            self.commit_stats = response.commit_stats
         del self._session._transaction
         return self.committed
diff --git a/tests/unit/test_batch.py b/tests/unit/test_batch.py
index 7c87f8a82a..187d44913f 100644
--- a/tests/unit/test_batch.py
+++ b/tests/unit/test_batch.py
@@ -339,17 +339,17 @@ def __init__(self, **kwargs):
         self.__dict__.update(**kwargs)

     def commit(
-        self,
-        session,
-        mutations,
-        transaction_id="",
-        single_use_transaction=None,
-        metadata=None,
+        self, request=None, metadata=None,
     ):
         from google.api_core.exceptions import Unknown

-        assert transaction_id == ""
-        self._committed = (session, mutations, single_use_transaction, metadata)
+        assert request.transaction_id == b""
+        self._committed = (
+            request.session,
+            request.mutations,
+            request.single_use_transaction,
+            metadata,
+        )
         if self._rpc_error:
             raise Unknown("error")
         return self._commit_response
diff --git a/tests/unit/test_database.py b/tests/unit/test_database.py
index a2a5b84b2f..4a7d18e67b 100644
--- a/tests/unit/test_database.py
+++
b/tests/unit/test_database.py @@ -104,6 +104,8 @@ def test_ctor_defaults(self): self.assertIs(database._instance, instance) self.assertEqual(list(database.ddl_statements), []) self.assertIsInstance(database._pool, BurstyPool) + self.assertFalse(database.log_commit_stats) + self.assertIsNone(database._logger) # BurstyPool does not create sessions during 'bind()'. self.assertTrue(database._pool._sessions.empty()) @@ -145,6 +147,18 @@ def test_ctor_w_ddl_statements_ok(self): self.assertIs(database._instance, instance) self.assertEqual(list(database.ddl_statements), DDL_STATEMENTS) + def test_ctor_w_explicit_logger(self): + from logging import Logger + + instance = _Instance(self.INSTANCE_NAME) + logger = mock.create_autospec(Logger, instance=True) + database = self._make_one(self.DATABASE_ID, instance, logger=logger) + self.assertEqual(database.database_id, self.DATABASE_ID) + self.assertIs(database._instance, instance) + self.assertEqual(list(database.ddl_statements), []) + self.assertFalse(database.log_commit_stats) + self.assertEqual(database._logger, logger) + def test_from_pb_bad_database_name(self): from google.cloud.spanner_admin_database_v1 import Database @@ -263,6 +277,24 @@ def test_earliest_version_time(self): earliest_version_time = database._earliest_version_time = self._make_timestamp() self.assertEqual(database.earliest_version_time, earliest_version_time) + def test_logger_property_default(self): + import logging + + instance = _Instance(self.INSTANCE_NAME) + pool = _Pool() + database = self._make_one(self.DATABASE_ID, instance, pool=pool) + logger = logging.getLogger(database.name) + self.assertEqual(database.logger, logger) + + def test_logger_property_custom(self): + import logging + + instance = _Instance(self.INSTANCE_NAME) + pool = _Pool() + database = self._make_one(self.DATABASE_ID, instance, pool=pool) + logger = database._logger = mock.create_autospec(logging.Logger, instance=True) + self.assertEqual(database.logger, logger) + def 
test_spanner_api_property_w_scopeless_creds(self): client = _Client() @@ -1281,6 +1313,7 @@ def test_ctor(self): def test_context_mgr_success(self): import datetime + from google.cloud.spanner_v1 import CommitRequest from google.cloud.spanner_v1 import CommitResponse from google.cloud.spanner_v1 import TransactionOptions from google.cloud._helpers import UTC @@ -1308,12 +1341,97 @@ def test_context_mgr_success(self): expected_txn_options = TransactionOptions(read_write={}) + request = CommitRequest( + session=self.SESSION_NAME, + mutations=[], + single_use_transaction=expected_txn_options, + ) api.commit.assert_called_once_with( + request=request, metadata=[("google-cloud-resource-prefix", database.name)], + ) + + def test_context_mgr_w_commit_stats_success(self): + import datetime + from google.cloud.spanner_v1 import CommitRequest + from google.cloud.spanner_v1 import CommitResponse + from google.cloud.spanner_v1 import TransactionOptions + from google.cloud._helpers import UTC + from google.cloud._helpers import _datetime_to_pb_timestamp + from google.cloud.spanner_v1.batch import Batch + + now = datetime.datetime.utcnow().replace(tzinfo=UTC) + now_pb = _datetime_to_pb_timestamp(now) + commit_stats = CommitResponse.CommitStats(mutation_count=4) + response = CommitResponse(commit_timestamp=now_pb, commit_stats=commit_stats) + database = _Database(self.DATABASE_NAME) + database.log_commit_stats = True + api = database.spanner_api = self._make_spanner_client() + api.commit.return_value = response + pool = database._pool = _Pool() + session = _Session(database) + pool.put(session) + checkout = self._make_one(database) + + with checkout as batch: + self.assertIsNone(pool._session) + self.assertIsInstance(batch, Batch) + self.assertIs(batch._session, session) + + self.assertIs(pool._session, session) + self.assertEqual(batch.committed, now) + + expected_txn_options = TransactionOptions(read_write={}) + + request = CommitRequest( session=self.SESSION_NAME, 
mutations=[], single_use_transaction=expected_txn_options, - metadata=[("google-cloud-resource-prefix", database.name)], + return_commit_stats=True, + ) + api.commit.assert_called_once_with( + request=request, metadata=[("google-cloud-resource-prefix", database.name)], + ) + + database.logger.info.assert_called_once_with( + "CommitStats: mutation_count: 4\n", extra={"commit_stats": commit_stats} + ) + + def test_context_mgr_w_commit_stats_error(self): + from google.api_core.exceptions import Unknown + from google.cloud.spanner_v1 import CommitRequest + from google.cloud.spanner_v1 import TransactionOptions + from google.cloud.spanner_v1.batch import Batch + + database = _Database(self.DATABASE_NAME) + database.log_commit_stats = True + api = database.spanner_api = self._make_spanner_client() + api.commit.side_effect = Unknown("testing") + pool = database._pool = _Pool() + session = _Session(database) + pool.put(session) + checkout = self._make_one(database) + + with self.assertRaises(Unknown): + with checkout as batch: + self.assertIsNone(pool._session) + self.assertIsInstance(batch, Batch) + self.assertIs(batch._session, session) + + self.assertIs(pool._session, session) + + expected_txn_options = TransactionOptions(read_write={}) + + request = CommitRequest( + session=self.SESSION_NAME, + mutations=[], + single_use_transaction=expected_txn_options, + return_commit_stats=True, ) + api.commit.assert_called_once_with( + request=request, metadata=[("google-cloud-resource-prefix", database.name)], + ) + + database.logger.info.assert_not_called() def test_context_mgr_failure(self): from google.cloud.spanner_v1.batch import Batch @@ -1901,10 +2019,15 @@ def __init__(self, name): class _Database(object): + log_commit_stats = False + def __init__(self, name, instance=None): self.name = name self.database_id = name.rsplit("/", 1)[1] self._instance = instance + from logging import Logger + + self.logger = mock.create_autospec(Logger, instance=True) class _Pool(object): diff 
--git a/tests/unit/test_instance.py b/tests/unit/test_instance.py index edd8249c67..c1d02c5728 100644 --- a/tests/unit/test_instance.py +++ b/tests/unit/test_instance.py @@ -484,10 +484,12 @@ def test_database_factory_defaults(self): self.assertIs(database._instance, instance) self.assertEqual(list(database.ddl_statements), []) self.assertIsInstance(database._pool, BurstyPool) + self.assertIsNone(database._logger) pool = database._pool self.assertIs(pool._database, database) def test_database_factory_explicit(self): + from logging import Logger from google.cloud.spanner_v1.database import Database from tests._fixtures import DDL_STATEMENTS @@ -495,9 +497,10 @@ def test_database_factory_explicit(self): instance = self._make_one(self.INSTANCE_ID, client, self.CONFIG_NAME) DATABASE_ID = "database-id" pool = _Pool() + logger = mock.create_autospec(Logger, instance=True) database = instance.database( - DATABASE_ID, ddl_statements=DDL_STATEMENTS, pool=pool + DATABASE_ID, ddl_statements=DDL_STATEMENTS, pool=pool, logger=logger ) self.assertIsInstance(database, Database) @@ -505,6 +508,7 @@ def test_database_factory_explicit(self): self.assertIs(database._instance, instance) self.assertEqual(list(database.ddl_statements), DDL_STATEMENTS) self.assertIs(database._pool, pool) + self.assertIs(database._logger, logger) self.assertIs(pool._bound, database) def test_list_databases(self): diff --git a/tests/unit/test_session.py b/tests/unit/test_session.py index 0a004e3cd0..f80b360b96 100644 --- a/tests/unit/test_session.py +++ b/tests/unit/test_session.py @@ -65,6 +65,7 @@ def _make_database(name=DATABASE_NAME): database = mock.create_autospec(Database, instance=True) database.name = name + database.log_commit_stats = False return database @staticmethod @@ -769,6 +770,7 @@ def unit_of_work(txn, *args, **kw): def test_run_in_transaction_w_args_w_kwargs_wo_abort(self): import datetime + from google.cloud.spanner_v1 import CommitRequest from google.cloud.spanner_v1 import 
CommitResponse from google.cloud.spanner_v1 import ( Transaction as TransactionPB, @@ -820,15 +822,18 @@ def unit_of_work(txn, *args, **kw): options=expected_options, metadata=[("google-cloud-resource-prefix", database.name)], ) - gax_api.commit.assert_called_once_with( + request = CommitRequest( session=self.SESSION_NAME, mutations=txn._mutations, transaction_id=TRANSACTION_ID, - metadata=[("google-cloud-resource-prefix", database.name)], + ) + gax_api.commit.assert_called_once_with( + request=request, metadata=[("google-cloud-resource-prefix", database.name)], ) def test_run_in_transaction_w_commit_error(self): from google.api_core.exceptions import Unknown + from google.cloud.spanner_v1 import CommitRequest from google.cloud.spanner_v1.transaction import Transaction TABLE_NAME = "citizens" @@ -867,16 +872,19 @@ def unit_of_work(txn, *args, **kw): self.assertEqual(kw, {}) gax_api.begin_transaction.assert_not_called() - gax_api.commit.assert_called_once_with( + request = CommitRequest( session=self.SESSION_NAME, mutations=txn._mutations, transaction_id=TRANSACTION_ID, - metadata=[("google-cloud-resource-prefix", database.name)], + ) + gax_api.commit.assert_called_once_with( + request=request, metadata=[("google-cloud-resource-prefix", database.name)], ) def test_run_in_transaction_w_abort_no_retry_metadata(self): import datetime from google.api_core.exceptions import Aborted + from google.cloud.spanner_v1 import CommitRequest from google.cloud.spanner_v1 import CommitResponse from google.cloud.spanner_v1 import ( Transaction as TransactionPB, @@ -934,13 +942,16 @@ def unit_of_work(txn, *args, **kw): ] * 2, ) + request = CommitRequest( + session=self.SESSION_NAME, + mutations=txn._mutations, + transaction_id=TRANSACTION_ID, + ) self.assertEqual( gax_api.commit.call_args_list, [ mock.call( - session=self.SESSION_NAME, - mutations=txn._mutations, - transaction_id=TRANSACTION_ID, + request=request, metadata=[("google-cloud-resource-prefix", database.name)], ) ] @@ 
-952,6 +963,7 @@ def test_run_in_transaction_w_abort_w_retry_metadata(self): from google.api_core.exceptions import Aborted from google.protobuf.duration_pb2 import Duration from google.rpc.error_details_pb2 import RetryInfo + from google.cloud.spanner_v1 import CommitRequest from google.cloud.spanner_v1 import CommitResponse from google.cloud.spanner_v1 import ( Transaction as TransactionPB, @@ -1022,13 +1034,16 @@ def unit_of_work(txn, *args, **kw): ] * 2, ) + request = CommitRequest( + session=self.SESSION_NAME, + mutations=txn._mutations, + transaction_id=TRANSACTION_ID, + ) self.assertEqual( gax_api.commit.call_args_list, [ mock.call( - session=self.SESSION_NAME, - mutations=txn._mutations, - transaction_id=TRANSACTION_ID, + request=request, metadata=[("google-cloud-resource-prefix", database.name)], ) ] @@ -1040,6 +1055,7 @@ def test_run_in_transaction_w_callback_raises_abort_wo_metadata(self): from google.api_core.exceptions import Aborted from google.protobuf.duration_pb2 import Duration from google.rpc.error_details_pb2 import RetryInfo + from google.cloud.spanner_v1 import CommitRequest from google.cloud.spanner_v1 import CommitResponse from google.cloud.spanner_v1 import ( Transaction as TransactionPB, @@ -1110,11 +1126,13 @@ def unit_of_work(txn, *args, **kw): ] * 2, ) - gax_api.commit.assert_called_once_with( + request = CommitRequest( session=self.SESSION_NAME, mutations=txn._mutations, transaction_id=TRANSACTION_ID, - metadata=[("google-cloud-resource-prefix", database.name)], + ) + gax_api.commit.assert_called_once_with( + request=request, metadata=[("google-cloud-resource-prefix", database.name)], ) def test_run_in_transaction_w_abort_w_retry_metadata_deadline(self): @@ -1122,6 +1140,7 @@ def test_run_in_transaction_w_abort_w_retry_metadata_deadline(self): from google.api_core.exceptions import Aborted from google.protobuf.duration_pb2 import Duration from google.rpc.error_details_pb2 import RetryInfo + from google.cloud.spanner_v1 import 
CommitRequest from google.cloud.spanner_v1 import CommitResponse from google.cloud.spanner_v1 import ( Transaction as TransactionPB, @@ -1197,15 +1216,18 @@ def _time(_results=[1, 1.5]): options=expected_options, metadata=[("google-cloud-resource-prefix", database.name)], ) - gax_api.commit.assert_called_once_with( + request = CommitRequest( session=self.SESSION_NAME, mutations=txn._mutations, transaction_id=TRANSACTION_ID, - metadata=[("google-cloud-resource-prefix", database.name)], + ) + gax_api.commit.assert_called_once_with( + request=request, metadata=[("google-cloud-resource-prefix", database.name)], ) def test_run_in_transaction_w_timeout(self): from google.api_core.exceptions import Aborted + from google.cloud.spanner_v1 import CommitRequest from google.cloud.spanner_v1 import ( Transaction as TransactionPB, TransactionOptions, @@ -1275,19 +1297,151 @@ def _time(_results=[1, 2, 4, 8]): ] * 3, ) + request = CommitRequest( + session=self.SESSION_NAME, + mutations=txn._mutations, + transaction_id=TRANSACTION_ID, + ) self.assertEqual( gax_api.commit.call_args_list, [ mock.call( - session=self.SESSION_NAME, - mutations=txn._mutations, - transaction_id=TRANSACTION_ID, + request=request, metadata=[("google-cloud-resource-prefix", database.name)], ) ] * 3, ) + def test_run_in_transaction_w_commit_stats_success(self): + import datetime + from google.cloud.spanner_v1 import CommitRequest + from google.cloud.spanner_v1 import CommitResponse + from google.cloud.spanner_v1 import ( + Transaction as TransactionPB, + TransactionOptions, + ) + from google.cloud._helpers import UTC + from google.cloud._helpers import _datetime_to_pb_timestamp + from google.cloud.spanner_v1.transaction import Transaction + + TABLE_NAME = "citizens" + COLUMNS = ["email", "first_name", "last_name", "age"] + VALUES = [ + ["phred@exammple.com", "Phred", "Phlyntstone", 32], + ["bharney@example.com", "Bharney", "Rhubble", 31], + ] + TRANSACTION_ID = b"FACEDACE" + transaction_pb = 
TransactionPB(id=TRANSACTION_ID) + now = datetime.datetime.utcnow().replace(tzinfo=UTC) + now_pb = _datetime_to_pb_timestamp(now) + commit_stats = CommitResponse.CommitStats(mutation_count=4) + response = CommitResponse(commit_timestamp=now_pb, commit_stats=commit_stats) + gax_api = self._make_spanner_api() + gax_api.begin_transaction.return_value = transaction_pb + gax_api.commit.return_value = response + database = self._make_database() + database.log_commit_stats = True + database.spanner_api = gax_api + session = self._make_one(database) + session._session_id = self.SESSION_ID + + called_with = [] + + def unit_of_work(txn, *args, **kw): + called_with.append((txn, args, kw)) + txn.insert(TABLE_NAME, COLUMNS, VALUES) + return 42 + + return_value = session.run_in_transaction(unit_of_work, "abc", some_arg="def") + + self.assertIsNone(session._transaction) + self.assertEqual(len(called_with), 1) + txn, args, kw = called_with[0] + self.assertIsInstance(txn, Transaction) + self.assertEqual(return_value, 42) + self.assertEqual(args, ("abc",)) + self.assertEqual(kw, {"some_arg": "def"}) + + expected_options = TransactionOptions(read_write=TransactionOptions.ReadWrite()) + gax_api.begin_transaction.assert_called_once_with( + session=self.SESSION_NAME, + options=expected_options, + metadata=[("google-cloud-resource-prefix", database.name)], + ) + request = CommitRequest( + session=self.SESSION_NAME, + mutations=txn._mutations, + transaction_id=TRANSACTION_ID, + return_commit_stats=True, + ) + gax_api.commit.assert_called_once_with( + request=request, metadata=[("google-cloud-resource-prefix", database.name)], + ) + database.logger.info.assert_called_once_with( + "CommitStats: mutation_count: 4\n", extra={"commit_stats": commit_stats} + ) + + def test_run_in_transaction_w_commit_stats_error(self): + from google.api_core.exceptions import Unknown + from google.cloud.spanner_v1 import CommitRequest + from google.cloud.spanner_v1 import ( + Transaction as TransactionPB, + 
TransactionOptions, + ) + from google.cloud.spanner_v1.transaction import Transaction + + TABLE_NAME = "citizens" + COLUMNS = ["email", "first_name", "last_name", "age"] + VALUES = [ + ["phred@exammple.com", "Phred", "Phlyntstone", 32], + ["bharney@example.com", "Bharney", "Rhubble", 31], + ] + TRANSACTION_ID = b"FACEDACE" + transaction_pb = TransactionPB(id=TRANSACTION_ID) + gax_api = self._make_spanner_api() + gax_api.begin_transaction.return_value = transaction_pb + gax_api.commit.side_effect = Unknown("testing") + database = self._make_database() + database.log_commit_stats = True + database.spanner_api = gax_api + session = self._make_one(database) + session._session_id = self.SESSION_ID + + called_with = [] + + def unit_of_work(txn, *args, **kw): + called_with.append((txn, args, kw)) + txn.insert(TABLE_NAME, COLUMNS, VALUES) + return 42 + + with self.assertRaises(Unknown): + session.run_in_transaction(unit_of_work, "abc", some_arg="def") + + self.assertIsNone(session._transaction) + self.assertEqual(len(called_with), 1) + txn, args, kw = called_with[0] + self.assertIsInstance(txn, Transaction) + self.assertEqual(args, ("abc",)) + self.assertEqual(kw, {"some_arg": "def"}) + + expected_options = TransactionOptions(read_write=TransactionOptions.ReadWrite()) + gax_api.begin_transaction.assert_called_once_with( + session=self.SESSION_NAME, + options=expected_options, + metadata=[("google-cloud-resource-prefix", database.name)], + ) + request = CommitRequest( + session=self.SESSION_NAME, + mutations=txn._mutations, + transaction_id=TRANSACTION_ID, + return_commit_stats=True, + ) + gax_api.commit.assert_called_once_with( + request=request, metadata=[("google-cloud-resource-prefix", database.name)], + ) + database.logger.info.assert_not_called() + def test_delay_helper_w_no_delay(self): from google.cloud.spanner_v1.session import _delay_until_retry diff --git a/tests/unit/test_transaction.py b/tests/unit/test_transaction.py index 2c3b45a664..4dc56bfa06 100644 --- 
a/tests/unit/test_transaction.py +++ b/tests/unit/test_transaction.py @@ -309,7 +309,7 @@ def test_commit_w_other_error(self): attributes=dict(TestTransaction.BASE_ATTRIBUTES, num_mutations=1), ) - def _commit_helper(self, mutate=True): + def _commit_helper(self, mutate=True, return_commit_stats=False): import datetime from google.cloud.spanner_v1 import CommitResponse from google.cloud.spanner_v1.keyset import KeySet @@ -319,6 +319,8 @@ def _commit_helper(self, mutate=True): keys = [[0], [1], [2]] keyset = KeySet(keys=keys) response = CommitResponse(commit_timestamp=now) + if return_commit_stats: + response.commit_stats.mutation_count = 4 database = _Database() api = database.spanner_api = _FauxSpannerAPI(_commit_response=response) session = _Session(database) @@ -328,7 +330,7 @@ def _commit_helper(self, mutate=True): if mutate: transaction.delete(TABLE_NAME, keyset) - transaction.commit() + transaction.commit(return_commit_stats=return_commit_stats) self.assertEqual(transaction.committed, now) self.assertIsNone(session._transaction) @@ -339,6 +341,9 @@ def _commit_helper(self, mutate=True): self.assertEqual(mutations, transaction._mutations) self.assertEqual(metadata, [("google-cloud-resource-prefix", database.name)]) + if return_commit_stats: + self.assertEqual(transaction.commit_stats.mutation_count, 4) + self.assertSpanAttributes( "CloudSpanner.Commit", attributes=dict( @@ -353,6 +358,9 @@ def test_commit_no_mutations(self): def test_commit_w_mutations(self): self._commit_helper(mutate=True) + def test_commit_w_return_commit_stats(self): + self._commit_helper(return_commit_stats=True) + def test__make_params_pb_w_params_wo_param_types(self): session = _Session() transaction = self._make_one(session) @@ -719,13 +727,13 @@ def rollback(self, session=None, transaction_id=None, metadata=None): return self._rollback_response def commit( - self, - session=None, - mutations=None, - transaction_id="", - single_use_transaction=None, - metadata=None, + self, 
request=None, metadata=None, ): - assert single_use_transaction is None - self._committed = (session, mutations, transaction_id, metadata) + assert not request.single_use_transaction + self._committed = ( + request.session, + request.mutations, + request.transaction_id, + metadata, + ) return self._commit_response From 3e35d4a0217081bcab4ee31b642cd3bff5e6f4b5 Mon Sep 17 00:00:00 2001 From: larkee <31196561+larkee@users.noreply.github.com> Date: Tue, 23 Feb 2021 16:58:17 +1100 Subject: [PATCH 15/16] perf: improve streaming performance (#240) * perf: improve streaming performance by using raw pbs * refactor: remove unused import Co-authored-by: larkee --- google/cloud/spanner_v1/_helpers.py | 77 +++++----- google/cloud/spanner_v1/streamed.py | 73 +++++----- tests/unit/test__helpers.py | 186 ------------------------ tests/unit/test_snapshot.py | 6 +- tests/unit/test_streamed.py | 214 +++++++++++++++------------- 5 files changed, 191 insertions(+), 365 deletions(-) diff --git a/google/cloud/spanner_v1/_helpers.py b/google/cloud/spanner_v1/_helpers.py index 79a387eac6..0f56431cb3 100644 --- a/google/cloud/spanner_v1/_helpers.py +++ b/google/cloud/spanner_v1/_helpers.py @@ -161,41 +161,6 @@ def _make_list_value_pbs(values): # pylint: disable=too-many-branches -def _parse_value(value, field_type): - if value is None: - return None - if field_type.code == TypeCode.STRING: - result = value - elif field_type.code == TypeCode.BYTES: - result = value.encode("utf8") - elif field_type.code == TypeCode.BOOL: - result = value - elif field_type.code == TypeCode.INT64: - result = int(value) - elif field_type.code == TypeCode.FLOAT64: - if isinstance(value, str): - result = float(value) - else: - result = value - elif field_type.code == TypeCode.DATE: - result = _date_from_iso8601_date(value) - elif field_type.code == TypeCode.TIMESTAMP: - DatetimeWithNanoseconds = datetime_helpers.DatetimeWithNanoseconds - result = DatetimeWithNanoseconds.from_rfc3339(value) - elif field_type.code 
== TypeCode.ARRAY: - result = [_parse_value(item, field_type.array_element_type) for item in value] - elif field_type.code == TypeCode.STRUCT: - result = [ - _parse_value(item, field_type.struct_type.fields[i].type_) - for (i, item) in enumerate(value) - ] - elif field_type.code == TypeCode.NUMERIC: - result = decimal.Decimal(value) - else: - raise ValueError("Unknown type: %s" % (field_type,)) - return result - - def _parse_value_pb(value_pb, field_type): """Convert a Value protobuf to cell data. @@ -209,17 +174,41 @@ def _parse_value_pb(value_pb, field_type): :returns: value extracted from value_pb :raises ValueError: if unknown type is passed """ + type_code = field_type.code if value_pb.HasField("null_value"): return None - if value_pb.HasField("string_value"): - return _parse_value(value_pb.string_value, field_type) - if value_pb.HasField("bool_value"): - return _parse_value(value_pb.bool_value, field_type) - if value_pb.HasField("number_value"): - return _parse_value(value_pb.number_value, field_type) - if value_pb.HasField("list_value"): - return _parse_value(value_pb.list_value, field_type) - raise ValueError("No value set in Value: %s" % (value_pb,)) + if type_code == TypeCode.STRING: + return value_pb.string_value + elif type_code == TypeCode.BYTES: + return value_pb.string_value.encode("utf8") + elif type_code == TypeCode.BOOL: + return value_pb.bool_value + elif type_code == TypeCode.INT64: + return int(value_pb.string_value) + elif type_code == TypeCode.FLOAT64: + if value_pb.HasField("string_value"): + return float(value_pb.string_value) + else: + return value_pb.number_value + elif type_code == TypeCode.DATE: + return _date_from_iso8601_date(value_pb.string_value) + elif type_code == TypeCode.TIMESTAMP: + DatetimeWithNanoseconds = datetime_helpers.DatetimeWithNanoseconds + return DatetimeWithNanoseconds.from_rfc3339(value_pb.string_value) + elif type_code == TypeCode.ARRAY: + return [ + _parse_value_pb(item_pb, field_type.array_element_type) + for 
item_pb in value_pb.list_value.values + ] + elif type_code == TypeCode.STRUCT: + return [ + _parse_value_pb(item_pb, field_type.struct_type.fields[i].type_) + for (i, item_pb) in enumerate(value_pb.list_value.values) + ] + elif type_code == TypeCode.NUMERIC: + return decimal.Decimal(value_pb.string_value) + else: + raise ValueError("Unknown type: %s" % (field_type,)) # pylint: enable=too-many-branches diff --git a/google/cloud/spanner_v1/streamed.py b/google/cloud/spanner_v1/streamed.py index a8b15a8f2b..ec4cb97b9d 100644 --- a/google/cloud/spanner_v1/streamed.py +++ b/google/cloud/spanner_v1/streamed.py @@ -14,12 +14,15 @@ """Wrapper for streaming results.""" +from google.protobuf.struct_pb2 import ListValue +from google.protobuf.struct_pb2 import Value from google.cloud import exceptions +from google.cloud.spanner_v1 import PartialResultSet from google.cloud.spanner_v1 import TypeCode import six # pylint: disable=ungrouped-imports -from google.cloud.spanner_v1._helpers import _parse_value +from google.cloud.spanner_v1._helpers import _parse_value_pb # pylint: enable=ungrouped-imports @@ -88,7 +91,7 @@ def _merge_chunk(self, value): field = self.fields[current_column] merged = _merge_by_type(self._pending_chunk, value, field.type_) self._pending_chunk = None - return _parse_value(merged, field.type_) + return merged def _merge_values(self, values): """Merge values into rows. @@ -96,14 +99,16 @@ def _merge_values(self, values): :type values: list of :class:`~google.protobuf.struct_pb2.Value` :param values: non-chunked values from partial result set. 
""" - width = len(self.fields) + field_types = [field.type_ for field in self.fields] + width = len(field_types) + index = len(self._current_row) for value in values: - index = len(self._current_row) - field = self.fields[index] - self._current_row.append(_parse_value(value, field.type_)) - if len(self._current_row) == width: + self._current_row.append(_parse_value_pb(value, field_types[index])) + index += 1 + if index == width: self._rows.append(self._current_row) self._current_row = [] + index = 0 def _consume_next(self): """Consume the next partial result set from the stream. @@ -111,6 +117,7 @@ def _consume_next(self): Parse the result set into new/existing rows in :attr:`_rows` """ response = six.next(self._response_iterator) + response_pb = PartialResultSet.pb(response) if self._metadata is None: # first response metadata = self._metadata = response.metadata @@ -119,29 +126,27 @@ def _consume_next(self): if source is not None and source._transaction_id is None: source._transaction_id = metadata.transaction.id - if "stats" in response: # last response + if response_pb.HasField("stats"): # last response self._stats = response.stats - values = list(response.values) + values = list(response_pb.values) if self._pending_chunk is not None: values[0] = self._merge_chunk(values[0]) - if response.chunked_value: + if response_pb.chunked_value: self._pending_chunk = values.pop() self._merge_values(values) def __iter__(self): - iter_rows, self._rows[:] = self._rows[:], () while True: - if not iter_rows: - try: - self._consume_next() - except StopIteration: - return - iter_rows, self._rows[:] = self._rows[:], () + iter_rows, self._rows[:] = self._rows[:], () while iter_rows: yield iter_rows.pop(0) + try: + self._consume_next() + except StopIteration: + return def one(self): """Return exactly one result, or raise an exception. 
@@ -213,9 +218,15 @@ def _unmergeable(lhs, rhs, type_): def _merge_float64(lhs, rhs, type_): # pylint: disable=unused-argument """Helper for '_merge_by_type'.""" - if type(lhs) == str: - return float(lhs + rhs) - array_continuation = type(lhs) == float and type(rhs) == str and rhs == "" + lhs_kind = lhs.WhichOneof("kind") + if lhs_kind == "string_value": + return Value(string_value=lhs.string_value + rhs.string_value) + rhs_kind = rhs.WhichOneof("kind") + array_continuation = ( + lhs_kind == "number_value" + and rhs_kind == "string_value" + and rhs.string_value == "" + ) if array_continuation: return lhs raise Unmergeable(lhs, rhs, type_) @@ -223,7 +234,7 @@ def _merge_float64(lhs, rhs, type_): # pylint: disable=unused-argument def _merge_string(lhs, rhs, type_): # pylint: disable=unused-argument """Helper for '_merge_by_type'.""" - return str(lhs) + str(rhs) + return Value(string_value=lhs.string_value + rhs.string_value) _UNMERGEABLE_TYPES = (TypeCode.BOOL,) @@ -234,17 +245,17 @@ def _merge_array(lhs, rhs, type_): element_type = type_.array_element_type if element_type.code in _UNMERGEABLE_TYPES: # Individual values cannot be merged, just concatenate - lhs.extend(rhs) + lhs.list_value.values.extend(rhs.list_value.values) return lhs + lhs, rhs = list(lhs.list_value.values), list(rhs.list_value.values) # Sanity check: If either list is empty, short-circuit. # This is effectively a no-op. 
if not len(lhs) or not len(rhs): - lhs.extend(rhs) - return lhs + return Value(list_value=ListValue(values=(lhs + rhs))) first = rhs.pop(0) - if first is None: # can't merge + if first.HasField("null_value"): # can't merge lhs.append(first) else: last = lhs.pop() @@ -255,23 +266,22 @@ def _merge_array(lhs, rhs, type_): lhs.append(first) else: lhs.append(merged) - lhs.extend(rhs) - return lhs + return Value(list_value=ListValue(values=(lhs + rhs))) def _merge_struct(lhs, rhs, type_): """Helper for '_merge_by_type'.""" fields = type_.struct_type.fields + lhs, rhs = list(lhs.list_value.values), list(rhs.list_value.values) # Sanity check: If either list is empty, short-circuit. # This is effectively a no-op. if not len(lhs) or not len(rhs): - lhs.extend(rhs) - return lhs + return Value(list_value=ListValue(values=(lhs + rhs))) candidate_type = fields[len(lhs) - 1].type_ first = rhs.pop(0) - if first is None or candidate_type.code in _UNMERGEABLE_TYPES: + if first.HasField("null_value") or candidate_type.code in _UNMERGEABLE_TYPES: lhs.append(first) else: last = lhs.pop() @@ -282,8 +292,7 @@ def _merge_struct(lhs, rhs, type_): lhs.append(first) else: lhs.append(merged) - lhs.extend(rhs) - return lhs + return Value(list_value=ListValue(values=lhs + rhs)) _MERGE_BY_TYPE = { diff --git a/tests/unit/test__helpers.py b/tests/unit/test__helpers.py index d554f3f717..fecf2581de 100644 --- a/tests/unit/test__helpers.py +++ b/tests/unit/test__helpers.py @@ -146,13 +146,6 @@ def test_w_float(self): self.assertIsInstance(value_pb, Value) self.assertEqual(value_pb.number_value, 3.14159) - def test_w_float_str(self): - from google.protobuf.struct_pb2 import Value - - value_pb = self._callFUT(3.14159) - self.assertIsInstance(value_pb, Value) - self.assertEqual(value_pb.number_value, 3.14159) - def test_w_float_nan(self): from google.protobuf.struct_pb2 import Value @@ -309,174 +302,6 @@ def test_w_multiple_values(self): self.assertEqual(found.values[1].string_value, expected[1]) 
-class Test_parse_value(unittest.TestCase): - def _callFUT(self, *args, **kw): - from google.cloud.spanner_v1._helpers import _parse_value - - return _parse_value(*args, **kw) - - def test_w_null(self): - from google.cloud.spanner_v1 import Type - from google.cloud.spanner_v1 import TypeCode - - field_type = Type(code=TypeCode.STRING) - value = expected_value = None - - self.assertEqual(self._callFUT(value, field_type), expected_value) - - def test_w_string(self): - from google.cloud.spanner_v1 import Type - from google.cloud.spanner_v1 import TypeCode - - field_type = Type(code=TypeCode.STRING) - value = expected_value = u"Value" - - self.assertEqual(self._callFUT(value, field_type), expected_value) - - def test_w_bytes(self): - from google.cloud.spanner_v1 import Type - from google.cloud.spanner_v1 import TypeCode - - field_type = Type(code=TypeCode.BYTES) - value = "Value" - expected_value = b"Value" - - self.assertEqual(self._callFUT(value, field_type), expected_value) - - def test_w_bool(self): - from google.cloud.spanner_v1 import Type - from google.cloud.spanner_v1 import TypeCode - - field_type = Type(code=TypeCode.BOOL) - value = expected_value = True - - self.assertEqual(self._callFUT(value, field_type), expected_value) - - def test_w_int(self): - from google.cloud.spanner_v1 import Type - from google.cloud.spanner_v1 import TypeCode - - field_type = Type(code=TypeCode.INT64) - value = "12345" - expected_value = 12345 - - self.assertEqual(self._callFUT(value, field_type), expected_value) - - def test_w_float(self): - from google.cloud.spanner_v1 import Type - from google.cloud.spanner_v1 import TypeCode - - field_type = Type(code=TypeCode.FLOAT64) - value = "3.14159" - expected_value = 3.14159 - - self.assertEqual(self._callFUT(value, field_type), expected_value) - - def test_w_date(self): - import datetime - from google.cloud.spanner_v1 import Type - from google.cloud.spanner_v1 import TypeCode - - value = "2020-09-22" - expected_value = 
datetime.date(2020, 9, 22) - field_type = Type(code=TypeCode.DATE) - - self.assertEqual(self._callFUT(value, field_type), expected_value) - - def test_w_timestamp_wo_nanos(self): - import pytz - from google.api_core import datetime_helpers - from google.cloud.spanner_v1 import Type - from google.cloud.spanner_v1 import TypeCode - - field_type = Type(code=TypeCode.TIMESTAMP) - value = "2016-12-20T21:13:47.123456Z" - expected_value = datetime_helpers.DatetimeWithNanoseconds( - 2016, 12, 20, 21, 13, 47, microsecond=123456, tzinfo=pytz.UTC - ) - - parsed = self._callFUT(value, field_type) - self.assertIsInstance(parsed, datetime_helpers.DatetimeWithNanoseconds) - self.assertEqual(parsed, expected_value) - - def test_w_timestamp_w_nanos(self): - import pytz - from google.api_core import datetime_helpers - from google.cloud.spanner_v1 import Type - from google.cloud.spanner_v1 import TypeCode - - field_type = Type(code=TypeCode.TIMESTAMP) - value = "2016-12-20T21:13:47.123456789Z" - expected_value = datetime_helpers.DatetimeWithNanoseconds( - 2016, 12, 20, 21, 13, 47, nanosecond=123456789, tzinfo=pytz.UTC - ) - - parsed = self._callFUT(value, field_type) - self.assertIsInstance(parsed, datetime_helpers.DatetimeWithNanoseconds) - self.assertEqual(parsed, expected_value) - - def test_w_array_empty(self): - from google.cloud.spanner_v1 import Type - from google.cloud.spanner_v1 import TypeCode - - field_type = Type( - code=TypeCode.ARRAY, array_element_type=Type(code=TypeCode.INT64) - ) - value = [] - - self.assertEqual(self._callFUT(value, field_type), []) - - def test_w_array_non_empty(self): - from google.cloud.spanner_v1 import Type - from google.cloud.spanner_v1 import TypeCode - - field_type = Type( - code=TypeCode.ARRAY, array_element_type=Type(code=TypeCode.INT64) - ) - values = ["32", "19", "5"] - expected_values = [32, 19, 5] - - self.assertEqual(self._callFUT(values, field_type), expected_values) - - def test_w_struct(self): - from google.cloud.spanner_v1 import 
Type - from google.cloud.spanner_v1 import StructType - from google.cloud.spanner_v1 import TypeCode - - struct_type_pb = StructType( - fields=[ - StructType.Field(name="name", type_=Type(code=TypeCode.STRING)), - StructType.Field(name="age", type_=Type(code=TypeCode.INT64)), - ] - ) - field_type = Type(code=TypeCode.STRUCT, struct_type=struct_type_pb) - values = [u"phred", "32"] - expected_values = [u"phred", 32] - - self.assertEqual(self._callFUT(values, field_type), expected_values) - - def test_w_numeric(self): - import decimal - from google.cloud.spanner_v1 import Type - from google.cloud.spanner_v1 import TypeCode - - field_type = Type(code=TypeCode.NUMERIC) - expected_value = decimal.Decimal("99999999999999999999999999999.999999999") - value = "99999999999999999999999999999.999999999" - - self.assertEqual(self._callFUT(value, field_type), expected_value) - - def test_w_unknown_type(self): - from google.cloud.spanner_v1 import Type - from google.cloud.spanner_v1 import TypeCode - - field_type = Type(code=TypeCode.TYPE_CODE_UNSPECIFIED) - value_pb = object() - - with self.assertRaises(ValueError): - self._callFUT(value_pb, field_type) - - class Test_parse_value_pb(unittest.TestCase): def _callFUT(self, *args, **kw): from google.cloud.spanner_v1._helpers import _parse_value_pb @@ -676,17 +501,6 @@ def test_w_unknown_type(self): with self.assertRaises(ValueError): self._callFUT(value_pb, field_type) - def test_w_empty_value(self): - from google.protobuf.struct_pb2 import Value - from google.cloud.spanner_v1 import Type - from google.cloud.spanner_v1 import TypeCode - - field_type = Type(code=TypeCode.STRING) - value_pb = Value() - - with self.assertRaises(ValueError): - self._callFUT(value_pb, field_type) - class Test_parse_list_value_pbs(unittest.TestCase): def _callFUT(self, *args, **kw): diff --git a/tests/unit/test_snapshot.py b/tests/unit/test_snapshot.py index 5250e41c95..2305937204 100644 --- a/tests/unit/test_snapshot.py +++ b/tests/unit/test_snapshot.py 
@@ -393,6 +393,7 @@ def _read_helper(self, multi_use, first=True, count=0, partition=None): from google.cloud.spanner_v1._helpers import _make_value_pb VALUES = [[u"bharney", 31], [u"phred", 32]] + VALUE_PBS = [[_make_value_pb(item) for item in row] for row in VALUES] struct_type_pb = StructType( fields=[ StructType.Field(name="name", type_=Type(code=TypeCode.STRING)), @@ -408,7 +409,7 @@ def _read_helper(self, multi_use, first=True, count=0, partition=None): PartialResultSet(stats=stats_pb), ] for i in range(len(result_sets)): - result_sets[i].values.extend(VALUES[i]) + result_sets[i].values.extend(VALUE_PBS[i]) KEYS = [["bharney@example.com"], ["phred@example.com"]] keyset = KeySet(keys=KEYS) INDEX = "email-address-index" @@ -561,6 +562,7 @@ def _execute_sql_helper( ) VALUES = [[u"bharney", u"rhubbyl", 31], [u"phred", u"phlyntstone", 32]] + VALUE_PBS = [[_make_value_pb(item) for item in row] for row in VALUES] MODE = 2 # PROFILE struct_type_pb = StructType( fields=[ @@ -578,7 +580,7 @@ def _execute_sql_helper( PartialResultSet(stats=stats_pb), ] for i in range(len(result_sets)): - result_sets[i].values.extend(VALUES[i]) + result_sets[i].values.extend(VALUE_PBS[i]) iterator = _MockIterator(*result_sets) database = _Database() api = database.spanner_api = self._make_spanner_api() diff --git a/tests/unit/test_streamed.py b/tests/unit/test_streamed.py index 4a31c5d179..63f3bf81fe 100644 --- a/tests/unit/test_streamed.py +++ b/tests/unit/test_streamed.py @@ -89,6 +89,16 @@ def _make_value(value): return _make_value_pb(value) + @staticmethod + def _make_list_value(values=(), value_pbs=None): + from google.protobuf.struct_pb2 import ListValue + from google.protobuf.struct_pb2 import Value + from google.cloud.spanner_v1._helpers import _make_list_value_pb + + if value_pbs is not None: + return Value(list_value=ListValue(values=value_pbs)) + return Value(list_value=_make_list_value_pb(values)) + @staticmethod def _make_result_set_metadata(fields=(), transaction_id=None): 
from google.cloud.spanner_v1 import ResultSetMetadata @@ -161,26 +171,25 @@ def test__merge_chunk_int64(self): streamed = self._make_one(iterator) FIELDS = [self._make_scalar_field("age", TypeCode.INT64)] streamed._metadata = self._make_result_set_metadata(FIELDS) - streamed._pending_chunk = 42 - chunk = 13 + streamed._pending_chunk = self._make_value(42) + chunk = self._make_value(13) merged = streamed._merge_chunk(chunk) - self.assertEqual(merged, 4213) + self.assertEqual(merged.string_value, "4213") self.assertIsNone(streamed._pending_chunk) def test__merge_chunk_float64_nan_string(self): from google.cloud.spanner_v1 import TypeCode - from math import isnan iterator = _MockCancellableIterator() streamed = self._make_one(iterator) FIELDS = [self._make_scalar_field("weight", TypeCode.FLOAT64)] streamed._metadata = self._make_result_set_metadata(FIELDS) - streamed._pending_chunk = u"Na" - chunk = u"N" + streamed._pending_chunk = self._make_value(u"Na") + chunk = self._make_value(u"N") merged = streamed._merge_chunk(chunk) - self.assertTrue(isnan(merged)) + self.assertEqual(merged.string_value, u"NaN") def test__merge_chunk_float64_w_empty(self): from google.cloud.spanner_v1 import TypeCode @@ -189,11 +198,11 @@ def test__merge_chunk_float64_w_empty(self): streamed = self._make_one(iterator) FIELDS = [self._make_scalar_field("weight", TypeCode.FLOAT64)] streamed._metadata = self._make_result_set_metadata(FIELDS) - streamed._pending_chunk = 3.14159 - chunk = "" + streamed._pending_chunk = self._make_value(3.14159) + chunk = self._make_value("") merged = streamed._merge_chunk(chunk) - self.assertEqual(merged, 3.14159) + self.assertEqual(merged.number_value, 3.14159) def test__merge_chunk_float64_w_float64(self): from google.cloud.spanner_v1.streamed import Unmergeable @@ -203,8 +212,8 @@ def test__merge_chunk_float64_w_float64(self): streamed = self._make_one(iterator) FIELDS = [self._make_scalar_field("weight", TypeCode.FLOAT64)] streamed._metadata = 
self._make_result_set_metadata(FIELDS) - streamed._pending_chunk = 3.14159 - chunk = 2.71828 + streamed._pending_chunk = self._make_value(3.14159) + chunk = self._make_value(2.71828) with self.assertRaises(Unmergeable): streamed._merge_chunk(chunk) @@ -216,12 +225,12 @@ def test__merge_chunk_string(self): streamed = self._make_one(iterator) FIELDS = [self._make_scalar_field("name", TypeCode.STRING)] streamed._metadata = self._make_result_set_metadata(FIELDS) - streamed._pending_chunk = u"phred" - chunk = u"wylma" + streamed._pending_chunk = self._make_value(u"phred") + chunk = self._make_value(u"wylma") merged = streamed._merge_chunk(chunk) - self.assertEqual(merged, u"phredwylma") + self.assertEqual(merged.string_value, u"phredwylma") self.assertIsNone(streamed._pending_chunk) def test__merge_chunk_string_w_bytes(self): @@ -231,11 +240,11 @@ def test__merge_chunk_string_w_bytes(self): streamed = self._make_one(iterator) FIELDS = [self._make_scalar_field("image", TypeCode.BYTES)] streamed._metadata = self._make_result_set_metadata(FIELDS) - streamed._pending_chunk = ( + streamed._pending_chunk = self._make_value( u"iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAAAAAA" u"6fptVAAAACXBIWXMAAAsTAAALEwEAmpwYAAAA\n" ) - chunk = ( + chunk = self._make_value( u"B3RJTUUH4QQGFwsBTL3HMwAAABJpVFh0Q29tbWVudAAAAAAAU0FNUExF" u"MG3E+AAAAApJREFUCNdj\nYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\n" ) @@ -243,10 +252,10 @@ def test__merge_chunk_string_w_bytes(self): merged = streamed._merge_chunk(chunk) self.assertEqual( - merged, - b"iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAAAAAA6fptVAAAACXBIWXMAAAsTAAAL" - b"EwEAmpwYAAAA\nB3RJTUUH4QQGFwsBTL3HMwAAABJpVFh0Q29tbWVudAAAAAAAU0" - b"FNUExFMG3E+AAAAApJREFUCNdj\nYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\n", + merged.string_value, + u"iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAAAAAA6fptVAAAACXBIWXMAAAsTAAAL" + u"EwEAmpwYAAAA\nB3RJTUUH4QQGFwsBTL3HMwAAABJpVFh0Q29tbWVudAAAAAAAU0" + u"FNUExFMG3E+AAAAApJREFUCNdj\nYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\n", ) self.assertIsNone(streamed._pending_chunk) 
@@ -257,12 +266,12 @@ def test__merge_chunk_array_of_bool(self): streamed = self._make_one(iterator) FIELDS = [self._make_array_field("name", element_type_code=TypeCode.BOOL)] streamed._metadata = self._make_result_set_metadata(FIELDS) - streamed._pending_chunk = [True, True] - chunk = [False, False, False] + streamed._pending_chunk = self._make_list_value([True, True]) + chunk = self._make_list_value([False, False, False]) merged = streamed._merge_chunk(chunk) - expected = [True, True, False, False, False] + expected = self._make_list_value([True, True, False, False, False]) self.assertEqual(merged, expected) self.assertIsNone(streamed._pending_chunk) @@ -273,12 +282,12 @@ def test__merge_chunk_array_of_int(self): streamed = self._make_one(iterator) FIELDS = [self._make_array_field("name", element_type_code=TypeCode.INT64)] streamed._metadata = self._make_result_set_metadata(FIELDS) - streamed._pending_chunk = [0, 1, 2] - chunk = [3, 4, 5] + streamed._pending_chunk = self._make_list_value([0, 1, 2]) + chunk = self._make_list_value([3, 4, 5]) merged = streamed._merge_chunk(chunk) - expected = [0, 1, 23, 4, 5] + expected = self._make_list_value([0, 1, 23, 4, 5]) self.assertEqual(merged, expected) self.assertIsNone(streamed._pending_chunk) @@ -294,12 +303,12 @@ def test__merge_chunk_array_of_float(self): streamed = self._make_one(iterator) FIELDS = [self._make_array_field("name", element_type_code=TypeCode.FLOAT64)] streamed._metadata = self._make_result_set_metadata(FIELDS) - streamed._pending_chunk = [PI, SQRT_2] - chunk = ["", EULER, LOG_10] + streamed._pending_chunk = self._make_list_value([PI, SQRT_2]) + chunk = self._make_list_value(["", EULER, LOG_10]) merged = streamed._merge_chunk(chunk) - expected = [PI, SQRT_2, EULER, LOG_10] + expected = self._make_list_value([PI, SQRT_2, EULER, LOG_10]) self.assertEqual(merged, expected) self.assertIsNone(streamed._pending_chunk) @@ -310,12 +319,12 @@ def test__merge_chunk_array_of_string_with_empty(self): streamed = 
self._make_one(iterator) FIELDS = [self._make_array_field("name", element_type_code=TypeCode.STRING)] streamed._metadata = self._make_result_set_metadata(FIELDS) - streamed._pending_chunk = [u"A", u"B", u"C"] - chunk = [] + streamed._pending_chunk = self._make_list_value([u"A", u"B", u"C"]) + chunk = self._make_list_value([]) merged = streamed._merge_chunk(chunk) - expected = [u"A", u"B", u"C"] + expected = self._make_list_value([u"A", u"B", u"C"]) self.assertEqual(merged, expected) self.assertIsNone(streamed._pending_chunk) @@ -326,12 +335,12 @@ def test__merge_chunk_array_of_string(self): streamed = self._make_one(iterator) FIELDS = [self._make_array_field("name", element_type_code=TypeCode.STRING)] streamed._metadata = self._make_result_set_metadata(FIELDS) - streamed._pending_chunk = [u"A", u"B", u"C"] - chunk = [None, u"D", u"E"] + streamed._pending_chunk = self._make_list_value([u"A", u"B", u"C"]) + chunk = self._make_list_value([None, u"D", u"E"]) merged = streamed._merge_chunk(chunk) - expected = [u"A", u"B", u"C", None, u"D", u"E"] + expected = self._make_list_value([u"A", u"B", u"C", None, u"D", u"E"]) self.assertEqual(merged, expected) self.assertIsNone(streamed._pending_chunk) @@ -342,12 +351,12 @@ def test__merge_chunk_array_of_string_with_null(self): streamed = self._make_one(iterator) FIELDS = [self._make_array_field("name", element_type_code=TypeCode.STRING)] streamed._metadata = self._make_result_set_metadata(FIELDS) - streamed._pending_chunk = [u"A", u"B", u"C"] - chunk = [u"D", u"E"] + streamed._pending_chunk = self._make_list_value([u"A", u"B", u"C"]) + chunk = self._make_list_value([u"D", u"E"]) merged = streamed._merge_chunk(chunk) - expected = [u"A", u"B", u"CD", u"E"] + expected = self._make_list_value([u"A", u"B", u"CD", u"E"]) self.assertEqual(merged, expected) self.assertIsNone(streamed._pending_chunk) @@ -364,17 +373,22 @@ def test__merge_chunk_array_of_array_of_int(self): streamed = self._make_one(iterator) FIELDS = 
[StructType.Field(name="loloi", type_=array_type)] streamed._metadata = self._make_result_set_metadata(FIELDS) - streamed._pending_chunk = [[0, 1], [2]] - chunk = [[3], [4, 5]] + streamed._pending_chunk = self._make_list_value( + value_pbs=[self._make_list_value([0, 1]), self._make_list_value([2])] + ) + chunk = self._make_list_value( + value_pbs=[self._make_list_value([3]), self._make_list_value([4, 5])] + ) merged = streamed._merge_chunk(chunk) - expected = [ - [0, 1], - [23], - [4, 5], - ] - + expected = self._make_list_value( + value_pbs=[ + self._make_list_value([0, 1]), + self._make_list_value([23]), + self._make_list_value([4, 5]), + ] + ) self.assertEqual(merged, expected) self.assertIsNone(streamed._pending_chunk) @@ -391,23 +405,28 @@ def test__merge_chunk_array_of_array_of_string(self): streamed = self._make_one(iterator) FIELDS = [StructType.Field(name="lolos", type_=array_type)] streamed._metadata = self._make_result_set_metadata(FIELDS) - streamed._pending_chunk = [ - [u"A", u"B"], - [u"C"], - ] - chunk = [ - [u"D"], - [u"E", u"F"], - ] + streamed._pending_chunk = self._make_list_value( + value_pbs=[ + self._make_list_value([u"A", u"B"]), + self._make_list_value([u"C"]), + ] + ) + chunk = self._make_list_value( + value_pbs=[ + self._make_list_value([u"D"]), + self._make_list_value([u"E", u"F"]), + ] + ) merged = streamed._merge_chunk(chunk) - expected = [ - [u"A", u"B"], - [u"CD"], - [u"E", u"F"], - ] - + expected = self._make_list_value( + value_pbs=[ + self._make_list_value([u"A", u"B"]), + self._make_list_value([u"CD"]), + self._make_list_value([u"E", u"F"]), + ] + ) self.assertEqual(merged, expected) self.assertIsNone(streamed._pending_chunk) @@ -421,15 +440,15 @@ def test__merge_chunk_array_of_struct(self): ) FIELDS = [self._make_array_field("test", element_type=struct_type)] streamed._metadata = self._make_result_set_metadata(FIELDS) - partial = [u"Phred "] - streamed._pending_chunk = [partial] - rest = [u"Phlyntstone", 31] - chunk = [rest] + 
partial = self._make_list_value([u"Phred "]) + streamed._pending_chunk = self._make_list_value(value_pbs=[partial]) + rest = self._make_list_value([u"Phlyntstone", 31]) + chunk = self._make_list_value(value_pbs=[rest]) merged = streamed._merge_chunk(chunk) - struct = [u"Phred Phlyntstone", 31] - expected = [struct] + struct = self._make_list_value([u"Phred Phlyntstone", 31]) + expected = self._make_list_value(value_pbs=[struct]) self.assertEqual(merged, expected) self.assertIsNone(streamed._pending_chunk) @@ -443,14 +462,14 @@ def test__merge_chunk_array_of_struct_with_empty(self): ) FIELDS = [self._make_array_field("test", element_type=struct_type)] streamed._metadata = self._make_result_set_metadata(FIELDS) - partial = [u"Phred "] - streamed._pending_chunk = [partial] - rest = [] - chunk = [rest] + partial = self._make_list_value([u"Phred "]) + streamed._pending_chunk = self._make_list_value(value_pbs=[partial]) + rest = self._make_list_value([]) + chunk = self._make_list_value(value_pbs=[rest]) merged = streamed._merge_chunk(chunk) - expected = [partial] + expected = self._make_list_value(value_pbs=[partial]) self.assertEqual(merged, expected) self.assertIsNone(streamed._pending_chunk) @@ -468,15 +487,15 @@ def test__merge_chunk_array_of_struct_unmergeable(self): ) FIELDS = [self._make_array_field("test", element_type=struct_type)] streamed._metadata = self._make_result_set_metadata(FIELDS) - partial = [u"Phred Phlyntstone", True] - streamed._pending_chunk = [partial] - rest = [True] - chunk = [rest] + partial = self._make_list_value([u"Phred Phlyntstone", True]) + streamed._pending_chunk = self._make_list_value(value_pbs=[partial]) + rest = self._make_list_value([True]) + chunk = self._make_list_value(value_pbs=[rest]) merged = streamed._merge_chunk(chunk) - struct = [u"Phred Phlyntstone", True, True] - expected = [struct] + struct = self._make_list_value([u"Phred Phlyntstone", True, True]) + expected = self._make_list_value(value_pbs=[struct]) 
self.assertEqual(merged, expected) self.assertIsNone(streamed._pending_chunk) @@ -488,15 +507,15 @@ def test__merge_chunk_array_of_struct_unmergeable_split(self): ) FIELDS = [self._make_array_field("test", element_type=struct_type)] streamed._metadata = self._make_result_set_metadata(FIELDS) - partial = [u"Phred Phlyntstone", 1.65] - streamed._pending_chunk = [partial] - rest = ["brown"] - chunk = [rest] + partial = self._make_list_value([u"Phred Phlyntstone", 1.65]) + streamed._pending_chunk = self._make_list_value(value_pbs=[partial]) + rest = self._make_list_value(["brown"]) + chunk = self._make_list_value(value_pbs=[rest]) merged = streamed._merge_chunk(chunk) - struct = [u"Phred Phlyntstone", 1.65, "brown"] - expected = [struct] + struct = self._make_list_value([u"Phred Phlyntstone", 1.65, "brown"]) + expected = self._make_list_value(value_pbs=[struct]) self.assertEqual(merged, expected) self.assertIsNone(streamed._pending_chunk) @@ -527,8 +546,8 @@ def test_merge_values_empty_and_partial(self): self._make_scalar_field("married", TypeCode.BOOL), ] streamed._metadata = self._make_result_set_metadata(FIELDS) - VALUES = [u"Phred Phlyntstone", "42"] BARE = [u"Phred Phlyntstone", 42] + VALUES = [self._make_value(bare) for bare in BARE] streamed._current_row = [] streamed._merge_values(VALUES) self.assertEqual(list(streamed), []) @@ -545,8 +564,8 @@ def test_merge_values_empty_and_filled(self): self._make_scalar_field("married", TypeCode.BOOL), ] streamed._metadata = self._make_result_set_metadata(FIELDS) - VALUES = [u"Phred Phlyntstone", "42", True] BARE = [u"Phred Phlyntstone", 42, True] + VALUES = [self._make_value(bare) for bare in BARE] streamed._current_row = [] streamed._merge_values(VALUES) self.assertEqual(list(streamed), [BARE]) @@ -563,15 +582,6 @@ def test_merge_values_empty_and_filled_plus(self): self._make_scalar_field("married", TypeCode.BOOL), ] streamed._metadata = self._make_result_set_metadata(FIELDS) - VALUES = [ - u"Phred Phlyntstone", - "42", - 
True, - u"Bharney Rhubble", - "39", - True, - u"Wylma Phlyntstone", - ] BARE = [ u"Phred Phlyntstone", 42, @@ -581,6 +591,7 @@ def test_merge_values_empty_and_filled_plus(self): True, u"Wylma Phlyntstone", ] + VALUES = [self._make_value(bare) for bare in BARE] streamed._current_row = [] streamed._merge_values(VALUES) self.assertEqual(list(streamed), [BARE[0:3], BARE[3:6]]) @@ -616,8 +627,8 @@ def test_merge_values_partial_and_partial(self): streamed._metadata = self._make_result_set_metadata(FIELDS) BEFORE = [u"Phred Phlyntstone"] streamed._current_row[:] = BEFORE - TO_MERGE = ["42"] MERGED = [42] + TO_MERGE = [self._make_value(item) for item in MERGED] streamed._merge_values(TO_MERGE) self.assertEqual(list(streamed), []) self.assertEqual(streamed._current_row, BEFORE + MERGED) @@ -635,8 +646,8 @@ def test_merge_values_partial_and_filled(self): streamed._metadata = self._make_result_set_metadata(FIELDS) BEFORE = [u"Phred Phlyntstone"] streamed._current_row[:] = BEFORE - TO_MERGE = ["42", True] MERGED = [42, True] + TO_MERGE = [self._make_value(item) for item in MERGED] streamed._merge_values(TO_MERGE) self.assertEqual(list(streamed), [BEFORE + MERGED]) self.assertEqual(streamed._current_row, []) @@ -654,8 +665,8 @@ def test_merge_values_partial_and_filled_plus(self): streamed._metadata = self._make_result_set_metadata(FIELDS) BEFORE = [self._make_value(u"Phred Phlyntstone")] streamed._current_row[:] = BEFORE - TO_MERGE = ["42", True, u"Bharney Rhubble", "39", True, u"Wylma Phlyntstone"] MERGED = [42, True, u"Bharney Rhubble", 39, True, u"Wylma Phlyntstone"] + TO_MERGE = [self._make_value(item) for item in MERGED] VALUES = BEFORE + MERGED streamed._merge_values(TO_MERGE) self.assertEqual(list(streamed), [VALUES[0:3], VALUES[3:6]]) @@ -720,7 +731,8 @@ def test_consume_next_first_set_partial(self): ] metadata = self._make_result_set_metadata(FIELDS, transaction_id=TXN_ID) BARE = [u"Phred Phlyntstone", 42] - result_set = self._make_partial_result_set(BARE, 
metadata=metadata) + VALUES = [self._make_value(bare) for bare in BARE] + result_set = self._make_partial_result_set(VALUES, metadata=metadata) iterator = _MockCancellableIterator(result_set) source = mock.Mock(_transaction_id=None, spec=["_transaction_id"]) streamed = self._make_one(iterator, source=source) @@ -768,7 +780,7 @@ def test_consume_next_w_partial_result(self): streamed._consume_next() self.assertEqual(list(streamed), []) self.assertEqual(streamed._current_row, []) - self.assertEqual(streamed._pending_chunk, VALUES[0].string_value) + self.assertEqual(streamed._pending_chunk, VALUES[0]) def test_consume_next_w_pending_chunk(self): from google.cloud.spanner_v1 import TypeCode @@ -792,7 +804,7 @@ def test_consume_next_w_pending_chunk(self): iterator = _MockCancellableIterator(result_set) streamed = self._make_one(iterator) streamed._metadata = self._make_result_set_metadata(FIELDS) - streamed._pending_chunk = u"Phred " + streamed._pending_chunk = self._make_value(u"Phred ") streamed._consume_next() self.assertEqual( list(streamed), From 2bdd21c405d3d1a3bac989102f105c1123a56d3c Mon Sep 17 00:00:00 2001 From: "release-please[bot]" <55107282+release-please[bot]@users.noreply.github.com> Date: Tue, 23 Feb 2021 19:16:13 +1100 Subject: [PATCH 16/16] chore: release 3.1.0 (#237) Co-authored-by: release-please[bot] <55107282+release-please[bot]@users.noreply.github.com> --- CHANGELOG.md | 21 +++++++++++++++++++++ setup.py | 2 +- 2 files changed, 22 insertions(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 0d8f77c32b..5d1c812156 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -4,6 +4,27 @@ [1]: https://pypi.org/project/google-cloud-spanner/#history +## [3.1.0](https://www.github.com/googleapis/python-spanner/compare/v3.0.0...v3.1.0) (2021-02-23) + + +### Features + +* add support for Point In Time Recovery (PITR) ([#148](https://www.github.com/googleapis/python-spanner/issues/148)) 
([a082e5d](https://www.github.com/googleapis/python-spanner/commit/a082e5d7d2195ab9429a8e0bef4a664b59fdf771)) +* add support to log commit stats ([#205](https://www.github.com/googleapis/python-spanner/issues/205)) ([434967e](https://www.github.com/googleapis/python-spanner/commit/434967e3a433b6516f5792dcbfef7ba950f091c5)) + + +### Bug Fixes + +* connection attribute of connection class and include related unit tests ([#228](https://www.github.com/googleapis/python-spanner/issues/228)) ([4afea77](https://www.github.com/googleapis/python-spanner/commit/4afea77812e021859377216cd950e1d9fc965ba8)) +* **db_api:** add dummy lastrowid attribute ([#227](https://www.github.com/googleapis/python-spanner/issues/227)) ([0375914](https://www.github.com/googleapis/python-spanner/commit/0375914342de98e3903bae2097142325028d18d9)) +* fix execute insert for homogeneous statement ([#233](https://www.github.com/googleapis/python-spanner/issues/233)) ([36b12a7](https://www.github.com/googleapis/python-spanner/commit/36b12a7b53cdbedf543d2b3bb132fb9e13cefb65)) +* use datetime timezone info when generating timestamp strings ([#236](https://www.github.com/googleapis/python-spanner/issues/236)) ([539f145](https://www.github.com/googleapis/python-spanner/commit/539f14533afd348a328716aa511d453ca3bb19f5)) + + +### Performance Improvements + +* improve streaming performance ([#240](https://www.github.com/googleapis/python-spanner/issues/240)) ([3e35d4a](https://www.github.com/googleapis/python-spanner/commit/3e35d4a0217081bcab4ee31b642cd3bff5e6f4b5)) + ## [3.0.0](https://www.github.com/googleapis/python-spanner/compare/v2.1.0...v3.0.0) (2021-01-15) diff --git a/setup.py b/setup.py index 28f21ad515..27169b888e 100644 --- a/setup.py +++ b/setup.py @@ -22,7 +22,7 @@ name = "google-cloud-spanner" description = "Cloud Spanner API client library" -version = "3.0.0" +version = "3.1.0" # Should be one of: # 'Development Status :: 3 - Alpha' # 'Development Status :: 4 - Beta'
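
The test refactor in patch 01–15 above replaces bare Python lists with protobuf `ListValue` objects in the `StreamedResultSet._merge_chunk` tests. As a rough illustration of the merge semantics those tests assert — not the library's actual implementation, which operates on protobuf `Value` objects rather than plain Python values — here is a plain-Python sketch (`merge_chunk` is a hypothetical stand-in name):

```python
def merge_chunk(pending, chunk):
    """Simplified model of Spanner's chunked-value merge for ARRAY columns.

    The trailing element of `pending` merges with the leading element of
    `chunk`; nested lists (arrays of arrays, arrays of structs) merge
    recursively. Booleans and floats are not mergeable, so both elements
    are kept. INT64 values arrive over the wire as decimal strings, which
    is why the tests expect [2] + [3] to merge into [23].
    """
    if not pending:
        return chunk
    if not chunk:
        return pending
    merged = list(pending[:-1])
    last, first = pending[-1], chunk[0]
    if isinstance(last, list) and isinstance(first, list):
        # Recurse into nested arrays / structs.
        merged.append(merge_chunk(last, first))
    elif isinstance(last, (bool, float)) or isinstance(first, (bool, float)):
        # Unmergeable types: keep both values.
        merged.extend([last, first])
    elif isinstance(last, int) and isinstance(first, int):
        # INT64 chunks concatenate as decimal strings: "2" + "3" -> 23.
        merged.append(int(str(last) + str(first)))
    elif isinstance(last, str) and isinstance(first, str):
        merged.append(last + first)
    else:
        merged.extend([last, first])
    merged.extend(chunk[1:])
    return merged


# Mirrors test__merge_chunk_array_of_array_of_int above:
print(merge_chunk([[0, 1], [2]], [[3], [4, 5]]))
# Mirrors test__merge_chunk_array_of_struct above:
print(merge_chunk([["Phred "]], [["Phlyntstone", 31]]))
```

This mirrors the expectations in the patched tests (`[[0, 1], [23], [4, 5]]`, `[["Phred Phlyntstone", 31]]`, the unmergeable-bool case, and the empty-chunk case) without depending on the `google.protobuf` types the real tests build via `_make_list_value`.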