From a4964d5f17a510d3f8fa34e5e4406184b958d00a Mon Sep 17 00:00:00 2001
From: yoshi-code-bot <70984784+yoshi-code-bot@users.noreply.github.com>
Date: Tue, 20 Aug 2024 00:32:22 -0700
Subject: [PATCH] chore: Update discovery artifacts (#2466)
## Deleted keys were detected in the following stable discovery artifacts:
alloydb v1 https://togithub.com/googleapis/google-api-python-client/commit/bcfe7d1b4490955a16cd0452ec52abba38374c3c
backupdr v1 https://togithub.com/googleapis/google-api-python-client/commit/e95ba9b0696cb49e9e327dec818c9ebd110cd1af
bigquery v2 https://togithub.com/googleapis/google-api-python-client/commit/fddb5fd1e36a4e4af3df63c7fa97f74e2895edc1
compute v1 https://togithub.com/googleapis/google-api-python-client/commit/0da5dd30efcc6c86cc7349ba2201d3a7fe16b8c3
dialogflow v2 https://togithub.com/googleapis/google-api-python-client/commit/227faa6b5d227ed0287e929f647ced12a0b1e7e2
dialogflow v3 https://togithub.com/googleapis/google-api-python-client/commit/227faa6b5d227ed0287e929f647ced12a0b1e7e2
integrations v1 https://togithub.com/googleapis/google-api-python-client/commit/097eea11af945221753c321798982ce08ae67c37
redis v1 https://togithub.com/googleapis/google-api-python-client/commit/9f48ea3e73f1258526996d01a1cc06d4f53a3e07
vmmigration v1 https://togithub.com/googleapis/google-api-python-client/commit/725dc897f8505f9d8194a60cf727d68269c5b033
## Deleted keys were detected in the following pre-stable discovery artifacts:
alloydb v1alpha https://togithub.com/googleapis/google-api-python-client/commit/bcfe7d1b4490955a16cd0452ec52abba38374c3c
alloydb v1beta https://togithub.com/googleapis/google-api-python-client/commit/bcfe7d1b4490955a16cd0452ec52abba38374c3c
compute alpha https://togithub.com/googleapis/google-api-python-client/commit/0da5dd30efcc6c86cc7349ba2201d3a7fe16b8c3
compute beta https://togithub.com/googleapis/google-api-python-client/commit/0da5dd30efcc6c86cc7349ba2201d3a7fe16b8c3
dialogflow v2beta1 https://togithub.com/googleapis/google-api-python-client/commit/227faa6b5d227ed0287e929f647ced12a0b1e7e2
dialogflow v3beta1 https://togithub.com/googleapis/google-api-python-client/commit/227faa6b5d227ed0287e929f647ced12a0b1e7e2
redis v1beta1 https://togithub.com/googleapis/google-api-python-client/commit/9f48ea3e73f1258526996d01a1cc06d4f53a3e07
vmmigration v1alpha1 https://togithub.com/googleapis/google-api-python-client/commit/725dc897f8505f9d8194a60cf727d68269c5b033
## Discovery Artifact Change Summary:
feat(aiplatform): update the api https://togithub.com/googleapis/google-api-python-client/commit/016286b9fc61eb568bf0d30877eeb7c5738765f5
feat(alloydb): update the api https://togithub.com/googleapis/google-api-python-client/commit/bcfe7d1b4490955a16cd0452ec52abba38374c3c
feat(androidpublisher): update the api https://togithub.com/googleapis/google-api-python-client/commit/b482cb902f4555bb8cbf0932903b275c3defa899
feat(backupdr): update the api https://togithub.com/googleapis/google-api-python-client/commit/e95ba9b0696cb49e9e327dec818c9ebd110cd1af
feat(beyondcorp): update the api https://togithub.com/googleapis/google-api-python-client/commit/6c80694b0325c64c792c18227663c700403e7b94
feat(bigquery): update the api https://togithub.com/googleapis/google-api-python-client/commit/fddb5fd1e36a4e4af3df63c7fa97f74e2895edc1
feat(bigtableadmin): update the api https://togithub.com/googleapis/google-api-python-client/commit/4a75f7e8be07d57a44a71a3a9cc7d503c026dff3
feat(compute): update the api https://togithub.com/googleapis/google-api-python-client/commit/0da5dd30efcc6c86cc7349ba2201d3a7fe16b8c3
feat(connectors): update the api https://togithub.com/googleapis/google-api-python-client/commit/9ef64b020a712951d07f0df37b3003baaceac056
feat(container): update the api https://togithub.com/googleapis/google-api-python-client/commit/bbdc26ae8e9337c0032bf333b92df231bb0a0c45
feat(dataflow): update the api https://togithub.com/googleapis/google-api-python-client/commit/955a2eb6cadadc8a5469c68a2b96d708a5753710
feat(datamigration): update the api https://togithub.com/googleapis/google-api-python-client/commit/3ba86616ecfe16e0bf2b6d031fa01045f39ea16c
feat(dialogflow): update the api https://togithub.com/googleapis/google-api-python-client/commit/227faa6b5d227ed0287e929f647ced12a0b1e7e2
feat(discoveryengine): update the api https://togithub.com/googleapis/google-api-python-client/commit/a8a82673f53e367ef19f3b3b6b92460dfa55cb8b
feat(displayvideo): update the api https://togithub.com/googleapis/google-api-python-client/commit/ed3825f216f9615b838b8c0f709fd144d6384238
feat(dlp): update the api https://togithub.com/googleapis/google-api-python-client/commit/df3c8e91103634b28cd6fd5da1a42088ba654fcc
feat(documentai): update the api https://togithub.com/googleapis/google-api-python-client/commit/b3a0025c19fb409dea97fda3cef64c913c096072
feat(gkehub): update the api https://togithub.com/googleapis/google-api-python-client/commit/d527515c96d7861d71d506a97c06aa73137639b0
feat(healthcare): update the api https://togithub.com/googleapis/google-api-python-client/commit/0391573439a1cb8f04e67f1fd5fc3074abd9eb36
feat(integrations): update the api https://togithub.com/googleapis/google-api-python-client/commit/097eea11af945221753c321798982ce08ae67c37
feat(manufacturers): update the api https://togithub.com/googleapis/google-api-python-client/commit/043bd6b0b4ce2fbed2b49dc526427d9ecc67078a
feat(migrationcenter): update the api https://togithub.com/googleapis/google-api-python-client/commit/e4940f14ed1c7caf4cbaefd747cd6b40d7b433e0
feat(networkconnectivity): update the api https://togithub.com/googleapis/google-api-python-client/commit/6aca7bb3d0f87ce0dc70ba359d847cbf963081f2
feat(networkmanagement): update the api https://togithub.com/googleapis/google-api-python-client/commit/997ed8ac23aad4b1fc9fdb9fe2f2cb4ae5916e36
feat(recaptchaenterprise): update the api https://togithub.com/googleapis/google-api-python-client/commit/fc61f5f9b5891c9c9f4635af2d06507a0f2fe2b5
feat(redis): update the api https://togithub.com/googleapis/google-api-python-client/commit/9f48ea3e73f1258526996d01a1cc06d4f53a3e07
feat(run): update the api https://togithub.com/googleapis/google-api-python-client/commit/7d912586990c833dc5f02bc27a74652233c160a2
feat(searchads360): update the api https://togithub.com/googleapis/google-api-python-client/commit/75ebf3109dc315821297c64498488c43f445d097
feat(securitycenter): update the api https://togithub.com/googleapis/google-api-python-client/commit/b698e42e25c5567dab5fbb4d6c7a0082f7ee91ac
feat(serviceusage): update the api https://togithub.com/googleapis/google-api-python-client/commit/4f08fef40014d2cd9c7d7d9dd8fc0316c9662fe9
feat(spanner): update the api https://togithub.com/googleapis/google-api-python-client/commit/4ebb5dad76ab8aab198ddcc1ff80e31d2d4f5683
feat(vmmigration): update the api https://togithub.com/googleapis/google-api-python-client/commit/725dc897f8505f9d8194a60cf727d68269c5b033
feat(youtube): update the api https://togithub.com/googleapis/google-api-python-client/commit/a25f063020b73c23e7820103b02ff9261981b330
---
...r_v1.accessPolicies.servicePerimeters.html | 40 +-
...tform_v1.projects.locations.endpoints.html | 8 +-
...m_v1.projects.locations.featureGroups.html | 12 +
...eStores.featureViews.featureViewSyncs.html | 4 +
...ions.featureOnlineStores.featureViews.html | 8 +
...rojects.locations.featureOnlineStores.html | 8 +
...s.locations.featurestores.entityTypes.html | 10 +
...m_v1.projects.locations.featurestores.html | 8 +
.../dyn/aiplatform_v1.projects.locations.html | 24 +
..._v1.projects.locations.indexEndpoints.html | 10 +
...latform_v1.projects.locations.indexes.html | 8 +
...cations.modelDeploymentMonitoringJobs.html | 10 +
....projects.locations.publishers.models.html | 8 +-
docs/dyn/aiplatform_v1.publishers.models.html | 5 +-
..._v1beta1.projects.locations.endpoints.html | 8 +-
...eta1.projects.locations.featureGroups.html | 12 +
...eStores.featureViews.featureViewSyncs.html | 4 +
...ions.featureOnlineStores.featureViews.html | 8 +
...rojects.locations.featureOnlineStores.html | 8 +
...s.locations.featurestores.entityTypes.html | 10 +
...eta1.projects.locations.featurestores.html | 8 +
...aiplatform_v1beta1.projects.locations.html | 24 +
...ta1.projects.locations.indexEndpoints.html | 10 +
...rm_v1beta1.projects.locations.indexes.html | 8 +
...cations.modelDeploymentMonitoringJobs.html | 10 +
....projects.locations.publishers.models.html | 8 +-
...rojects.locations.ragCorpora.ragFiles.html | 4 +
.../aiplatform_v1beta1.publishers.models.html | 5 +-
...lloydb_v1.projects.locations.clusters.html | 6 +
...projects.locations.clusters.instances.html | 3 +
...b_v1alpha.projects.locations.clusters.html | 84 +++
...projects.locations.clusters.instances.html | 8 +
...db_v1beta.projects.locations.clusters.html | 36 +
...projects.locations.clusters.instances.html | 8 +
...ublisher_v3.purchases.subscriptionsv2.html | 2 +
.../apikeys_v2.projects.locations.keys.html | 8 +-
...projects.locations.repositories.files.html | 2 +-
....locations.repositories.packages.tags.html | 2 +-
...projects.locations.repositories.files.html | 2 +-
....locations.repositories.packages.tags.html | 2 +-
...projects.locations.repositories.files.html | 2 +-
....locations.repositories.packages.tags.html | 2 +-
...ects.locations.backupPlanAssociations.html | 3 -
...pdr_v1.projects.locations.backupPlans.html | 6 -
...global_.securityGateways.applications.html | 241 ++++++
...ts.locations.global_.securityGateways.html | 265 +------
...cations.global_.securityGateways.hubs.html | 233 ++++++
...beyondcorp_v1alpha.projects.locations.html | 5 +
...cations.securityGateways.applications.html | 227 ++++++
...a.projects.locations.securityGateways.html | 10 +
...jects.locations.securityGateways.hubs.html | 219 ++++++
docs/dyn/bigquery_v2.jobs.html | 170 ++---
docs/dyn/bigquery_v2.tables.html | 140 ++--
...2.projects.instances.clusters.backups.html | 20 +-
docs/dyn/chat_v1.spaces.messages.html | 16 +-
docs/dyn/chat_v1.spaces.spaceEvents.html | 24 +-
...nagement_v1.customers.telemetry.users.html | 4 +-
...ions.deliveryPipelines.automationRuns.html | 8 +-
...cations.deliveryPipelines.automations.html | 16 +-
...oudtasks_v2.projects.locations.queues.html | 36 +-
...sks_v2beta2.projects.locations.queues.html | 36 +-
...sks_v2beta3.projects.locations.queues.html | 36 +-
docs/dyn/compute_alpha.backendServices.html | 14 +-
docs/dyn/compute_alpha.html | 5 -
.../compute_alpha.regionBackendServices.html | 12 +-
docs/dyn/compute_alpha.vpnTunnels.html | 32 +-
docs/dyn/compute_beta.backendServices.html | 14 +
docs/dyn/compute_beta.forwardingRules.html | 10 +
docs/dyn/compute_beta.futureReservations.html | 5 -
.../compute_beta.globalForwardingRules.html | 8 +
docs/dyn/compute_beta.machineTypes.html | 3 +
.../compute_beta.regionBackendServices.html | 12 +
docs/dyn/compute_beta.vpnTunnels.html | 32 +-
docs/dyn/compute_v1.futureReservations.html | 5 -
docs/dyn/compute_v1.machineTypes.html | 3 +
docs/dyn/compute_v1.vpnTunnels.html | 32 +-
...ors_v1.projects.locations.connections.html | 10 +
...cations.providers.connectors.versions.html | 8 +
...tainer_v1.projects.locations.clusters.html | 30 +
...projects.locations.clusters.nodePools.html | 12 +
.../container_v1.projects.zones.clusters.html | 30 +
..._v1.projects.zones.clusters.nodePools.html | 12 +
...r_v1beta1.projects.locations.clusters.html | 26 +-
...projects.locations.clusters.nodePools.html | 12 +
...ainer_v1beta1.projects.zones.clusters.html | 26 +-
...ta1.projects.zones.clusters.nodePools.html | 12 +
...dataflow_v1b3.projects.jobs.workItems.html | 2 +
...1b3.projects.locations.jobs.workItems.html | 2 +
...low_v1b3.projects.locations.templates.html | 12 +-
.../dyn/dataflow_v1b3.projects.templates.html | 12 +-
...projects.locations.connectionProfiles.html | 56 ++
...n_v1.projects.locations.migrationJobs.html | 5 +
...jects.locations.migrationJobs.objects.html | 258 +++++++
...v2.projects.conversations.suggestions.html | 2 +-
...s.locations.conversations.suggestions.html | 2 +-
...low_v2.projects.locations.suggestions.html | 2 +-
.../dialogflow_v2.projects.suggestions.html | 2 +-
...beta1.projects.conversations.messages.html | 6 +-
...1.projects.conversations.participants.html | 4 +-
...a1.projects.conversations.suggestions.html | 2 +-
...ects.locations.conversations.messages.html | 6 +-
....locations.conversations.participants.html | 4 +-
...s.locations.conversations.suggestions.html | 2 +-
...2beta1.projects.locations.suggestions.html | 4 +-
...alogflow_v2beta1.projects.suggestions.html | 4 +-
...ocations.agents.environments.sessions.html | 146 ++--
...ow_v3.projects.locations.agents.flows.html | 72 +-
...projects.locations.agents.flows.pages.html | 144 ++--
...ns.agents.flows.transitionRouteGroups.html | 24 +-
...v3.projects.locations.agents.sessions.html | 146 ++--
...3.projects.locations.agents.testCases.html | 400 +++++-----
...ts.locations.agents.testCases.results.html | 52 +-
...ocations.agents.transitionRouteGroups.html | 24 +-
...ojects.locations.agents.conversations.html | 246 +++---
...ocations.agents.environments.sessions.html | 182 +++--
...beta1.projects.locations.agents.flows.html | 72 +-
...projects.locations.agents.flows.pages.html | 144 ++--
...ns.agents.flows.transitionRouteGroups.html | 24 +-
...a1.projects.locations.agents.sessions.html | 182 +++--
...1.projects.locations.agents.testCases.html | 400 +++++-----
...ts.locations.agents.testCases.results.html | 52 +-
...beta1.projects.locations.agents.tools.html | 24 +
...ocations.agents.transitionRouteGroups.html | 24 +-
...ections.dataStores.branches.documents.html | 10 +-
...tions.collections.dataStores.branches.html | 31 +
...ions.dataStores.completionSuggestions.html | 2 +-
...tions.collections.dataStores.controls.html | 12 +-
...s.collections.dataStores.customModels.html | 120 +++
...ects.locations.collections.dataStores.html | 80 ++
...ons.collections.dataStores.userEvents.html | 2 +-
...ocations.collections.engines.controls.html | 12 +-
...cations.dataStores.branches.documents.html | 10 +-
...rojects.locations.dataStores.branches.html | 31 +
...ions.dataStores.completionSuggestions.html | 2 +-
...rojects.locations.dataStores.controls.html | 12 +-
...gine_v1.projects.locations.dataStores.html | 20 +
...jects.locations.dataStores.userEvents.html | 2 +-
...1.projects.locations.groundingConfigs.html | 1 +
...ections.dataStores.branches.documents.html | 10 +-
...tions.collections.dataStores.branches.html | 31 +
...ions.dataStores.completionSuggestions.html | 2 +-
...tions.collections.dataStores.controls.html | 12 +-
...ects.locations.collections.dataStores.html | 20 +
...collections.dataStores.servingConfigs.html | 1 +
...ons.collections.dataStores.userEvents.html | 2 +-
...ocations.collections.engines.controls.html | 12 +-
...ns.collections.engines.servingConfigs.html | 1 +
...cations.dataStores.branches.documents.html | 10 +-
...rojects.locations.dataStores.branches.html | 31 +
...ions.dataStores.completionSuggestions.html | 2 +-
...rojects.locations.dataStores.controls.html | 12 +-
...v1alpha.projects.locations.dataStores.html | 20 +
...s.locations.dataStores.servingConfigs.html | 1 +
...jects.locations.dataStores.userEvents.html | 2 +-
...a.projects.locations.groundingConfigs.html | 1 +
...veryengine_v1alpha.projects.locations.html | 11 +-
...cations.sampleQuerySets.sampleQueries.html | 2 +-
...v1alpha.projects.locations.userStores.html | 91 +++
...jects.locations.userStores.operations.html | 187 +++++
...ections.dataStores.branches.documents.html | 10 +-
...tions.collections.dataStores.branches.html | 31 +
...ions.dataStores.completionSuggestions.html | 2 +-
...tions.collections.dataStores.controls.html | 12 +-
...ects.locations.collections.dataStores.html | 20 +
...collections.dataStores.servingConfigs.html | 2 +
...ons.collections.dataStores.userEvents.html | 2 +-
...ocations.collections.engines.controls.html | 12 +-
...ns.collections.engines.servingConfigs.html | 2 +
...cations.dataStores.branches.documents.html | 10 +-
...rojects.locations.dataStores.branches.html | 31 +
...ions.dataStores.completionSuggestions.html | 2 +-
...rojects.locations.dataStores.controls.html | 12 +-
..._v1beta.projects.locations.dataStores.html | 20 +
...s.locations.dataStores.servingConfigs.html | 2 +
...jects.locations.dataStores.userEvents.html | 2 +-
...v1beta.projects.locations.evaluations.html | 3 +
...a.projects.locations.groundingConfigs.html | 1 +
...cations.sampleQuerySets.sampleQueries.html | 2 +-
...displayvideo_v2.advertisers.creatives.html | 12 +-
docs/dyn/dlp_v2.infoTypes.html | 2 +-
docs/dyn/dlp_v2.locations.infoTypes.html | 2 +-
..._v2.organizations.deidentifyTemplates.html | 40 +-
...dlp_v2.organizations.inspectTemplates.html | 4 +-
...2.organizations.locations.connections.html | 2 +-
...zations.locations.deidentifyTemplates.html | 40 +-
...anizations.locations.discoveryConfigs.html | 118 ++-
...lp_v2.organizations.locations.dlpJobs.html | 22 +-
...tions.locations.fileStoreDataProfiles.html | 78 +-
...anizations.locations.inspectTemplates.html | 4 +-
...2.organizations.locations.jobTriggers.html | 16 +-
...ganizations.locations.storedInfoTypes.html | 4 +-
...nizations.locations.tableDataProfiles.html | 70 ++
.../dlp_v2.organizations.storedInfoTypes.html | 4 +-
docs/dyn/dlp_v2.projects.content.html | 30 +-
.../dlp_v2.projects.deidentifyTemplates.html | 40 +-
docs/dyn/dlp_v2.projects.dlpJobs.html | 66 +-
docs/dyn/dlp_v2.projects.image.html | 2 +-
.../dyn/dlp_v2.projects.inspectTemplates.html | 4 +-
docs/dyn/dlp_v2.projects.jobTriggers.html | 36 +-
...dlp_v2.projects.locations.connections.html | 2 +-
.../dlp_v2.projects.locations.content.html | 30 +-
...rojects.locations.deidentifyTemplates.html | 40 +-
...2.projects.locations.discoveryConfigs.html | 118 ++-
.../dlp_v2.projects.locations.dlpJobs.html | 66 +-
...jects.locations.fileStoreDataProfiles.html | 78 +-
docs/dyn/dlp_v2.projects.locations.image.html | 2 +-
...2.projects.locations.inspectTemplates.html | 4 +-
...dlp_v2.projects.locations.jobTriggers.html | 36 +-
...v2.projects.locations.storedInfoTypes.html | 4 +-
....projects.locations.tableDataProfiles.html | 70 ++
docs/dyn/dlp_v2.projects.storedInfoTypes.html | 4 +-
...ntai_v1.projects.locations.processors.html | 1 +
...ocations.processors.processorVersions.html | 1 +
...v1beta3.projects.locations.processors.html | 1 +
...ocations.processors.processorVersions.html | 1 +
docs/dyn/drive_v2.changes.html | 12 +-
docs/dyn/drive_v2.files.html | 78 +-
docs/dyn/drive_v2.parents.html | 8 +-
docs/dyn/drive_v2.revisions.html | 12 +-
docs/dyn/drive_v3.changes.html | 4 +-
docs/dyn/drive_v3.files.html | 32 +-
docs/dyn/drive_v3.revisions.html | 8 +-
....projects.locations.publishers.models.html | 12 +-
...hub_v1.organizations.locations.fleets.html | 8 +
.../gkehub_v1.projects.locations.fleets.html | 32 +
...1alpha.organizations.locations.fleets.html | 8 +
...hub_v1alpha.projects.locations.fleets.html | 32 +
...v1beta.organizations.locations.fleets.html | 8 +
...ehub_v1beta.projects.locations.fleets.html | 32 +
docs/dyn/gkehub_v2.html | 111 +++
docs/dyn/gkehub_v2.projects.html | 91 +++
docs/dyn/gkehub_v2.projects.locations.html | 176 +++++
...ehub_v2.projects.locations.operations.html | 214 ++++++
docs/dyn/gkehub_v2beta.html | 111 +++
docs/dyn/gkehub_v2beta.projects.html | 91 +++
.../dyn/gkehub_v2beta.projects.locations.html | 176 +++++
..._v2beta.projects.locations.operations.html | 214 ++++++
...ts.locations.datasets.fhirStores.fhir.html | 32 +
docs/dyn/index.md | 6 +-
...ns_v1.projects.locations.certificates.html | 12 +-
...ons_v1.projects.locations.connections.html | 5 +
...cts.locations.integrations.executions.html | 47 +-
...ns_v1.projects.locations.integrations.html | 21 +-
...jects.locations.integrations.versions.html | 170 +----
...jects.locations.products.certificates.html | 12 +-
...ions.products.integrations.executions.html | 42 +-
...jects.locations.products.integrations.html | 21 +-
...ations.products.integrations.versions.html | 170 +----
...tions_v1.projects.locations.templates.html | 428 ++---------
...ounts.languages.productCertifications.html | 36 +
.../manufacturers_v1.accounts.products.html | 29 +
...er_v1alpha1.projects.locations.assets.html | 84 +++
...ha1.projects.locations.preferenceSets.html | 44 ++
...jects.locations.reportConfigs.reports.html | 96 +++
...rojects.locations.sources.errorFrames.html | 24 +
...ty_v1.projects.locations.global_.hubs.html | 3 +
...ectivity_v1.projects.locations.spokes.html | 12 +
...ha1.projects.locations.internalRanges.html | 16 +
...management_v1beta1.projects.locations.html | 5 +
...projects.locations.vpcFlowLogsConfigs.html | 88 +--
docs/dyn/playintegrity_v1.deviceRecall.html | 2 +-
docs/dyn/playintegrity_v1.v1.html | 4 +-
.../recaptchaenterprise_v1.projects.keys.html | 78 ++
.../redis_v1.projects.locations.clusters.html | 112 ++-
...s_v1beta1.projects.locations.clusters.html | 112 ++-
....projects.locations.catalogs.controls.html | 12 +-
....projects.locations.catalogs.controls.html | 12 +-
....projects.locations.catalogs.controls.html | 12 +-
.../dyn/run_v2.projects.locations.builds.html | 156 ++++
docs/dyn/run_v2.projects.locations.html | 5 +
...v2.projects.locations.jobs.executions.html | 4 +-
...jects.locations.jobs.executions.tasks.html | 4 +-
docs/dyn/run_v2.projects.locations.jobs.html | 10 +-
.../run_v2.projects.locations.services.html | 8 +-
...projects.locations.services.revisions.html | 4 +-
...earchads360_v0.customers.searchAds360.html | 58 +-
..._v1.projects.instanceConfigOperations.html | 2 +-
.../spanner_v1.projects.instanceConfigs.html | 42 +-
...spanner_v1.projects.instances.backups.html | 30 +-
...s.instances.databases.backupSchedules.html | 12 +
docs/dyn/spanner_v1.projects.instances.html | 26 +-
...on_v1.projects.locations.imageImports.html | 12 +-
...ocations.imageImports.imageImportJobs.html | 4 +-
...gration_v1.projects.locations.sources.html | 21 -
...ations.sources.migratingVms.cloneJobs.html | 15 +-
...ions.sources.migratingVms.cutoverJobs.html | 15 +-
...ojects.locations.sources.migratingVms.html | 76 +-
...lpha1.projects.locations.imageImports.html | 12 +-
...ocations.imageImports.imageImportJobs.html | 4 +-
...n_v1alpha1.projects.locations.sources.html | 21 -
...ations.sources.migratingVms.cloneJobs.html | 27 +-
...ions.sources.migratingVms.cutoverJobs.html | 27 +-
...ojects.locations.sources.migratingVms.html | 124 ++-
docs/dyn/youtube_v3.liveChatMessages.html | 2 +-
docs/dyn/youtube_v3.playlists.html | 5 +
.../documents/accesscontextmanager.v1.json | 6 +-
.../documents/aiplatform.v1.json | 253 ++++++-
.../documents/aiplatform.v1beta1.json | 272 ++++++-
.../discovery_cache/documents/alloydb.v1.json | 310 +++++++-
.../documents/alloydb.v1alpha.json | 414 +++++++++-
.../documents/alloydb.v1beta.json | 342 ++++++++-
.../documents/androidpublisher.v3.json | 12 +-
.../discovery_cache/documents/apikeys.v2.json | 4 +-
.../documents/artifactregistry.v1.json | 6 +-
.../documents/artifactregistry.v1beta1.json | 6 +-
.../documents/artifactregistry.v1beta2.json | 6 +-
.../documents/backupdr.v1.json | 19 +-
.../documents/beyondcorp.v1alpha.json | 711 +++++++++++++++++-
.../documents/bigquery.v2.json | 53 +-
.../documents/bigtableadmin.v2.json | 23 +-
.../discovery_cache/documents/chat.v1.json | 10 +-
.../documents/chromemanagement.v1.json | 6 +-
.../documents/clouddeploy.v1.json | 6 +-
.../documents/cloudtasks.v2.json | 6 +-
.../documents/cloudtasks.v2beta2.json | 6 +-
.../documents/cloudtasks.v2beta3.json | 6 +-
.../documents/compute.alpha.json | 491 +-----------
.../documents/compute.beta.json | 67 +-
.../discovery_cache/documents/compute.v1.json | 29 +-
.../documents/connectors.v1.json | 21 +-
.../documents/container.v1.json | 35 +-
.../documents/container.v1beta1.json | 18 +-
.../documents/dataflow.v1b3.json | 29 +-
.../documents/datamigration.v1.json | 138 +++-
.../documents/dialogflow.v2.json | 289 +------
.../documents/dialogflow.v2beta1.json | 289 +------
.../documents/dialogflow.v3.json | 351 ++-------
.../documents/dialogflow.v3beta1.json | 383 +++-------
.../documents/discoveryengine.v1.json | 534 ++++++++++++-
.../documents/discoveryengine.v1alpha.json | 430 ++++++++++-
.../documents/discoveryengine.v1beta.json | 360 ++++++++-
.../documents/displayvideo.v2.json | 45 +-
.../documents/displayvideo.v3.json | 44 +-
.../discovery_cache/documents/dlp.v2.json | 181 +++--
.../documents/documentai.v1.json | 6 +-
.../documents/documentai.v1beta3.json | 6 +-
.../discovery_cache/documents/drive.v2.json | 10 +-
.../discovery_cache/documents/drive.v3.json | 8 +-
.../documents/firebaseml.v2beta.json | 14 +-
.../discovery_cache/documents/gkehub.v1.json | 44 +-
.../documents/gkehub.v1alpha.json | 44 +-
.../documents/gkehub.v1beta.json | 44 +-
.../discovery_cache/documents/gkehub.v2.json | 435 +++++++++++
.../documents/gkehub.v2beta.json | 435 +++++++++++
.../documents/healthcare.v1beta1.json | 49 +-
.../documents/integrations.v1.json | 355 ++-------
.../documents/manufacturers.v1.json | 37 +-
.../documents/migrationcenter.v1alpha1.json | 124 ++-
.../documents/networkconnectivity.v1.json | 16 +-
.../networkconnectivity.v1alpha1.json | 27 +-
.../documents/networkmanagement.v1.json | 10 +-
.../documents/networkmanagement.v1beta1.json | 312 +++++++-
.../documents/playintegrity.v1.json | 8 +-
.../documents/recaptchaenterprise.v1.json | 101 ++-
.../discovery_cache/documents/redis.v1.json | 175 ++++-
.../documents/redis.v1beta1.json | 175 ++++-
.../discovery_cache/documents/retail.v2.json | 4 +-
.../documents/retail.v2alpha.json | 4 +-
.../documents/retail.v2beta.json | 4 +-
.../discovery_cache/documents/run.v2.json | 150 +++-
.../documents/searchads360.v0.json | 88 ++-
.../documents/securitycenter.v1.json | 9 +-
.../documents/securitycenter.v1beta1.json | 9 +-
.../documents/securitycenter.v1beta2.json | 9 +-
.../documents/serviceusage.v1.json | 104 ++-
.../documents/serviceusage.v1beta1.json | 104 ++-
.../discovery_cache/documents/spanner.v1.json | 92 ++-
.../documents/vmmigration.v1.json | 150 ++--
.../documents/vmmigration.v1alpha1.json | 150 ++--
.../discovery_cache/documents/youtube.v3.json | 16 +-
370 files changed, 14899 insertions(+), 5905 deletions(-)
create mode 100644 docs/dyn/beyondcorp_v1alpha.projects.locations.global_.securityGateways.applications.html
create mode 100644 docs/dyn/beyondcorp_v1alpha.projects.locations.global_.securityGateways.hubs.html
create mode 100644 docs/dyn/beyondcorp_v1alpha.projects.locations.securityGateways.applications.html
create mode 100644 docs/dyn/beyondcorp_v1alpha.projects.locations.securityGateways.hubs.html
create mode 100644 docs/dyn/datamigration_v1.projects.locations.migrationJobs.objects.html
create mode 100644 docs/dyn/discoveryengine_v1.projects.locations.collections.dataStores.customModels.html
create mode 100644 docs/dyn/discoveryengine_v1alpha.projects.locations.userStores.html
create mode 100644 docs/dyn/discoveryengine_v1alpha.projects.locations.userStores.operations.html
create mode 100644 docs/dyn/gkehub_v2.html
create mode 100644 docs/dyn/gkehub_v2.projects.html
create mode 100644 docs/dyn/gkehub_v2.projects.locations.html
create mode 100644 docs/dyn/gkehub_v2.projects.locations.operations.html
create mode 100644 docs/dyn/gkehub_v2beta.html
create mode 100644 docs/dyn/gkehub_v2beta.projects.html
create mode 100644 docs/dyn/gkehub_v2beta.projects.locations.html
create mode 100644 docs/dyn/gkehub_v2beta.projects.locations.operations.html
create mode 100644 docs/dyn/run_v2.projects.locations.builds.html
create mode 100644 googleapiclient/discovery_cache/documents/gkehub.v2.json
create mode 100644 googleapiclient/discovery_cache/documents/gkehub.v2beta.json
diff --git a/docs/dyn/accesscontextmanager_v1.accessPolicies.servicePerimeters.html b/docs/dyn/accesscontextmanager_v1.accessPolicies.servicePerimeters.html
index 4c57a03e76a..4cb24a15138 100644
--- a/docs/dyn/accesscontextmanager_v1.accessPolicies.servicePerimeters.html
+++ b/docs/dyn/accesscontextmanager_v1.accessPolicies.servicePerimeters.html
@@ -172,7 +172,7 @@ Method Details
"egressPolicies": [ # List of EgressPolicies to apply to the perimeter. A perimeter may have multiple EgressPolicies, each of which is evaluated separately. Access is granted if any EgressPolicy grants it. Must be empty for a perimeter bridge.
{ # Policy for egress from perimeter. EgressPolicies match requests based on `egress_from` and `egress_to` stanzas. For an EgressPolicy to match, both `egress_from` and `egress_to` stanzas must be matched. If an EgressPolicy matches a request, the request is allowed to span the ServicePerimeter boundary. For example, an EgressPolicy can be used to allow VMs on networks within the ServicePerimeter to access a defined set of projects outside the perimeter in certain contexts (e.g. to read data from a Cloud Storage bucket or query against a BigQuery dataset). EgressPolicies are concerned with the *resources* that a request relates as well as the API services and API actions being used. They do not related to the direction of data movement. More detailed documentation for this concept can be found in the descriptions of EgressFrom and EgressTo.
"egressFrom": { # Defines the conditions under which an EgressPolicy matches a request. Conditions based on information about the source of the request. Note that if the destination of the request is also protected by a ServicePerimeter, then that ServicePerimeter must have an IngressPolicy which allows access in order for this request to succeed. # Defines conditions on the source of a request causing this EgressPolicy to apply.
- "identities": [ # A list of identities that are allowed access through [EgressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, `principal`, and `principalSet` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
+ "identities": [ # A list of identities that are allowed access through [EgressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. For third-party identity, only single identities are supported and other identity types are not supported. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, and `principal` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
"A String",
],
"identityType": "A String", # Specifies the type of identities that are allowed access to outside the perimeter. If left unspecified, then members of `identities` field will be allowed access.
@@ -207,7 +207,7 @@ Method Details
"ingressPolicies": [ # List of IngressPolicies to apply to the perimeter. A perimeter may have multiple IngressPolicies, each of which is evaluated separately. Access is granted if any Ingress Policy grants it. Must be empty for a perimeter bridge.
{ # Policy for ingress into ServicePerimeter. IngressPolicies match requests based on `ingress_from` and `ingress_to` stanzas. For an ingress policy to match, both the `ingress_from` and `ingress_to` stanzas must be matched. If an IngressPolicy matches a request, the request is allowed through the perimeter boundary from outside the perimeter. For example, access from the internet can be allowed either based on an AccessLevel or, for traffic hosted on Google Cloud, the project of the source network. For access from private networks, using the project of the hosting network is required. Individual ingress policies can be limited by restricting which services and/or actions they match using the `ingress_to` field.
"ingressFrom": { # Defines the conditions under which an IngressPolicy matches a request. Conditions are based on information about the source of the request. The request must satisfy what is defined in `sources` AND identity related fields in order to match. # Defines the conditions on the source of a request causing this IngressPolicy to apply.
- "identities": [ # A list of identities that are allowed access through [IngressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, `principal`, and `principalSet` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
+ "identities": [ # A list of identities that are allowed access through [IngressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. For third-party identity, only single identities are supported and other identity types are not supported. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, and `principal` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
"A String",
],
"identityType": "A String", # Specifies the type of identities that are allowed access from outside the perimeter. If left unspecified, then members of `identities` field will be allowed access.
@@ -256,7 +256,7 @@ Method Details
"egressPolicies": [ # List of EgressPolicies to apply to the perimeter. A perimeter may have multiple EgressPolicies, each of which is evaluated separately. Access is granted if any EgressPolicy grants it. Must be empty for a perimeter bridge.
    { # Policy for egress from perimeter. EgressPolicies match requests based on `egress_from` and `egress_to` stanzas. For an EgressPolicy to match, both `egress_from` and `egress_to` stanzas must be matched. If an EgressPolicy matches a request, the request is allowed to span the ServicePerimeter boundary. For example, an EgressPolicy can be used to allow VMs on networks within the ServicePerimeter to access a defined set of projects outside the perimeter in certain contexts (e.g. to read data from a Cloud Storage bucket or query against a BigQuery dataset). EgressPolicies are concerned with the *resources* that a request relates to, as well as the API services and API actions being used. They do not relate to the direction of data movement. More detailed documentation for this concept can be found in the descriptions of EgressFrom and EgressTo.
      "egressFrom": { # Defines the conditions under which an EgressPolicy matches a request. Conditions are based on information about the source of the request. Note that if the destination of the request is also protected by a ServicePerimeter, then that ServicePerimeter must have an IngressPolicy which allows access in order for this request to succeed. # Defines conditions on the source of a request causing this EgressPolicy to apply.
- "identities": [ # A list of identities that are allowed access through [EgressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, `principal`, and `principalSet` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
+ "identities": [ # A list of identities that are allowed access through [EgressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. For third-party identity, only single identities are supported and other identity types are not supported. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, and `principal` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
"A String",
],
"identityType": "A String", # Specifies the type of identities that are allowed access to outside the perimeter. If left unspecified, then members of `identities` field will be allowed access.
@@ -291,7 +291,7 @@ Method Details
"ingressPolicies": [ # List of IngressPolicies to apply to the perimeter. A perimeter may have multiple IngressPolicies, each of which is evaluated separately. Access is granted if any Ingress Policy grants it. Must be empty for a perimeter bridge.
{ # Policy for ingress into ServicePerimeter. IngressPolicies match requests based on `ingress_from` and `ingress_to` stanzas. For an ingress policy to match, both the `ingress_from` and `ingress_to` stanzas must be matched. If an IngressPolicy matches a request, the request is allowed through the perimeter boundary from outside the perimeter. For example, access from the internet can be allowed either based on an AccessLevel or, for traffic hosted on Google Cloud, the project of the source network. For access from private networks, using the project of the hosting network is required. Individual ingress policies can be limited by restricting which services and/or actions they match using the `ingress_to` field.
"ingressFrom": { # Defines the conditions under which an IngressPolicy matches a request. Conditions are based on information about the source of the request. The request must satisfy what is defined in `sources` AND identity related fields in order to match. # Defines the conditions on the source of a request causing this IngressPolicy to apply.
- "identities": [ # A list of identities that are allowed access through [IngressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, `principal`, and `principalSet` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
+ "identities": [ # A list of identities that are allowed access through [IngressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. For third-party identity, only single identities are supported and other identity types are not supported. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, and `principal` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
"A String",
],
"identityType": "A String", # Specifies the type of identities that are allowed access from outside the perimeter. If left unspecified, then members of `identities` field will be allowed access.
@@ -426,7 +426,7 @@ Method Details
"egressPolicies": [ # List of EgressPolicies to apply to the perimeter. A perimeter may have multiple EgressPolicies, each of which is evaluated separately. Access is granted if any EgressPolicy grants it. Must be empty for a perimeter bridge.
    { # Policy for egress from perimeter. EgressPolicies match requests based on `egress_from` and `egress_to` stanzas. For an EgressPolicy to match, both `egress_from` and `egress_to` stanzas must be matched. If an EgressPolicy matches a request, the request is allowed to span the ServicePerimeter boundary. For example, an EgressPolicy can be used to allow VMs on networks within the ServicePerimeter to access a defined set of projects outside the perimeter in certain contexts (e.g. to read data from a Cloud Storage bucket or query against a BigQuery dataset). EgressPolicies are concerned with the *resources* that a request relates to, as well as the API services and API actions being used. They do not relate to the direction of data movement. More detailed documentation for this concept can be found in the descriptions of EgressFrom and EgressTo.
      "egressFrom": { # Defines the conditions under which an EgressPolicy matches a request. Conditions are based on information about the source of the request. Note that if the destination of the request is also protected by a ServicePerimeter, then that ServicePerimeter must have an IngressPolicy which allows access in order for this request to succeed. # Defines conditions on the source of a request causing this EgressPolicy to apply.
- "identities": [ # A list of identities that are allowed access through [EgressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, `principal`, and `principalSet` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
+ "identities": [ # A list of identities that are allowed access through [EgressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. For third-party identity, only single identities are supported and other identity types are not supported. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, and `principal` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
"A String",
],
"identityType": "A String", # Specifies the type of identities that are allowed access to outside the perimeter. If left unspecified, then members of `identities` field will be allowed access.
@@ -461,7 +461,7 @@ Method Details
"ingressPolicies": [ # List of IngressPolicies to apply to the perimeter. A perimeter may have multiple IngressPolicies, each of which is evaluated separately. Access is granted if any Ingress Policy grants it. Must be empty for a perimeter bridge.
{ # Policy for ingress into ServicePerimeter. IngressPolicies match requests based on `ingress_from` and `ingress_to` stanzas. For an ingress policy to match, both the `ingress_from` and `ingress_to` stanzas must be matched. If an IngressPolicy matches a request, the request is allowed through the perimeter boundary from outside the perimeter. For example, access from the internet can be allowed either based on an AccessLevel or, for traffic hosted on Google Cloud, the project of the source network. For access from private networks, using the project of the hosting network is required. Individual ingress policies can be limited by restricting which services and/or actions they match using the `ingress_to` field.
"ingressFrom": { # Defines the conditions under which an IngressPolicy matches a request. Conditions are based on information about the source of the request. The request must satisfy what is defined in `sources` AND identity related fields in order to match. # Defines the conditions on the source of a request causing this IngressPolicy to apply.
- "identities": [ # A list of identities that are allowed access through [IngressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, `principal`, and `principalSet` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
+ "identities": [ # A list of identities that are allowed access through [IngressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. For third-party identity, only single identities are supported and other identity types are not supported. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, and `principal` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
"A String",
],
"identityType": "A String", # Specifies the type of identities that are allowed access from outside the perimeter. If left unspecified, then members of `identities` field will be allowed access.
@@ -510,7 +510,7 @@ Method Details
"egressPolicies": [ # List of EgressPolicies to apply to the perimeter. A perimeter may have multiple EgressPolicies, each of which is evaluated separately. Access is granted if any EgressPolicy grants it. Must be empty for a perimeter bridge.
    { # Policy for egress from perimeter. EgressPolicies match requests based on `egress_from` and `egress_to` stanzas. For an EgressPolicy to match, both `egress_from` and `egress_to` stanzas must be matched. If an EgressPolicy matches a request, the request is allowed to span the ServicePerimeter boundary. For example, an EgressPolicy can be used to allow VMs on networks within the ServicePerimeter to access a defined set of projects outside the perimeter in certain contexts (e.g. to read data from a Cloud Storage bucket or query against a BigQuery dataset). EgressPolicies are concerned with the *resources* that a request relates to, as well as the API services and API actions being used. They do not relate to the direction of data movement. More detailed documentation for this concept can be found in the descriptions of EgressFrom and EgressTo.
      "egressFrom": { # Defines the conditions under which an EgressPolicy matches a request. Conditions are based on information about the source of the request. Note that if the destination of the request is also protected by a ServicePerimeter, then that ServicePerimeter must have an IngressPolicy which allows access in order for this request to succeed. # Defines conditions on the source of a request causing this EgressPolicy to apply.
- "identities": [ # A list of identities that are allowed access through [EgressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, `principal`, and `principalSet` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
+ "identities": [ # A list of identities that are allowed access through [EgressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. For third-party identity, only single identities are supported and other identity types are not supported. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, and `principal` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
"A String",
],
"identityType": "A String", # Specifies the type of identities that are allowed access to outside the perimeter. If left unspecified, then members of `identities` field will be allowed access.
@@ -545,7 +545,7 @@ Method Details
"ingressPolicies": [ # List of IngressPolicies to apply to the perimeter. A perimeter may have multiple IngressPolicies, each of which is evaluated separately. Access is granted if any Ingress Policy grants it. Must be empty for a perimeter bridge.
{ # Policy for ingress into ServicePerimeter. IngressPolicies match requests based on `ingress_from` and `ingress_to` stanzas. For an ingress policy to match, both the `ingress_from` and `ingress_to` stanzas must be matched. If an IngressPolicy matches a request, the request is allowed through the perimeter boundary from outside the perimeter. For example, access from the internet can be allowed either based on an AccessLevel or, for traffic hosted on Google Cloud, the project of the source network. For access from private networks, using the project of the hosting network is required. Individual ingress policies can be limited by restricting which services and/or actions they match using the `ingress_to` field.
"ingressFrom": { # Defines the conditions under which an IngressPolicy matches a request. Conditions are based on information about the source of the request. The request must satisfy what is defined in `sources` AND identity related fields in order to match. # Defines the conditions on the source of a request causing this IngressPolicy to apply.
- "identities": [ # A list of identities that are allowed access through [IngressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, `principal`, and `principalSet` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
+ "identities": [ # A list of identities that are allowed access through [IngressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. For third-party identity, only single identities are supported and other identity types are not supported. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, and `principal` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
"A String",
],
"identityType": "A String", # Specifies the type of identities that are allowed access from outside the perimeter. If left unspecified, then members of `identities` field will be allowed access.
@@ -622,7 +622,7 @@ Method Details
"egressPolicies": [ # List of EgressPolicies to apply to the perimeter. A perimeter may have multiple EgressPolicies, each of which is evaluated separately. Access is granted if any EgressPolicy grants it. Must be empty for a perimeter bridge.
    { # Policy for egress from perimeter. EgressPolicies match requests based on `egress_from` and `egress_to` stanzas. For an EgressPolicy to match, both `egress_from` and `egress_to` stanzas must be matched. If an EgressPolicy matches a request, the request is allowed to span the ServicePerimeter boundary. For example, an EgressPolicy can be used to allow VMs on networks within the ServicePerimeter to access a defined set of projects outside the perimeter in certain contexts (e.g. to read data from a Cloud Storage bucket or query against a BigQuery dataset). EgressPolicies are concerned with the *resources* that a request relates to, as well as the API services and API actions being used. They do not relate to the direction of data movement. More detailed documentation for this concept can be found in the descriptions of EgressFrom and EgressTo.
      "egressFrom": { # Defines the conditions under which an EgressPolicy matches a request. Conditions are based on information about the source of the request. Note that if the destination of the request is also protected by a ServicePerimeter, then that ServicePerimeter must have an IngressPolicy which allows access in order for this request to succeed. # Defines conditions on the source of a request causing this EgressPolicy to apply.
- "identities": [ # A list of identities that are allowed access through [EgressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, `principal`, and `principalSet` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
+ "identities": [ # A list of identities that are allowed access through [EgressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. For third-party identity, only single identities are supported and other identity types are not supported. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, and `principal` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
"A String",
],
"identityType": "A String", # Specifies the type of identities that are allowed access to outside the perimeter. If left unspecified, then members of `identities` field will be allowed access.
@@ -657,7 +657,7 @@ Method Details
"ingressPolicies": [ # List of IngressPolicies to apply to the perimeter. A perimeter may have multiple IngressPolicies, each of which is evaluated separately. Access is granted if any Ingress Policy grants it. Must be empty for a perimeter bridge.
{ # Policy for ingress into ServicePerimeter. IngressPolicies match requests based on `ingress_from` and `ingress_to` stanzas. For an ingress policy to match, both the `ingress_from` and `ingress_to` stanzas must be matched. If an IngressPolicy matches a request, the request is allowed through the perimeter boundary from outside the perimeter. For example, access from the internet can be allowed either based on an AccessLevel or, for traffic hosted on Google Cloud, the project of the source network. For access from private networks, using the project of the hosting network is required. Individual ingress policies can be limited by restricting which services and/or actions they match using the `ingress_to` field.
"ingressFrom": { # Defines the conditions under which an IngressPolicy matches a request. Conditions are based on information about the source of the request. The request must satisfy what is defined in `sources` AND identity related fields in order to match. # Defines the conditions on the source of a request causing this IngressPolicy to apply.
- "identities": [ # A list of identities that are allowed access through [IngressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, `principal`, and `principalSet` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
+ "identities": [ # A list of identities that are allowed access through [IngressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. For third-party identity, only single identities are supported and other identity types are not supported. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, and `principal` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
"A String",
],
"identityType": "A String", # Specifies the type of identities that are allowed access from outside the perimeter. If left unspecified, then members of `identities` field will be allowed access.
@@ -706,7 +706,7 @@ Method Details
"egressPolicies": [ # List of EgressPolicies to apply to the perimeter. A perimeter may have multiple EgressPolicies, each of which is evaluated separately. Access is granted if any EgressPolicy grants it. Must be empty for a perimeter bridge.
    { # Policy for egress from perimeter. EgressPolicies match requests based on `egress_from` and `egress_to` stanzas. For an EgressPolicy to match, both `egress_from` and `egress_to` stanzas must be matched. If an EgressPolicy matches a request, the request is allowed to span the ServicePerimeter boundary. For example, an EgressPolicy can be used to allow VMs on networks within the ServicePerimeter to access a defined set of projects outside the perimeter in certain contexts (e.g. to read data from a Cloud Storage bucket or query against a BigQuery dataset). EgressPolicies are concerned with the *resources* that a request relates to, as well as the API services and API actions being used. They do not relate to the direction of data movement. More detailed documentation for this concept can be found in the descriptions of EgressFrom and EgressTo.
      "egressFrom": { # Defines the conditions under which an EgressPolicy matches a request. Conditions are based on information about the source of the request. Note that if the destination of the request is also protected by a ServicePerimeter, then that ServicePerimeter must have an IngressPolicy which allows access in order for this request to succeed. # Defines conditions on the source of a request causing this EgressPolicy to apply.
- "identities": [ # A list of identities that are allowed access through [EgressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, `principal`, and `principalSet` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
+ "identities": [ # A list of identities that are allowed access through [EgressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. For third-party identity, only single identities are supported and other identity types are not supported. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, and `principal` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
"A String",
],
"identityType": "A String", # Specifies the type of identities that are allowed access to outside the perimeter. If left unspecified, then members of `identities` field will be allowed access.
@@ -741,7 +741,7 @@ Method Details
"ingressPolicies": [ # List of IngressPolicies to apply to the perimeter. A perimeter may have multiple IngressPolicies, each of which is evaluated separately. Access is granted if any Ingress Policy grants it. Must be empty for a perimeter bridge.
{ # Policy for ingress into ServicePerimeter. IngressPolicies match requests based on `ingress_from` and `ingress_to` stanzas. For an ingress policy to match, both the `ingress_from` and `ingress_to` stanzas must be matched. If an IngressPolicy matches a request, the request is allowed through the perimeter boundary from outside the perimeter. For example, access from the internet can be allowed either based on an AccessLevel or, for traffic hosted on Google Cloud, the project of the source network. For access from private networks, using the project of the hosting network is required. Individual ingress policies can be limited by restricting which services and/or actions they match using the `ingress_to` field.
"ingressFrom": { # Defines the conditions under which an IngressPolicy matches a request. Conditions are based on information about the source of the request. The request must satisfy what is defined in `sources` AND identity related fields in order to match. # Defines the conditions on the source of a request causing this IngressPolicy to apply.
- "identities": [ # A list of identities that are allowed access through [IngressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, `principal`, and `principalSet` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
+ "identities": [ # A list of identities that are allowed access through [IngressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. For third-party identity, only single identities are supported and other identity types are not supported. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, and `principal` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
"A String",
],
"identityType": "A String", # Specifies the type of identities that are allowed access from outside the perimeter. If left unspecified, then members of `identities` field will be allowed access.
@@ -824,7 +824,7 @@ Method Details
"egressPolicies": [ # List of EgressPolicies to apply to the perimeter. A perimeter may have multiple EgressPolicies, each of which is evaluated separately. Access is granted if any EgressPolicy grants it. Must be empty for a perimeter bridge.
    { # Policy for egress from perimeter. EgressPolicies match requests based on `egress_from` and `egress_to` stanzas. For an EgressPolicy to match, both `egress_from` and `egress_to` stanzas must be matched. If an EgressPolicy matches a request, the request is allowed to span the ServicePerimeter boundary. For example, an EgressPolicy can be used to allow VMs on networks within the ServicePerimeter to access a defined set of projects outside the perimeter in certain contexts (e.g. to read data from a Cloud Storage bucket or query against a BigQuery dataset). EgressPolicies are concerned with the *resources* that a request relates to, as well as the API services and API actions being used. They do not relate to the direction of data movement. More detailed documentation for this concept can be found in the descriptions of EgressFrom and EgressTo.
      "egressFrom": { # Defines the conditions under which an EgressPolicy matches a request. Conditions are based on information about the source of the request. Note that if the destination of the request is also protected by a ServicePerimeter, then that ServicePerimeter must have an IngressPolicy which allows access in order for this request to succeed. # Defines conditions on the source of a request causing this EgressPolicy to apply.
- "identities": [ # A list of identities that are allowed access through [EgressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, `principal`, and `principalSet` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
+ "identities": [ # A list of identities that are allowed access through [EgressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. For third-party identity, only single identities are supported and other identity types are not supported. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, and `principal` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
"A String",
],
"identityType": "A String", # Specifies the type of identities that are allowed access to outside the perimeter. If left unspecified, then members of `identities` field will be allowed access.
@@ -859,7 +859,7 @@ Method Details
"ingressPolicies": [ # List of IngressPolicies to apply to the perimeter. A perimeter may have multiple IngressPolicies, each of which is evaluated separately. Access is granted if any Ingress Policy grants it. Must be empty for a perimeter bridge.
{ # Policy for ingress into ServicePerimeter. IngressPolicies match requests based on `ingress_from` and `ingress_to` stanzas. For an ingress policy to match, both the `ingress_from` and `ingress_to` stanzas must be matched. If an IngressPolicy matches a request, the request is allowed through the perimeter boundary from outside the perimeter. For example, access from the internet can be allowed either based on an AccessLevel or, for traffic hosted on Google Cloud, the project of the source network. For access from private networks, using the project of the hosting network is required. Individual ingress policies can be limited by restricting which services and/or actions they match using the `ingress_to` field.
"ingressFrom": { # Defines the conditions under which an IngressPolicy matches a request. Conditions are based on information about the source of the request. The request must satisfy what is defined in `sources` AND identity related fields in order to match. # Defines the conditions on the source of a request causing this IngressPolicy to apply.
- "identities": [ # A list of identities that are allowed access through [IngressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, `principal`, and `principalSet` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
+ "identities": [ # A list of identities that are allowed access through [IngressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. For third-party identity, only single identities are supported and other identity types are not supported. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, and `principal` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
"A String",
],
"identityType": "A String", # Specifies the type of identities that are allowed access from outside the perimeter. If left unspecified, then members of `identities` field will be allowed access.
@@ -908,7 +908,7 @@ Method Details
"egressPolicies": [ # List of EgressPolicies to apply to the perimeter. A perimeter may have multiple EgressPolicies, each of which is evaluated separately. Access is granted if any EgressPolicy grants it. Must be empty for a perimeter bridge.
    { # Policy for egress from perimeter. EgressPolicies match requests based on `egress_from` and `egress_to` stanzas. For an EgressPolicy to match, both `egress_from` and `egress_to` stanzas must be matched. If an EgressPolicy matches a request, the request is allowed to span the ServicePerimeter boundary. For example, an EgressPolicy can be used to allow VMs on networks within the ServicePerimeter to access a defined set of projects outside the perimeter in certain contexts (e.g. to read data from a Cloud Storage bucket or query against a BigQuery dataset). EgressPolicies are concerned with the *resources* that a request relates to, as well as the API services and API actions being used. They do not relate to the direction of data movement. More detailed documentation for this concept can be found in the descriptions of EgressFrom and EgressTo.
      "egressFrom": { # Defines the conditions under which an EgressPolicy matches a request. Conditions are based on information about the source of the request. Note that if the destination of the request is also protected by a ServicePerimeter, then that ServicePerimeter must have an IngressPolicy which allows access in order for this request to succeed. # Defines conditions on the source of a request causing this EgressPolicy to apply.
- "identities": [ # A list of identities that are allowed access through [EgressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, `principal`, and `principalSet` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
+ "identities": [ # A list of identities that are allowed access through [EgressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. For third-party identity, only single identities are supported and other identity types are not supported. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, and `principal` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
"A String",
],
"identityType": "A String", # Specifies the type of identities that are allowed access to outside the perimeter. If left unspecified, then members of `identities` field will be allowed access.
@@ -943,7 +943,7 @@ Method Details
"ingressPolicies": [ # List of IngressPolicies to apply to the perimeter. A perimeter may have multiple IngressPolicies, each of which is evaluated separately. Access is granted if any Ingress Policy grants it. Must be empty for a perimeter bridge.
{ # Policy for ingress into ServicePerimeter. IngressPolicies match requests based on `ingress_from` and `ingress_to` stanzas. For an ingress policy to match, both the `ingress_from` and `ingress_to` stanzas must be matched. If an IngressPolicy matches a request, the request is allowed through the perimeter boundary from outside the perimeter. For example, access from the internet can be allowed either based on an AccessLevel or, for traffic hosted on Google Cloud, the project of the source network. For access from private networks, using the project of the hosting network is required. Individual ingress policies can be limited by restricting which services and/or actions they match using the `ingress_to` field.
"ingressFrom": { # Defines the conditions under which an IngressPolicy matches a request. Conditions are based on information about the source of the request. The request must satisfy what is defined in `sources` AND identity related fields in order to match. # Defines the conditions on the source of a request causing this IngressPolicy to apply.
- "identities": [ # A list of identities that are allowed access through [IngressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, `principal`, and `principalSet` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
+ "identities": [ # A list of identities that are allowed access through [IngressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. For third-party identity, only single identities are supported and other identity types are not supported. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, and `principal` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
"A String",
],
"identityType": "A String", # Specifies the type of identities that are allowed access from outside the perimeter. If left unspecified, then members of `identities` field will be allowed access.
@@ -1042,7 +1042,7 @@ Method Details
"egressPolicies": [ # List of EgressPolicies to apply to the perimeter. A perimeter may have multiple EgressPolicies, each of which is evaluated separately. Access is granted if any EgressPolicy grants it. Must be empty for a perimeter bridge.
{ # Policy for egress from perimeter. EgressPolicies match requests based on `egress_from` and `egress_to` stanzas. For an EgressPolicy to match, both `egress_from` and `egress_to` stanzas must be matched. If an EgressPolicy matches a request, the request is allowed to span the ServicePerimeter boundary. For example, an EgressPolicy can be used to allow VMs on networks within the ServicePerimeter to access a defined set of projects outside the perimeter in certain contexts (e.g. to read data from a Cloud Storage bucket or query against a BigQuery dataset). EgressPolicies are concerned with the *resources* that a request relates as well as the API services and API actions being used. They do not related to the direction of data movement. More detailed documentation for this concept can be found in the descriptions of EgressFrom and EgressTo.
"egressFrom": { # Defines the conditions under which an EgressPolicy matches a request. Conditions based on information about the source of the request. Note that if the destination of the request is also protected by a ServicePerimeter, then that ServicePerimeter must have an IngressPolicy which allows access in order for this request to succeed. # Defines conditions on the source of a request causing this EgressPolicy to apply.
- "identities": [ # A list of identities that are allowed access through [EgressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, `principal`, and `principalSet` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
+ "identities": [ # A list of identities that are allowed access through [EgressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. For third-party identity, only single identities are supported and other identity types are not supported. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, and `principal` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
"A String",
],
"identityType": "A String", # Specifies the type of identities that are allowed access to outside the perimeter. If left unspecified, then members of `identities` field will be allowed access.
@@ -1077,7 +1077,7 @@ Method Details
"ingressPolicies": [ # List of IngressPolicies to apply to the perimeter. A perimeter may have multiple IngressPolicies, each of which is evaluated separately. Access is granted if any Ingress Policy grants it. Must be empty for a perimeter bridge.
{ # Policy for ingress into ServicePerimeter. IngressPolicies match requests based on `ingress_from` and `ingress_to` stanzas. For an ingress policy to match, both the `ingress_from` and `ingress_to` stanzas must be matched. If an IngressPolicy matches a request, the request is allowed through the perimeter boundary from outside the perimeter. For example, access from the internet can be allowed either based on an AccessLevel or, for traffic hosted on Google Cloud, the project of the source network. For access from private networks, using the project of the hosting network is required. Individual ingress policies can be limited by restricting which services and/or actions they match using the `ingress_to` field.
"ingressFrom": { # Defines the conditions under which an IngressPolicy matches a request. Conditions are based on information about the source of the request. The request must satisfy what is defined in `sources` AND identity related fields in order to match. # Defines the conditions on the source of a request causing this IngressPolicy to apply.
- "identities": [ # A list of identities that are allowed access through [IngressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, `principal`, and `principalSet` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
+ "identities": [ # A list of identities that are allowed access through [IngressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. For third-party identity, only single identities are supported and other identity types are not supported. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, and `principal` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
"A String",
],
"identityType": "A String", # Specifies the type of identities that are allowed access from outside the perimeter. If left unspecified, then members of `identities` field will be allowed access.
@@ -1126,7 +1126,7 @@ Method Details
"egressPolicies": [ # List of EgressPolicies to apply to the perimeter. A perimeter may have multiple EgressPolicies, each of which is evaluated separately. Access is granted if any EgressPolicy grants it. Must be empty for a perimeter bridge.
{ # Policy for egress from perimeter. EgressPolicies match requests based on `egress_from` and `egress_to` stanzas. For an EgressPolicy to match, both `egress_from` and `egress_to` stanzas must be matched. If an EgressPolicy matches a request, the request is allowed to span the ServicePerimeter boundary. For example, an EgressPolicy can be used to allow VMs on networks within the ServicePerimeter to access a defined set of projects outside the perimeter in certain contexts (e.g. to read data from a Cloud Storage bucket or query against a BigQuery dataset). EgressPolicies are concerned with the *resources* that a request relates as well as the API services and API actions being used. They do not related to the direction of data movement. More detailed documentation for this concept can be found in the descriptions of EgressFrom and EgressTo.
"egressFrom": { # Defines the conditions under which an EgressPolicy matches a request. Conditions based on information about the source of the request. Note that if the destination of the request is also protected by a ServicePerimeter, then that ServicePerimeter must have an IngressPolicy which allows access in order for this request to succeed. # Defines conditions on the source of a request causing this EgressPolicy to apply.
- "identities": [ # A list of identities that are allowed access through [EgressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, `principal`, and `principalSet` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
+ "identities": [ # A list of identities that are allowed access through [EgressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. For third-party identity, only single identities are supported and other identity types are not supported. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, and `principal` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
"A String",
],
"identityType": "A String", # Specifies the type of identities that are allowed access to outside the perimeter. If left unspecified, then members of `identities` field will be allowed access.
@@ -1161,7 +1161,7 @@ Method Details
"ingressPolicies": [ # List of IngressPolicies to apply to the perimeter. A perimeter may have multiple IngressPolicies, each of which is evaluated separately. Access is granted if any Ingress Policy grants it. Must be empty for a perimeter bridge.
{ # Policy for ingress into ServicePerimeter. IngressPolicies match requests based on `ingress_from` and `ingress_to` stanzas. For an ingress policy to match, both the `ingress_from` and `ingress_to` stanzas must be matched. If an IngressPolicy matches a request, the request is allowed through the perimeter boundary from outside the perimeter. For example, access from the internet can be allowed either based on an AccessLevel or, for traffic hosted on Google Cloud, the project of the source network. For access from private networks, using the project of the hosting network is required. Individual ingress policies can be limited by restricting which services and/or actions they match using the `ingress_to` field.
"ingressFrom": { # Defines the conditions under which an IngressPolicy matches a request. Conditions are based on information about the source of the request. The request must satisfy what is defined in `sources` AND identity related fields in order to match. # Defines the conditions on the source of a request causing this IngressPolicy to apply.
- "identities": [ # A list of identities that are allowed access through [IngressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, `principal`, and `principalSet` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
+ "identities": [ # A list of identities that are allowed access through [IngressPolicy]. Identities can be an individual user, service account, Google group, or third-party identity. For third-party identity, only single identities are supported and other identity types are not supported. The `v1` identities that have the prefix `user`, `group`, `serviceAccount`, and `principal` in https://cloud.google.com/iam/docs/principal-identifiers#v1 are supported.
"A String",
],
"identityType": "A String", # Specifies the type of identities that are allowed access from outside the perimeter. If left unspecified, then members of `identities` field will be allowed access.
diff --git a/docs/dyn/aiplatform_v1.projects.locations.endpoints.html b/docs/dyn/aiplatform_v1.projects.locations.endpoints.html
index 9cdfc7cad32..b9f9562066a 100644
--- a/docs/dyn/aiplatform_v1.projects.locations.endpoints.html
+++ b/docs/dyn/aiplatform_v1.projects.locations.endpoints.html
@@ -1276,6 +1276,7 @@ Method Details
"modelName": "A String", # The model name to use. Only the public LLM models are accepted. e.g. gemini-1.5-pro-001.
},
},
+ "seed": 42, # Optional. Seed.
"stopSequences": [ # Optional. Stop sequences.
"A String",
],
@@ -1389,6 +1390,7 @@ Method Details
{ # Response message for [PredictionService.GenerateContent].
"candidates": [ # Output only. Generated candidates.
{ # A response candidate generated from the model.
+ "avgLogprobs": 3.14, # Output only. Average log probability score of the candidate.
"citationMetadata": { # A collection of source attributions for a piece of content. # Output only. Source attribution of the generated content.
"citations": [ # Output only. List of citations.
{ # Source attributions for content.
@@ -1505,7 +1507,7 @@ Method Details
},
"usageMetadata": { # Usage metadata about response(s). # Usage metadata about the response(s).
"candidatesTokenCount": 42, # Number of tokens in the response(s).
- "promptTokenCount": 42, # Number of tokens in the request.
+ "promptTokenCount": 42, # Number of tokens in the request. When `cached_content` is set, this is still the total effective prompt size meaning this includes the number of tokens in the cached content.
"totalTokenCount": 42,
},
}
@@ -2910,6 +2912,7 @@ Method Details
"modelName": "A String", # The model name to use. Only the public LLM models are accepted. e.g. gemini-1.5-pro-001.
},
},
+ "seed": 42, # Optional. Seed.
"stopSequences": [ # Optional. Stop sequences.
"A String",
],
@@ -3023,6 +3026,7 @@ Method Details
{ # Response message for [PredictionService.GenerateContent].
"candidates": [ # Output only. Generated candidates.
{ # A response candidate generated from the model.
+ "avgLogprobs": 3.14, # Output only. Average log probability score of the candidate.
"citationMetadata": { # A collection of source attributions for a piece of content. # Output only. Source attribution of the generated content.
"citations": [ # Output only. List of citations.
{ # Source attributions for content.
@@ -3139,7 +3143,7 @@ Method Details
},
"usageMetadata": { # Usage metadata about response(s). # Usage metadata about the response(s).
"candidatesTokenCount": 42, # Number of tokens in the response(s).
- "promptTokenCount": 42, # Number of tokens in the request.
+ "promptTokenCount": 42, # Number of tokens in the request. When `cached_content` is set, this is still the total effective prompt size meaning this includes the number of tokens in the cached content.
"totalTokenCount": 42,
},
}
diff --git a/docs/dyn/aiplatform_v1.projects.locations.featureGroups.html b/docs/dyn/aiplatform_v1.projects.locations.featureGroups.html
index 7c0ce32a5c9..1745b5288ef 100644
--- a/docs/dyn/aiplatform_v1.projects.locations.featureGroups.html
+++ b/docs/dyn/aiplatform_v1.projects.locations.featureGroups.html
@@ -128,6 +128,9 @@ Method Details
"entityIdColumns": [ # Optional. Columns to construct entity_id / row keys. If not provided defaults to `entity_id`.
"A String",
],
+ "timeSeries": { # Optional. If the source is a time-series source, this can be set to control how downstream sources (ex: FeatureOnlineStore.FeatureView) will treat time series sources. If not set, will treat the source as a time-series source with feature_timestamp as timestamp column and no scan boundary.
+ "timestampColumn": "A String", # Optional. Column hosting timestamp values for a time-series source. Will be used to determine the latest featureValues for each entity. Optional. If not provided, a feature_timestamp column of type TIMESTAMP will be used.
+ },
},
"createTime": "A String", # Output only. Timestamp when this FeatureGroup was created.
"description": "A String", # Optional. Description of the FeatureGroup.
@@ -227,6 +230,9 @@ Method Details
"entityIdColumns": [ # Optional. Columns to construct entity_id / row keys. If not provided defaults to `entity_id`.
"A String",
],
+ "timeSeries": { # Optional. If the source is a time-series source, this can be set to control how downstream sources (ex: FeatureOnlineStore.FeatureView) will treat time series sources. If not set, will treat the source as a time-series source with feature_timestamp as timestamp column and no scan boundary.
+ "timestampColumn": "A String", # Optional. Column hosting timestamp values for a time-series source. Will be used to determine the latest featureValues for each entity. Optional. If not provided, a feature_timestamp column of type TIMESTAMP will be used.
+ },
},
"createTime": "A String", # Output only. Timestamp when this FeatureGroup was created.
"description": "A String", # Optional. Description of the FeatureGroup.
@@ -267,6 +273,9 @@ Method Details
"entityIdColumns": [ # Optional. Columns to construct entity_id / row keys. If not provided defaults to `entity_id`.
"A String",
],
+ "timeSeries": { # Optional. If the source is a time-series source, this can be set to control how downstream sources (ex: FeatureOnlineStore.FeatureView) will treat time series sources. If not set, will treat the source as a time-series source with feature_timestamp as timestamp column and no scan boundary.
+ "timestampColumn": "A String", # Optional. Column hosting timestamp values for a time-series source. Will be used to determine the latest featureValues for each entity. Optional. If not provided, a feature_timestamp column of type TIMESTAMP will be used.
+ },
},
"createTime": "A String", # Output only. Timestamp when this FeatureGroup was created.
"description": "A String", # Optional. Description of the FeatureGroup.
@@ -313,6 +322,9 @@ Method Details
"entityIdColumns": [ # Optional. Columns to construct entity_id / row keys. If not provided defaults to `entity_id`.
"A String",
],
+ "timeSeries": { # Optional. If the source is a time-series source, this can be set to control how downstream sources (ex: FeatureOnlineStore.FeatureView) will treat time series sources. If not set, will treat the source as a time-series source with feature_timestamp as timestamp column and no scan boundary.
+ "timestampColumn": "A String", # Optional. Column hosting timestamp values for a time-series source. Will be used to determine the latest featureValues for each entity. Optional. If not provided, a feature_timestamp column of type TIMESTAMP will be used.
+ },
},
"createTime": "A String", # Output only. Timestamp when this FeatureGroup was created.
"description": "A String", # Optional. Description of the FeatureGroup.
diff --git a/docs/dyn/aiplatform_v1.projects.locations.featureOnlineStores.featureViews.featureViewSyncs.html b/docs/dyn/aiplatform_v1.projects.locations.featureOnlineStores.featureViews.featureViewSyncs.html
index 7f50a0a5ae7..6f4c4e80a2b 100644
--- a/docs/dyn/aiplatform_v1.projects.locations.featureOnlineStores.featureViews.featureViewSyncs.html
+++ b/docs/dyn/aiplatform_v1.projects.locations.featureOnlineStores.featureViews.featureViewSyncs.html
@@ -122,6 +122,8 @@ Method Details
"endTime": "A String", # Optional. Exclusive end of the interval. If specified, a Timestamp matching this interval will have to be before the end.
"startTime": "A String", # Optional. Inclusive start of the interval. If specified, a Timestamp matching this interval will have to be the same or after the start.
},
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"syncSummary": { # Summary from the Sync job. For continuous syncs, the summary is updated periodically. For batch syncs, it gets updated on completion of the sync. # Output only. Summary of the sync job.
"rowSynced": "A String", # Output only. Total number of rows synced.
"totalSlot": "A String", # Output only. BigQuery slot milliseconds consumed for the sync job.
@@ -165,6 +167,8 @@ Method Details
"endTime": "A String", # Optional. Exclusive end of the interval. If specified, a Timestamp matching this interval will have to be before the end.
"startTime": "A String", # Optional. Inclusive start of the interval. If specified, a Timestamp matching this interval will have to be the same or after the start.
},
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"syncSummary": { # Summary from the Sync job. For continuous syncs, the summary is updated periodically. For batch syncs, it gets updated on completion of the sync. # Output only. Summary of the sync job.
"rowSynced": "A String", # Output only. Total number of rows synced.
"totalSlot": "A String", # Output only. BigQuery slot milliseconds consumed for the sync job.
diff --git a/docs/dyn/aiplatform_v1.projects.locations.featureOnlineStores.featureViews.html b/docs/dyn/aiplatform_v1.projects.locations.featureOnlineStores.featureViews.html
index 4f891b3ba45..f748e3e2aed 100644
--- a/docs/dyn/aiplatform_v1.projects.locations.featureOnlineStores.featureViews.html
+++ b/docs/dyn/aiplatform_v1.projects.locations.featureOnlineStores.featureViews.html
@@ -167,6 +167,8 @@ Method Details
"a_key": "A String",
},
"name": "A String", # Identifier. Name of the FeatureView. Format: `projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}`
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"syncConfig": { # Configuration for Sync. Only one option is set. # Configures when data is to be synced/updated for this FeatureView. At the end of the sync the latest featureValues for each entityId of this FeatureView are made ready for online serving.
"cron": "A String", # Cron schedule (https://en.wikipedia.org/wiki/Cron) to launch scheduled runs. To explicitly set a timezone to the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from IANA time zone database. For example, "CRON_TZ=America/New_York 1 * * * *", or "TZ=America/New_York 1 * * * *".
},
@@ -380,6 +382,8 @@ Method Details
"a_key": "A String",
},
"name": "A String", # Identifier. Name of the FeatureView. Format: `projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}`
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"syncConfig": { # Configuration for Sync. Only one option is set. # Configures when data is to be synced/updated for this FeatureView. At the end of the sync the latest featureValues for each entityId of this FeatureView are made ready for online serving.
"cron": "A String", # Cron schedule (https://en.wikipedia.org/wiki/Cron) to launch scheduled runs. To explicitly set a timezone to the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from IANA time zone database. For example, "CRON_TZ=America/New_York 1 * * * *", or "TZ=America/New_York 1 * * * *".
},
@@ -445,6 +449,8 @@ Method Details
"a_key": "A String",
},
"name": "A String", # Identifier. Name of the FeatureView. Format: `projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}`
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"syncConfig": { # Configuration for Sync. Only one option is set. # Configures when data is to be synced/updated for this FeatureView. At the end of the sync the latest featureValues for each entityId of this FeatureView are made ready for online serving.
"cron": "A String", # Cron schedule (https://en.wikipedia.org/wiki/Cron) to launch scheduled runs. To explicitly set a timezone to the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from IANA time zone database. For example, "CRON_TZ=America/New_York 1 * * * *", or "TZ=America/New_York 1 * * * *".
},
@@ -516,6 +522,8 @@ Method Details
"a_key": "A String",
},
"name": "A String", # Identifier. Name of the FeatureView. Format: `projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}`
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"syncConfig": { # Configuration for Sync. Only one option is set. # Configures when data is to be synced/updated for this FeatureView. At the end of the sync the latest featureValues for each entityId of this FeatureView are made ready for online serving.
"cron": "A String", # Cron schedule (https://en.wikipedia.org/wiki/Cron) to launch scheduled runs. To explicitly set a timezone to the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from IANA time zone database. For example, "CRON_TZ=America/New_York 1 * * * *", or "TZ=America/New_York 1 * * * *".
},
diff --git a/docs/dyn/aiplatform_v1.projects.locations.featureOnlineStores.html b/docs/dyn/aiplatform_v1.projects.locations.featureOnlineStores.html
index 6f2a5d50734..f2c3c15ea8d 100644
--- a/docs/dyn/aiplatform_v1.projects.locations.featureOnlineStores.html
+++ b/docs/dyn/aiplatform_v1.projects.locations.featureOnlineStores.html
@@ -150,6 +150,8 @@ Method Details
"name": "A String", # Identifier. Name of the FeatureOnlineStore. Format: `projects/{project}/locations/{location}/featureOnlineStores/{featureOnlineStore}`
"optimized": { # Optimized storage type # Contains settings for the Optimized store that will be created to serve featureValues for all FeatureViews under this FeatureOnlineStore. When choose Optimized storage type, need to set PrivateServiceConnectConfig.enable_private_service_connect to use private endpoint. Otherwise will use public endpoint by default.
},
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"state": "A String", # Output only. State of the featureOnlineStore.
"updateTime": "A String", # Output only. Timestamp when this FeatureOnlineStore was last updated.
}
@@ -264,6 +266,8 @@ Method Details
"name": "A String", # Identifier. Name of the FeatureOnlineStore. Format: `projects/{project}/locations/{location}/featureOnlineStores/{featureOnlineStore}`
"optimized": { # Optimized storage type # Contains settings for the Optimized store that will be created to serve featureValues for all FeatureViews under this FeatureOnlineStore. When choose Optimized storage type, need to set PrivateServiceConnectConfig.enable_private_service_connect to use private endpoint. Otherwise will use public endpoint by default.
},
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"state": "A String", # Output only. State of the featureOnlineStore.
"updateTime": "A String", # Output only. Timestamp when this FeatureOnlineStore was last updated.
}
@@ -319,6 +323,8 @@ Method Details
"name": "A String", # Identifier. Name of the FeatureOnlineStore. Format: `projects/{project}/locations/{location}/featureOnlineStores/{featureOnlineStore}`
"optimized": { # Optimized storage type # Contains settings for the Optimized store that will be created to serve featureValues for all FeatureViews under this FeatureOnlineStore. When choose Optimized storage type, need to set PrivateServiceConnectConfig.enable_private_service_connect to use private endpoint. Otherwise will use public endpoint by default.
},
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"state": "A String", # Output only. State of the featureOnlineStore.
"updateTime": "A String", # Output only. Timestamp when this FeatureOnlineStore was last updated.
},
@@ -380,6 +386,8 @@ Method Details
"name": "A String", # Identifier. Name of the FeatureOnlineStore. Format: `projects/{project}/locations/{location}/featureOnlineStores/{featureOnlineStore}`
"optimized": { # Optimized storage type # Contains settings for the Optimized store that will be created to serve featureValues for all FeatureViews under this FeatureOnlineStore. When choose Optimized storage type, need to set PrivateServiceConnectConfig.enable_private_service_connect to use private endpoint. Otherwise will use public endpoint by default.
},
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"state": "A String", # Output only. State of the featureOnlineStore.
"updateTime": "A String", # Output only. Timestamp when this FeatureOnlineStore was last updated.
}
diff --git a/docs/dyn/aiplatform_v1.projects.locations.featurestores.entityTypes.html b/docs/dyn/aiplatform_v1.projects.locations.featurestores.entityTypes.html
index 25458f5d4ef..c9fc19e5759 100644
--- a/docs/dyn/aiplatform_v1.projects.locations.featurestores.entityTypes.html
+++ b/docs/dyn/aiplatform_v1.projects.locations.featurestores.entityTypes.html
@@ -173,6 +173,8 @@ Method Details
},
"name": "A String", # Immutable. Name of the EntityType. Format: `projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}` The last part entity_type is assigned by the client. The entity_type can be up to 64 characters long and can consist only of ASCII Latin letters A-Z and a-z and underscore(_), and ASCII digits 0-9 starting with a letter. The value will be unique given a featurestore.
"offlineStorageTtlDays": 42, # Optional. Config for data retention policy in offline storage. TTL in days for feature values that will be stored in offline storage. The Feature Store offline storage periodically removes obsolete feature values older than `offline_storage_ttl_days` since the feature generation time. If unset (or explicitly set to 0), default to 4000 days TTL.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this EntityType was most recently updated.
}
@@ -426,6 +428,8 @@ Method Details
},
"name": "A String", # Immutable. Name of the EntityType. Format: `projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}` The last part entity_type is assigned by the client. The entity_type can be up to 64 characters long and can consist only of ASCII Latin letters A-Z and a-z and underscore(_), and ASCII digits 0-9 starting with a letter. The value will be unique given a featurestore.
"offlineStorageTtlDays": 42, # Optional. Config for data retention policy in offline storage. TTL in days for feature values that will be stored in offline storage. The Feature Store offline storage periodically removes obsolete feature values older than `offline_storage_ttl_days` since the feature generation time. If unset (or explicitly set to 0), default to 4000 days TTL.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this EntityType was most recently updated.
}
@@ -582,6 +586,8 @@ Method Details
},
"name": "A String", # Immutable. Name of the EntityType. Format: `projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}` The last part entity_type is assigned by the client. The entity_type can be up to 64 characters long and can consist only of ASCII Latin letters A-Z and a-z and underscore(_), and ASCII digits 0-9 starting with a letter. The value will be unique given a featurestore.
"offlineStorageTtlDays": 42, # Optional. Config for data retention policy in offline storage. TTL in days for feature values that will be stored in offline storage. The Feature Store offline storage periodically removes obsolete feature values older than `offline_storage_ttl_days` since the feature generation time. If unset (or explicitly set to 0), default to 4000 days TTL.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this EntityType was most recently updated.
},
],
@@ -638,6 +644,8 @@ Method Details
},
"name": "A String", # Immutable. Name of the EntityType. Format: `projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}` The last part entity_type is assigned by the client. The entity_type can be up to 64 characters long and can consist only of ASCII Latin letters A-Z and a-z and underscore(_), and ASCII digits 0-9 starting with a letter. The value will be unique given a featurestore.
"offlineStorageTtlDays": 42, # Optional. Config for data retention policy in offline storage. TTL in days for feature values that will be stored in offline storage. The Feature Store offline storage periodically removes obsolete feature values older than `offline_storage_ttl_days` since the feature generation time. If unset (or explicitly set to 0), default to 4000 days TTL.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this EntityType was most recently updated.
}
@@ -676,6 +684,8 @@ Method Details
},
"name": "A String", # Immutable. Name of the EntityType. Format: `projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}` The last part entity_type is assigned by the client. The entity_type can be up to 64 characters long and can consist only of ASCII Latin letters A-Z and a-z and underscore(_), and ASCII digits 0-9 starting with a letter. The value will be unique given a featurestore.
"offlineStorageTtlDays": 42, # Optional. Config for data retention policy in offline storage. TTL in days for feature values that will be stored in offline storage. The Feature Store offline storage periodically removes obsolete feature values older than `offline_storage_ttl_days` since the feature generation time. If unset (or explicitly set to 0), default to 4000 days TTL.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this EntityType was most recently updated.
}
diff --git a/docs/dyn/aiplatform_v1.projects.locations.featurestores.html b/docs/dyn/aiplatform_v1.projects.locations.featurestores.html
index 29ebddfe744..adcb9e73a71 100644
--- a/docs/dyn/aiplatform_v1.projects.locations.featurestores.html
+++ b/docs/dyn/aiplatform_v1.projects.locations.featurestores.html
@@ -247,6 +247,8 @@ Method Details
},
},
"onlineStorageTtlDays": 42, # Optional. TTL in days for feature values that will be stored in online serving storage. The Feature Store online storage periodically removes obsolete feature values older than `online_storage_ttl_days` since the feature generation time. Note that `online_storage_ttl_days` should be less than or equal to `offline_storage_ttl_days` for each EntityType under a featurestore. If not set, default to 4000 days
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"state": "A String", # Output only. State of the featurestore.
"updateTime": "A String", # Output only. Timestamp when this Featurestore was last updated.
}
@@ -350,6 +352,8 @@ Method Details
},
},
"onlineStorageTtlDays": 42, # Optional. TTL in days for feature values that will be stored in online serving storage. The Feature Store online storage periodically removes obsolete feature values older than `online_storage_ttl_days` since the feature generation time. Note that `online_storage_ttl_days` should be less than or equal to `offline_storage_ttl_days` for each EntityType under a featurestore. If not set, default to 4000 days
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"state": "A String", # Output only. State of the featurestore.
"updateTime": "A String", # Output only. Timestamp when this Featurestore was last updated.
}
@@ -430,6 +434,8 @@ Method Details
},
},
"onlineStorageTtlDays": 42, # Optional. TTL in days for feature values that will be stored in online serving storage. The Feature Store online storage periodically removes obsolete feature values older than `online_storage_ttl_days` since the feature generation time. Note that `online_storage_ttl_days` should be less than or equal to `offline_storage_ttl_days` for each EntityType under a featurestore. If not set, default to 4000 days
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"state": "A String", # Output only. State of the featurestore.
"updateTime": "A String", # Output only. Timestamp when this Featurestore was last updated.
},
@@ -480,6 +486,8 @@ Method Details
},
},
"onlineStorageTtlDays": 42, # Optional. TTL in days for feature values that will be stored in online serving storage. The Feature Store online storage periodically removes obsolete feature values older than `online_storage_ttl_days` since the feature generation time. Note that `online_storage_ttl_days` should be less than or equal to `offline_storage_ttl_days` for each EntityType under a featurestore. If not set, default to 4000 days
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"state": "A String", # Output only. State of the featurestore.
"updateTime": "A String", # Output only. Timestamp when this Featurestore was last updated.
}
diff --git a/docs/dyn/aiplatform_v1.projects.locations.html b/docs/dyn/aiplatform_v1.projects.locations.html
index c417e44ef72..868316b9dcb 100644
--- a/docs/dyn/aiplatform_v1.projects.locations.html
+++ b/docs/dyn/aiplatform_v1.projects.locations.html
@@ -310,6 +310,14 @@ Method Details
"version": 42, # Optional. Which version to use for evaluation.
},
},
+ "pairwiseMetricInput": { # Input for pairwise metric. # Input for pairwise metric.
+ "instance": { # Pairwise metric instance. Usually one instance corresponds to one row in an evaluation dataset. # Required. Pairwise metric instance.
+ "jsonInstance": "A String", # Instance specified as a json string. String key-value pairs are expected in the json_instance to render PairwiseMetricSpec.instance_prompt_template.
+ },
+ "metricSpec": { # Spec for pairwise metric. # Required. Spec for pairwise metric.
+ "metricPromptTemplate": "A String", # Required. Metric prompt template for pairwise metric.
+ },
+ },
"pairwiseQuestionAnsweringQualityInput": { # Input for pairwise question answering quality metric. # Input for pairwise question answering quality metric.
"instance": { # Spec for pairwise question answering quality instance. # Required. Pairwise question answering quality instance.
"baselinePrediction": "A String", # Required. Output of the baseline model.
@@ -336,6 +344,14 @@ Method Details
"version": 42, # Optional. Which version to use for evaluation.
},
},
+ "pointwiseMetricInput": { # Input for pointwise metric. # Input for pointwise metric.
+ "instance": { # Pointwise metric instance. Usually one instance corresponds to one row in an evaluation dataset. # Required. Pointwise metric instance.
+ "jsonInstance": "A String", # Instance specified as a json string. String key-value pairs are expected in the json_instance to render PointwiseMetricSpec.instance_prompt_template.
+ },
+ "metricSpec": { # Spec for pointwise metric. # Required. Spec for pointwise metric.
+ "metricPromptTemplate": "A String", # Required. Metric prompt template for pointwise metric.
+ },
+ },
"questionAnsweringCorrectnessInput": { # Input for question answering correctness metric. # Input for question answering correctness metric.
"instance": { # Spec for question answering correctness instance. # Required. Question answering correctness instance.
"context": "A String", # Optional. Text provided as context to answer the question.
@@ -527,6 +543,10 @@ Method Details
"explanation": "A String", # Output only. Explanation for groundedness score.
"score": 3.14, # Output only. Groundedness score.
},
+ "pairwiseMetricResult": { # Spec for pairwise metric result. # Result for pairwise metric.
+ "explanation": "A String", # Output only. Explanation for pairwise metric score.
+ "pairwiseChoice": "A String", # Output only. Pairwise metric choice.
+ },
"pairwiseQuestionAnsweringQualityResult": { # Spec for pairwise question answering quality result. # Result for pairwise question answering quality metric.
"confidence": 3.14, # Output only. Confidence for question answering quality score.
"explanation": "A String", # Output only. Explanation for question answering quality score.
@@ -537,6 +557,10 @@ Method Details
"explanation": "A String", # Output only. Explanation for summarization quality score.
"pairwiseChoice": "A String", # Output only. Pairwise summarization prediction choice.
},
+ "pointwiseMetricResult": { # Spec for pointwise metric result. # Generic metrics. Result for pointwise metric.
+ "explanation": "A String", # Output only. Explanation for pointwise metric score.
+ "score": 3.14, # Output only. Pointwise metric score.
+ },
"questionAnsweringCorrectnessResult": { # Spec for question answering correctness result. # Result for question answering correctness metric.
"confidence": 3.14, # Output only. Confidence for question answering correctness score.
"explanation": "A String", # Output only. Explanation for question answering correctness score.
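Taken together, the new generic `pointwiseMetricInput`/`pairwiseMetricInput` request fields and their result counterparts pair up: a `jsonInstance` string renders the `metricPromptTemplate`, and the result carries an explanation plus either a numeric score (pointwise) or a pairwise choice. A minimal sketch of the request shapes — field names follow the diff above, while the templates and instance values are made-up placeholders, not a tested EvaluateInstances request:

```python
import json

# Illustrative fragments only: field names mirror the discovery diff above;
# all values are placeholders.
pointwise_metric_input = {
    "instance": {
        # String key-value pairs in json_instance render the prompt template.
        "jsonInstance": json.dumps({"prompt": "What is 2+2?", "response": "4"}),
    },
    "metricSpec": {
        "metricPromptTemplate": "Score the response to {prompt}: {response}",
    },
}

pairwise_metric_input = {
    "instance": {
        "jsonInstance": json.dumps({
            "prompt": "What is 2+2?",
            "baseline_response": "four",
            "candidate_response": "4",
        }),
    },
    "metricSpec": {
        "metricPromptTemplate": "Pick the better answer to {prompt}",
    },
}

print(sorted(pointwise_metric_input))
```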
diff --git a/docs/dyn/aiplatform_v1.projects.locations.indexEndpoints.html b/docs/dyn/aiplatform_v1.projects.locations.indexEndpoints.html
index f74b3d905c9..ede2538fcdc 100644
--- a/docs/dyn/aiplatform_v1.projects.locations.indexEndpoints.html
+++ b/docs/dyn/aiplatform_v1.projects.locations.indexEndpoints.html
@@ -216,6 +216,8 @@ Method Details
},
"publicEndpointDomainName": "A String", # Output only. If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
"publicEndpointEnabled": True or False, # Optional. If true, the deployed index will be accessible through public endpoint.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
}
@@ -607,6 +609,8 @@ Method Details
},
"publicEndpointDomainName": "A String", # Output only. If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
"publicEndpointEnabled": True or False, # Optional. If true, the deployed index will be accessible through public endpoint.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
}
@@ -717,6 +721,8 @@ Method Details
},
"publicEndpointDomainName": "A String", # Output only. If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
"publicEndpointEnabled": True or False, # Optional. If true, the deployed index will be accessible through public endpoint.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
},
],
@@ -933,6 +939,8 @@ Method Details
},
"publicEndpointDomainName": "A String", # Output only. If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
"publicEndpointEnabled": True or False, # Optional. If true, the deployed index will be accessible through public endpoint.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
}
@@ -1031,6 +1039,8 @@ Method Details
},
"publicEndpointDomainName": "A String", # Output only. If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
"publicEndpointEnabled": True or False, # Optional. If true, the deployed index will be accessible through public endpoint.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
}
diff --git a/docs/dyn/aiplatform_v1.projects.locations.indexes.html b/docs/dyn/aiplatform_v1.projects.locations.indexes.html
index 8d81a7674f2..9724dbfbecc 100644
--- a/docs/dyn/aiplatform_v1.projects.locations.indexes.html
+++ b/docs/dyn/aiplatform_v1.projects.locations.indexes.html
@@ -148,6 +148,8 @@ Method Details
"metadata": "", # An additional information about the Index; the schema of the metadata can be found in metadata_schema.
"metadataSchemaUri": "A String", # Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Index, that is specific to it. Unset if the Index does not have any additional information. The schema is defined as an OpenAPI 3.0.2 [Schema Object](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject). Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has a read access.
"name": "A String", # Output only. The resource name of the Index.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this Index was most recently updated. This also includes any update to the contents of the Index. Note that Operations working on this Index may have their Operations.metadata.generic_metadata.update_time a little after the value of this timestamp, yet that does not mean their results are not already reflected in the Index. Result of any successfully completed Operation on the Index is reflected in it.
}
@@ -256,6 +258,8 @@ Method Details
"metadata": "", # An additional information about the Index; the schema of the metadata can be found in metadata_schema.
"metadataSchemaUri": "A String", # Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Index, that is specific to it. Unset if the Index does not have any additional information. The schema is defined as an OpenAPI 3.0.2 [Schema Object](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject). Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has a read access.
"name": "A String", # Output only. The resource name of the Index.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this Index was most recently updated. This also includes any update to the contents of the Index. Note that Operations working on this Index may have their Operations.metadata.generic_metadata.update_time a little after the value of this timestamp, yet that does not mean their results are not already reflected in the Index. Result of any successfully completed Operation on the Index is reflected in it.
}
@@ -307,6 +311,8 @@ Method Details
"metadata": "", # An additional information about the Index; the schema of the metadata can be found in metadata_schema.
"metadataSchemaUri": "A String", # Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Index, that is specific to it. Unset if the Index does not have any additional information. The schema is defined as an OpenAPI 3.0.2 [Schema Object](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject). Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has a read access.
"name": "A String", # Output only. The resource name of the Index.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this Index was most recently updated. This also includes any update to the contents of the Index. Note that Operations working on this Index may have their Operations.metadata.generic_metadata.update_time a little after the value of this timestamp, yet that does not mean their results are not already reflected in the Index. Result of any successfully completed Operation on the Index is reflected in it.
},
],
@@ -364,6 +370,8 @@ Method Details
"metadata": "", # An additional information about the Index; the schema of the metadata can be found in metadata_schema.
"metadataSchemaUri": "A String", # Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Index, that is specific to it. Unset if the Index does not have any additional information. The schema is defined as an OpenAPI 3.0.2 [Schema Object](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject). Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has a read access.
"name": "A String", # Output only. The resource name of the Index.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this Index was most recently updated. This also includes any update to the contents of the Index. Note that Operations working on this Index may have their Operations.metadata.generic_metadata.update_time a little after the value of this timestamp, yet that does not mean their results are not already reflected in the Index. Result of any successfully completed Operation on the Index is reflected in it.
}
diff --git a/docs/dyn/aiplatform_v1.projects.locations.modelDeploymentMonitoringJobs.html b/docs/dyn/aiplatform_v1.projects.locations.modelDeploymentMonitoringJobs.html
index 6dd5a0b2c71..df7a01f0367 100644
--- a/docs/dyn/aiplatform_v1.projects.locations.modelDeploymentMonitoringJobs.html
+++ b/docs/dyn/aiplatform_v1.projects.locations.modelDeploymentMonitoringJobs.html
@@ -260,6 +260,8 @@ Method Details
"nextScheduleTime": "A String", # Output only. Timestamp when this monitoring pipeline will be scheduled to run for the next round.
"predictInstanceSchemaUri": "A String", # YAML schema file uri describing the format of a single instance, which are given to format this Endpoint's prediction (and explanation). If not set, we will generate predict schema from collected predict requests.
"samplePredictInstance": "", # Sample Predict instance, same format as PredictRequest.instances, this can be set as a replacement of ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, we will generate predict schema from collected predict requests.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"scheduleState": "A String", # Output only. Schedule state when the monitoring job is in Running state.
"state": "A String", # Output only. The detailed state of the monitoring job. When the job is still creating, the state will be 'PENDING'. Once the job is successfully created, the state will be 'RUNNING'. Pause the job, the state will be 'PAUSED'. Resume the job, the state will return to 'RUNNING'.
"statsAnomaliesBaseDirectory": { # The Google Cloud Storage location where the output is to be written to. # Stats anomalies base folder path.
@@ -409,6 +411,8 @@ Method Details
"nextScheduleTime": "A String", # Output only. Timestamp when this monitoring pipeline will be scheduled to run for the next round.
"predictInstanceSchemaUri": "A String", # YAML schema file uri describing the format of a single instance, which are given to format this Endpoint's prediction (and explanation). If not set, we will generate predict schema from collected predict requests.
"samplePredictInstance": "", # Sample Predict instance, same format as PredictRequest.instances, this can be set as a replacement of ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, we will generate predict schema from collected predict requests.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"scheduleState": "A String", # Output only. Schedule state when the monitoring job is in Running state.
"state": "A String", # Output only. The detailed state of the monitoring job. When the job is still creating, the state will be 'PENDING'. Once the job is successfully created, the state will be 'RUNNING'. Pause the job, the state will be 'PAUSED'. Resume the job, the state will return to 'RUNNING'.
"statsAnomaliesBaseDirectory": { # The Google Cloud Storage location where the output is to be written to. # Stats anomalies base folder path.
@@ -600,6 +604,8 @@ Method Details
"nextScheduleTime": "A String", # Output only. Timestamp when this monitoring pipeline will be scheduled to run for the next round.
"predictInstanceSchemaUri": "A String", # YAML schema file uri describing the format of a single instance, which are given to format this Endpoint's prediction (and explanation). If not set, we will generate predict schema from collected predict requests.
"samplePredictInstance": "", # Sample Predict instance, same format as PredictRequest.instances, this can be set as a replacement of ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, we will generate predict schema from collected predict requests.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"scheduleState": "A String", # Output only. Schedule state when the monitoring job is in Running state.
"state": "A String", # Output only. The detailed state of the monitoring job. When the job is still creating, the state will be 'PENDING'. Once the job is successfully created, the state will be 'RUNNING'. Pause the job, the state will be 'PAUSED'. Resume the job, the state will return to 'RUNNING'.
"statsAnomaliesBaseDirectory": { # The Google Cloud Storage location where the output is to be written to. # Stats anomalies base folder path.
@@ -762,6 +768,8 @@ Method Details
"nextScheduleTime": "A String", # Output only. Timestamp when this monitoring pipeline will be scheduled to run for the next round.
"predictInstanceSchemaUri": "A String", # YAML schema file uri describing the format of a single instance, which are given to format this Endpoint's prediction (and explanation). If not set, we will generate predict schema from collected predict requests.
"samplePredictInstance": "", # Sample Predict instance, same format as PredictRequest.instances, this can be set as a replacement of ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, we will generate predict schema from collected predict requests.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"scheduleState": "A String", # Output only. Schedule state when the monitoring job is in Running state.
"state": "A String", # Output only. The detailed state of the monitoring job. When the job is still creating, the state will be 'PENDING'. Once the job is successfully created, the state will be 'RUNNING'. Pause the job, the state will be 'PAUSED'. Resume the job, the state will return to 'RUNNING'.
"statsAnomaliesBaseDirectory": { # The Google Cloud Storage location where the output is to be written to. # Stats anomalies base folder path.
@@ -930,6 +938,8 @@ Method Details
"nextScheduleTime": "A String", # Output only. Timestamp when this monitoring pipeline will be scheduled to run for the next round.
"predictInstanceSchemaUri": "A String", # YAML schema file uri describing the format of a single instance, which are given to format this Endpoint's prediction (and explanation). If not set, we will generate predict schema from collected predict requests.
"samplePredictInstance": "", # Sample Predict instance, same format as PredictRequest.instances, this can be set as a replacement of ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, we will generate predict schema from collected predict requests.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"scheduleState": "A String", # Output only. Schedule state when the monitoring job is in Running state.
"state": "A String", # Output only. The detailed state of the monitoring job. When the job is still creating, the state will be 'PENDING'. Once the job is successfully created, the state will be 'RUNNING'. Pause the job, the state will be 'PAUSED'. Resume the job, the state will return to 'RUNNING'.
"statsAnomaliesBaseDirectory": { # The Google Cloud Storage location where the output is to be written to. # Stats anomalies base folder path.
diff --git a/docs/dyn/aiplatform_v1.projects.locations.publishers.models.html b/docs/dyn/aiplatform_v1.projects.locations.publishers.models.html
index c172926f355..25a391b19dc 100644
--- a/docs/dyn/aiplatform_v1.projects.locations.publishers.models.html
+++ b/docs/dyn/aiplatform_v1.projects.locations.publishers.models.html
@@ -408,6 +408,7 @@ Method Details
"modelName": "A String", # The model name to use. Only the public LLM models are accepted. e.g. gemini-1.5-pro-001.
},
},
+ "seed": 42, # Optional. Seed.
"stopSequences": [ # Optional. Stop sequences.
"A String",
],
@@ -521,6 +522,7 @@ Method Details
{ # Response message for [PredictionService.GenerateContent].
"candidates": [ # Output only. Generated candidates.
{ # A response candidate generated from the model.
+ "avgLogprobs": 3.14, # Output only. Average log probability score of the candidate.
"citationMetadata": { # A collection of source attributions for a piece of content. # Output only. Source attribution of the generated content.
"citations": [ # Output only. List of citations.
{ # Source attributions for content.
@@ -637,7 +639,7 @@ Method Details
},
"usageMetadata": { # Usage metadata about response(s). # Usage metadata about the response(s).
"candidatesTokenCount": 42, # Number of tokens in the response(s).
- "promptTokenCount": 42, # Number of tokens in the request.
+ "promptTokenCount": 42, # Number of tokens in the request. When `cached_content` is set, this is still the total effective prompt size meaning this includes the number of tokens in the cached content.
"totalTokenCount": 42,
},
}
@@ -993,6 +995,7 @@ Method Details
"modelName": "A String", # The model name to use. Only the public LLM models are accepted. e.g. gemini-1.5-pro-001.
},
},
+ "seed": 42, # Optional. Seed.
"stopSequences": [ # Optional. Stop sequences.
"A String",
],
@@ -1106,6 +1109,7 @@ Method Details
{ # Response message for [PredictionService.GenerateContent].
"candidates": [ # Output only. Generated candidates.
{ # A response candidate generated from the model.
+ "avgLogprobs": 3.14, # Output only. Average log probability score of the candidate.
"citationMetadata": { # A collection of source attributions for a piece of content. # Output only. Source attribution of the generated content.
"citations": [ # Output only. List of citations.
{ # Source attributions for content.
@@ -1222,7 +1226,7 @@ Method Details
},
"usageMetadata": { # Usage metadata about response(s). # Usage metadata about the response(s).
"candidatesTokenCount": 42, # Number of tokens in the response(s).
- "promptTokenCount": 42, # Number of tokens in the request.
+ "promptTokenCount": 42, # Number of tokens in the request. When `cached_content` is set, this is still the total effective prompt size meaning this includes the number of tokens in the cached content.
"totalTokenCount": 42,
},
}
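The additions in this file pair up across request and response: `seed` is a new optional field inside the request's `generationConfig`, while `avgLogprobs` comes back per candidate, and `promptTokenCount` now counts cached-content tokens too. A minimal sketch of the two shapes — field names come from the diff, all values are illustrative:

```python
# Request side: `seed` sits alongside the other generationConfig fields.
generate_content_request = {
    "contents": [{"role": "user", "parts": [{"text": "Say hello."}]}],
    "generationConfig": {
        "seed": 42,              # new optional field, per the diff above
        "stopSequences": ["\n\n"],
    },
}

# Response side: each candidate now carries avgLogprobs; promptTokenCount
# includes cached-content tokens when cached_content is set.
def summarize(response: dict) -> dict:
    cand = response["candidates"][0]
    usage = response["usageMetadata"]
    return {
        "avgLogprobs": cand.get("avgLogprobs"),
        "promptTokens": usage["promptTokenCount"],
        "totalTokens": usage["totalTokenCount"],
    }

sample_response = {
    "candidates": [{"avgLogprobs": -0.12, "content": {"parts": [{"text": "Hello!"}]}}],
    "usageMetadata": {"candidatesTokenCount": 3, "promptTokenCount": 5, "totalTokenCount": 8},
}
print(summarize(sample_response))
```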
diff --git a/docs/dyn/aiplatform_v1.publishers.models.html b/docs/dyn/aiplatform_v1.publishers.models.html
index 2f7d30d21d9..d2e8176854d 100644
--- a/docs/dyn/aiplatform_v1.publishers.models.html
+++ b/docs/dyn/aiplatform_v1.publishers.models.html
@@ -78,7 +78,7 @@ Instance Methods
close()
Close httplib2 connections.
- get(name, isHuggingFaceModel=None, languageCode=None, view=None, x__xgafv=None)
+ get(name, huggingFaceToken=None, isHuggingFaceModel=None, languageCode=None, view=None, x__xgafv=None)
Gets a Model Garden publisher model.
Method Details
@@ -87,11 +87,12 @@ Method Details
- get(name, isHuggingFaceModel=None, languageCode=None, view=None, x__xgafv=None)
+ get(name, huggingFaceToken=None, isHuggingFaceModel=None, languageCode=None, view=None, x__xgafv=None)
Gets a Model Garden publisher model.
Args:
name: string, Required. The name of the PublisherModel resource. Format: `publishers/{publisher}/models/{publisher_model}` (required)
+ huggingFaceToken: string, Optional. Token used to access Hugging Face gated models.
isHuggingFaceModel: boolean, Optional. Boolean indicates whether the requested model is a Hugging Face model.
languageCode: string, Optional. The IETF BCP-47 language code representing the language in which the publisher model's text information should be written in.
view: string, Optional. PublisherModel view specifying which fields to read.
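The new `huggingFaceToken` parameter above is a plain query parameter on `get`. A hedged sketch of assembling the call's parameters without building a client (the resource name and token value are placeholders, not real identifiers):

```python
def build_get_params(name, hugging_face_token=None, is_hugging_face_model=None,
                     language_code=None, view=None):
    """Assemble query parameters for publishers.models.get.

    Mirrors the updated signature; huggingFaceToken is the new optional
    parameter used to access Hugging Face gated models.
    """
    params = {"name": name}
    if hugging_face_token is not None:
        params["huggingFaceToken"] = hugging_face_token
    if is_hugging_face_model is not None:
        params["isHuggingFaceModel"] = is_hugging_face_model
    if language_code is not None:
        params["languageCode"] = language_code
    if view is not None:
        params["view"] = view
    return params

params = build_get_params(
    "publishers/hf-google/models/gemma-2b",  # hypothetical resource name
    hugging_face_token="hf_xxx",             # placeholder token
    is_hugging_face_model=True,
)
```

With a built discovery client, the same keyword arguments would be passed to `client.publishers().models().get(**params).execute()`.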
diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.endpoints.html b/docs/dyn/aiplatform_v1beta1.projects.locations.endpoints.html
index 7eb2eadc11a..6df1bdd5508 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.endpoints.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.endpoints.html
@@ -1472,6 +1472,7 @@ Method Details
"modelName": "A String", # The model name to use. Only the public LLM models are accepted. e.g. gemini-1.5-pro-001.
},
},
+ "seed": 42, # Optional. Seed.
"stopSequences": [ # Optional. Stop sequences.
"A String",
],
@@ -1628,6 +1629,7 @@ Method Details
{ # Response message for [PredictionService.GenerateContent].
"candidates": [ # Output only. Generated candidates.
{ # A response candidate generated from the model.
+ "avgLogprobs": 3.14, # Output only. Average log probability score of the candidate.
"citationMetadata": { # A collection of source attributions for a piece of content. # Output only. Source attribution of the generated content.
"citations": [ # Output only. List of citations.
{ # Source attributions for content.
@@ -1747,7 +1749,7 @@ Method Details
},
"usageMetadata": { # Usage metadata about response(s). # Usage metadata about the response(s).
"candidatesTokenCount": 42, # Number of tokens in the response(s).
- "promptTokenCount": 42, # Number of tokens in the request.
+ "promptTokenCount": 42, # Number of tokens in the request. When `cached_content` is set, this is still the total effective prompt size meaning this includes the number of tokens in the cached content.
"totalTokenCount": 42,
},
}
@@ -3270,6 +3272,7 @@ Method Details
"modelName": "A String", # The model name to use. Only the public LLM models are accepted. e.g. gemini-1.5-pro-001.
},
},
+ "seed": 42, # Optional. Seed.
"stopSequences": [ # Optional. Stop sequences.
"A String",
],
@@ -3426,6 +3429,7 @@ Method Details
{ # Response message for [PredictionService.GenerateContent].
"candidates": [ # Output only. Generated candidates.
{ # A response candidate generated from the model.
+ "avgLogprobs": 3.14, # Output only. Average log probability score of the candidate.
"citationMetadata": { # A collection of source attributions for a piece of content. # Output only. Source attribution of the generated content.
"citations": [ # Output only. List of citations.
{ # Source attributions for content.
@@ -3545,7 +3549,7 @@ Method Details
},
"usageMetadata": { # Usage metadata about response(s). # Usage metadata about the response(s).
"candidatesTokenCount": 42, # Number of tokens in the response(s).
- "promptTokenCount": 42, # Number of tokens in the request.
+ "promptTokenCount": 42, # Number of tokens in the request. When `cached_content` is set, this is still the total effective prompt size meaning this includes the number of tokens in the cached content.
"totalTokenCount": 42,
},
}
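The endpoint hunks above add an optional `seed` to `generationConfig` and a per-candidate `avgLogprobs`. A sketch of both, assuming only the field names from the doc (the prompt, scores, and helper are illustrative):

```python
# Request body with the new optional seed for reproducible sampling.
request_body = {
    "contents": [{"role": "user", "parts": [{"text": "Say hello."}]}],
    "generationConfig": {
        "seed": 42,                 # new optional field
        "stopSequences": ["\n\n"],
    },
}

def best_candidate(candidates):
    """Pick the candidate with the highest average log probability.

    avgLogprobs averages per-token log probabilities, so values are <= 0
    and a less negative value indicates higher model confidence.
    """
    return max(candidates, key=lambda c: c.get("avgLogprobs", float("-inf")))

candidates = [
    {"avgLogprobs": -0.82, "index": 0},
    {"avgLogprobs": -0.35, "index": 1},
]
print(best_candidate(candidates)["index"])  # -> 1
```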
diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.featureGroups.html b/docs/dyn/aiplatform_v1beta1.projects.locations.featureGroups.html
index 91e140b6b84..f6be1d099ab 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.featureGroups.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.featureGroups.html
@@ -128,6 +128,9 @@ Method Details
"entityIdColumns": [ # Optional. Columns to construct entity_id / row keys. If not provided defaults to `entity_id`.
"A String",
],
+ "timeSeries": { # Optional. If the source is a time-series source, this can be set to control how downstream sources (ex: FeatureOnlineStore.FeatureView) will treat time series sources. If not set, will treat the source as a time-series source with feature_timestamp as timestamp column and no scan boundary.
+ "timestampColumn": "A String", # Optional. Column hosting timestamp values for a time-series source. Will be used to determine the latest featureValues for each entity. Optional. If not provided, a feature_timestamp column of type TIMESTAMP will be used.
+ },
},
"createTime": "A String", # Output only. Timestamp when this FeatureGroup was created.
"description": "A String", # Optional. Description of the FeatureGroup.
@@ -227,6 +230,9 @@ Method Details
"entityIdColumns": [ # Optional. Columns to construct entity_id / row keys. If not provided defaults to `entity_id`.
"A String",
],
+ "timeSeries": { # Optional. If the source is a time-series source, this can be set to control how downstream sources (ex: FeatureOnlineStore.FeatureView) will treat time series sources. If not set, will treat the source as a time-series source with feature_timestamp as timestamp column and no scan boundary.
+ "timestampColumn": "A String", # Optional. Column hosting timestamp values for a time-series source. Will be used to determine the latest featureValues for each entity. Optional. If not provided, a feature_timestamp column of type TIMESTAMP will be used.
+ },
},
"createTime": "A String", # Output only. Timestamp when this FeatureGroup was created.
"description": "A String", # Optional. Description of the FeatureGroup.
@@ -267,6 +273,9 @@ Method Details
"entityIdColumns": [ # Optional. Columns to construct entity_id / row keys. If not provided defaults to `entity_id`.
"A String",
],
+ "timeSeries": { # Optional. If the source is a time-series source, this can be set to control how downstream sources (ex: FeatureOnlineStore.FeatureView) will treat time series sources. If not set, will treat the source as a time-series source with feature_timestamp as timestamp column and no scan boundary.
+ "timestampColumn": "A String", # Optional. Column hosting timestamp values for a time-series source. Will be used to determine the latest featureValues for each entity. Optional. If not provided, a feature_timestamp column of type TIMESTAMP will be used.
+ },
},
"createTime": "A String", # Output only. Timestamp when this FeatureGroup was created.
"description": "A String", # Optional. Description of the FeatureGroup.
@@ -313,6 +322,9 @@ Method Details
"entityIdColumns": [ # Optional. Columns to construct entity_id / row keys. If not provided defaults to `entity_id`.
"A String",
],
+ "timeSeries": { # Optional. If the source is a time-series source, this can be set to control how downstream sources (ex: FeatureOnlineStore.FeatureView) will treat time series sources. If not set, will treat the source as a time-series source with feature_timestamp as timestamp column and no scan boundary.
+ "timestampColumn": "A String", # Optional. Column hosting timestamp values for a time-series source. Will be used to determine the latest featureValues for each entity. Optional. If not provided, a feature_timestamp column of type TIMESTAMP will be used.
+ },
},
"createTime": "A String", # Output only. Timestamp when this FeatureGroup was created.
"description": "A String", # Optional. Description of the FeatureGroup.
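The repeated hunks above add an optional `timeSeries` block to a FeatureGroup's BigQuery source. A minimal sketch of the body shape, assuming only the fields in the doc (the project, table, and column names are hypothetical); if `timestampColumn` is omitted, the service falls back to a `feature_timestamp` column of type TIMESTAMP:

```python
feature_group = {
    "bigQuery": {
        "bigQuerySource": {
            "inputUri": "bq://my-project.my_dataset.users",  # hypothetical table
        },
        "entityIdColumns": ["user_id"],
        "timeSeries": {
            # Column used to pick the latest feature values for each entity.
            "timestampColumn": "event_time",
        },
    },
}
```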
diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.featureOnlineStores.featureViews.featureViewSyncs.html b/docs/dyn/aiplatform_v1beta1.projects.locations.featureOnlineStores.featureViews.featureViewSyncs.html
index 9074155d405..188fc2674e3 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.featureOnlineStores.featureViews.featureViewSyncs.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.featureOnlineStores.featureViews.featureViewSyncs.html
@@ -122,6 +122,8 @@ Method Details
"endTime": "A String", # Optional. Exclusive end of the interval. If specified, a Timestamp matching this interval will have to be before the end.
"startTime": "A String", # Optional. Inclusive start of the interval. If specified, a Timestamp matching this interval will have to be the same or after the start.
},
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"syncSummary": { # Summary from the Sync job. For continuous syncs, the summary is updated periodically. For batch syncs, it gets updated on completion of the sync. # Output only. Summary of the sync job.
"rowSynced": "A String", # Output only. Total number of rows synced.
"totalSlot": "A String", # Output only. BigQuery slot milliseconds consumed for the sync job.
@@ -165,6 +167,8 @@ Method Details
"endTime": "A String", # Optional. Exclusive end of the interval. If specified, a Timestamp matching this interval will have to be before the end.
"startTime": "A String", # Optional. Inclusive start of the interval. If specified, a Timestamp matching this interval will have to be the same or after the start.
},
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"syncSummary": { # Summary from the Sync job. For continuous syncs, the summary is updated periodically. For batch syncs, it gets updated on completion of the sync. # Output only. Summary of the sync job.
"rowSynced": "A String", # Output only. Total number of rows synced.
"totalSlot": "A String", # Output only. BigQuery slot milliseconds consumed for the sync job.
diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.featureOnlineStores.featureViews.html b/docs/dyn/aiplatform_v1beta1.projects.locations.featureOnlineStores.featureViews.html
index f65bfb6047b..7c0bcfcad67 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.featureOnlineStores.featureViews.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.featureOnlineStores.featureViews.html
@@ -179,6 +179,8 @@ Method Details
"a_key": "A String",
},
"name": "A String", # Identifier. Name of the FeatureView. Format: `projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}`
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"serviceAccountEmail": "A String", # Output only. A Service Account unique to this FeatureView. The role bigquery.dataViewer should be granted to this service account to allow Vertex AI Feature Store to sync data to the online store.
"serviceAgentType": "A String", # Optional. Service agent type used during data sync. By default, the Vertex AI Service Agent is used. When using an IAM Policy to isolate this FeatureView within a project, a separate service account should be provisioned by setting this field to `SERVICE_AGENT_TYPE_FEATURE_VIEW`. This will generate a separate service account to access the BigQuery source table.
"syncConfig": { # Configuration for Sync. Only one option is set. # Configures when data is to be synced/updated for this FeatureView. At the end of the sync the latest featureValues for each entityId of this FeatureView are made ready for online serving.
@@ -410,6 +412,8 @@ Method Details
"a_key": "A String",
},
"name": "A String", # Identifier. Name of the FeatureView. Format: `projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}`
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"serviceAccountEmail": "A String", # Output only. A Service Account unique to this FeatureView. The role bigquery.dataViewer should be granted to this service account to allow Vertex AI Feature Store to sync data to the online store.
"serviceAgentType": "A String", # Optional. Service agent type used during data sync. By default, the Vertex AI Service Agent is used. When using an IAM Policy to isolate this FeatureView within a project, a separate service account should be provisioned by setting this field to `SERVICE_AGENT_TYPE_FEATURE_VIEW`. This will generate a separate service account to access the BigQuery source table.
"syncConfig": { # Configuration for Sync. Only one option is set. # Configures when data is to be synced/updated for this FeatureView. At the end of the sync the latest featureValues for each entityId of this FeatureView are made ready for online serving.
@@ -526,6 +530,8 @@ Method Details
"a_key": "A String",
},
"name": "A String", # Identifier. Name of the FeatureView. Format: `projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}`
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"serviceAccountEmail": "A String", # Output only. A Service Account unique to this FeatureView. The role bigquery.dataViewer should be granted to this service account to allow Vertex AI Feature Store to sync data to the online store.
"serviceAgentType": "A String", # Optional. Service agent type used during data sync. By default, the Vertex AI Service Agent is used. When using an IAM Policy to isolate this FeatureView within a project, a separate service account should be provisioned by setting this field to `SERVICE_AGENT_TYPE_FEATURE_VIEW`. This will generate a separate service account to access the BigQuery source table.
"syncConfig": { # Configuration for Sync. Only one option is set. # Configures when data is to be synced/updated for this FeatureView. At the end of the sync the latest featureValues for each entityId of this FeatureView are made ready for online serving.
@@ -613,6 +619,8 @@ Method Details
"a_key": "A String",
},
"name": "A String", # Identifier. Name of the FeatureView. Format: `projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}`
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"serviceAccountEmail": "A String", # Output only. A Service Account unique to this FeatureView. The role bigquery.dataViewer should be granted to this service account to allow Vertex AI Feature Store to sync data to the online store.
"serviceAgentType": "A String", # Optional. Service agent type used during data sync. By default, the Vertex AI Service Agent is used. When using an IAM Policy to isolate this FeatureView within a project, a separate service account should be provisioned by setting this field to `SERVICE_AGENT_TYPE_FEATURE_VIEW`. This will generate a separate service account to access the BigQuery source table.
"syncConfig": { # Configuration for Sync. Only one option is set. # Configures when data is to be synced/updated for this FeatureView. At the end of the sync the latest featureValues for each entityId of this FeatureView are made ready for online serving.
diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.featureOnlineStores.html b/docs/dyn/aiplatform_v1beta1.projects.locations.featureOnlineStores.html
index 9a5f1df9e72..79ad2029bc2 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.featureOnlineStores.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.featureOnlineStores.html
@@ -162,6 +162,8 @@ Method Details
"name": "A String", # Identifier. Name of the FeatureOnlineStore. Format: `projects/{project}/locations/{location}/featureOnlineStores/{featureOnlineStore}`
"optimized": { # Optimized storage type # Contains settings for the Optimized store that will be created to serve featureValues for all FeatureViews under this FeatureOnlineStore. When choose Optimized storage type, need to set PrivateServiceConnectConfig.enable_private_service_connect to use private endpoint. Otherwise will use public endpoint by default.
},
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"state": "A String", # Output only. State of the featureOnlineStore.
"updateTime": "A String", # Output only. Timestamp when this FeatureOnlineStore was last updated.
}
@@ -279,6 +281,8 @@ Method Details
"name": "A String", # Identifier. Name of the FeatureOnlineStore. Format: `projects/{project}/locations/{location}/featureOnlineStores/{featureOnlineStore}`
"optimized": { # Optimized storage type # Contains settings for the Optimized store that will be created to serve featureValues for all FeatureViews under this FeatureOnlineStore. When choose Optimized storage type, need to set PrivateServiceConnectConfig.enable_private_service_connect to use private endpoint. Otherwise will use public endpoint by default.
},
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"state": "A String", # Output only. State of the featureOnlineStore.
"updateTime": "A String", # Output only. Timestamp when this FeatureOnlineStore was last updated.
}
@@ -372,6 +376,8 @@ Method Details
"name": "A String", # Identifier. Name of the FeatureOnlineStore. Format: `projects/{project}/locations/{location}/featureOnlineStores/{featureOnlineStore}`
"optimized": { # Optimized storage type # Contains settings for the Optimized store that will be created to serve featureValues for all FeatureViews under this FeatureOnlineStore. When choose Optimized storage type, need to set PrivateServiceConnectConfig.enable_private_service_connect to use private endpoint. Otherwise will use public endpoint by default.
},
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"state": "A String", # Output only. State of the featureOnlineStore.
"updateTime": "A String", # Output only. Timestamp when this FeatureOnlineStore was last updated.
},
@@ -436,6 +442,8 @@ Method Details
"name": "A String", # Identifier. Name of the FeatureOnlineStore. Format: `projects/{project}/locations/{location}/featureOnlineStores/{featureOnlineStore}`
"optimized": { # Optimized storage type # Contains settings for the Optimized store that will be created to serve featureValues for all FeatureViews under this FeatureOnlineStore. When choose Optimized storage type, need to set PrivateServiceConnectConfig.enable_private_service_connect to use private endpoint. Otherwise will use public endpoint by default.
},
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"state": "A String", # Output only. State of the featureOnlineStore.
"updateTime": "A String", # Output only. Timestamp when this FeatureOnlineStore was last updated.
}
diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.featurestores.entityTypes.html b/docs/dyn/aiplatform_v1beta1.projects.locations.featurestores.entityTypes.html
index 15d3c5dfd39..28ac7b81591 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.featurestores.entityTypes.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.featurestores.entityTypes.html
@@ -174,6 +174,8 @@ Method Details
},
"name": "A String", # Immutable. Name of the EntityType. Format: `projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}` The last part entity_type is assigned by the client. The entity_type can be up to 64 characters long and can consist only of ASCII Latin letters A-Z and a-z and underscore(_), and ASCII digits 0-9 starting with a letter. The value will be unique given a featurestore.
"offlineStorageTtlDays": 42, # Optional. Config for data retention policy in offline storage. TTL in days for feature values that will be stored in offline storage. The Feature Store offline storage periodically removes obsolete feature values older than `offline_storage_ttl_days` since the feature generation time. If unset (or explicitly set to 0), default to 4000 days TTL.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this EntityType was most recently updated.
}
@@ -428,6 +430,8 @@ Method Details
},
"name": "A String", # Immutable. Name of the EntityType. Format: `projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}` The last part entity_type is assigned by the client. The entity_type can be up to 64 characters long and can consist only of ASCII Latin letters A-Z and a-z and underscore(_), and ASCII digits 0-9 starting with a letter. The value will be unique given a featurestore.
"offlineStorageTtlDays": 42, # Optional. Config for data retention policy in offline storage. TTL in days for feature values that will be stored in offline storage. The Feature Store offline storage periodically removes obsolete feature values older than `offline_storage_ttl_days` since the feature generation time. If unset (or explicitly set to 0), default to 4000 days TTL.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this EntityType was most recently updated.
}
@@ -585,6 +589,8 @@ Method Details
},
"name": "A String", # Immutable. Name of the EntityType. Format: `projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}` The last part entity_type is assigned by the client. The entity_type can be up to 64 characters long and can consist only of ASCII Latin letters A-Z and a-z and underscore(_), and ASCII digits 0-9 starting with a letter. The value will be unique given a featurestore.
"offlineStorageTtlDays": 42, # Optional. Config for data retention policy in offline storage. TTL in days for feature values that will be stored in offline storage. The Feature Store offline storage periodically removes obsolete feature values older than `offline_storage_ttl_days` since the feature generation time. If unset (or explicitly set to 0), default to 4000 days TTL.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this EntityType was most recently updated.
},
],
@@ -642,6 +648,8 @@ Method Details
},
"name": "A String", # Immutable. Name of the EntityType. Format: `projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}` The last part entity_type is assigned by the client. The entity_type can be up to 64 characters long and can consist only of ASCII Latin letters A-Z and a-z and underscore(_), and ASCII digits 0-9 starting with a letter. The value will be unique given a featurestore.
"offlineStorageTtlDays": 42, # Optional. Config for data retention policy in offline storage. TTL in days for feature values that will be stored in offline storage. The Feature Store offline storage periodically removes obsolete feature values older than `offline_storage_ttl_days` since the feature generation time. If unset (or explicitly set to 0), default to 4000 days TTL.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this EntityType was most recently updated.
}
@@ -681,6 +689,8 @@ Method Details
},
"name": "A String", # Immutable. Name of the EntityType. Format: `projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}` The last part entity_type is assigned by the client. The entity_type can be up to 64 characters long and can consist only of ASCII Latin letters A-Z and a-z and underscore(_), and ASCII digits 0-9 starting with a letter. The value will be unique given a featurestore.
"offlineStorageTtlDays": 42, # Optional. Config for data retention policy in offline storage. TTL in days for feature values that will be stored in offline storage. The Feature Store offline storage periodically removes obsolete feature values older than `offline_storage_ttl_days` since the feature generation time. If unset (or explicitly set to 0), default to 4000 days TTL.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this EntityType was most recently updated.
}
diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.featurestores.html b/docs/dyn/aiplatform_v1beta1.projects.locations.featurestores.html
index d36054ae884..20eb9c5b437 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.featurestores.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.featurestores.html
@@ -247,6 +247,8 @@ Method Details
},
},
"onlineStorageTtlDays": 42, # Optional. TTL in days for feature values that will be stored in online serving storage. The Feature Store online storage periodically removes obsolete feature values older than `online_storage_ttl_days` since the feature generation time. Note that `online_storage_ttl_days` should be less than or equal to `offline_storage_ttl_days` for each EntityType under a featurestore. If not set, default to 4000 days
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"state": "A String", # Output only. State of the featurestore.
"updateTime": "A String", # Output only. Timestamp when this Featurestore was last updated.
}
@@ -350,6 +352,8 @@ Method Details
},
},
"onlineStorageTtlDays": 42, # Optional. TTL in days for feature values that will be stored in online serving storage. The Feature Store online storage periodically removes obsolete feature values older than `online_storage_ttl_days` since the feature generation time. Note that `online_storage_ttl_days` should be less than or equal to `offline_storage_ttl_days` for each EntityType under a featurestore. If not set, default to 4000 days
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"state": "A String", # Output only. State of the featurestore.
"updateTime": "A String", # Output only. Timestamp when this Featurestore was last updated.
}
@@ -438,6 +442,8 @@ Method Details
},
},
"onlineStorageTtlDays": 42, # Optional. TTL in days for feature values that will be stored in online serving storage. The Feature Store online storage periodically removes obsolete feature values older than `online_storage_ttl_days` since the feature generation time. Note that `online_storage_ttl_days` should be less than or equal to `offline_storage_ttl_days` for each EntityType under a featurestore. If not set, default to 4000 days
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"state": "A String", # Output only. State of the featurestore.
"updateTime": "A String", # Output only. Timestamp when this Featurestore was last updated.
},
@@ -488,6 +494,8 @@ Method Details
},
},
"onlineStorageTtlDays": 42, # Optional. TTL in days for feature values that will be stored in online serving storage. The Feature Store online storage periodically removes obsolete feature values older than `online_storage_ttl_days` since the feature generation time. Note that `online_storage_ttl_days` should be less than or equal to `offline_storage_ttl_days` for each EntityType under a featurestore. If not set, default to 4000 days
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"state": "A String", # Output only. State of the featurestore.
"updateTime": "A String", # Output only. Timestamp when this Featurestore was last updated.
}
diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.html b/docs/dyn/aiplatform_v1beta1.projects.locations.html
index 7c8dcdad67b..ba128fdc106 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.html
@@ -373,6 +373,14 @@ Method Details
"version": 42, # Optional. Which version to use for evaluation.
},
},
+ "pairwiseMetricInput": { # Input for pairwise metric. # Input for pairwise metric.
+ "instance": { # Pairwise metric instance. Usually one instance corresponds to one row in an evaluation dataset. # Required. Pairwise metric instance.
+ "jsonInstance": "A String", # Instance specified as a json string. String key-value pairs are expected in the json_instance to render PairwiseMetricSpec.instance_prompt_template.
+ },
+ "metricSpec": { # Spec for pairwise metric. # Required. Spec for pairwise metric.
+ "metricPromptTemplate": "A String", # Required. Metric prompt template for pairwise metric.
+ },
+ },
"pairwiseQuestionAnsweringQualityInput": { # Input for pairwise question answering quality metric. # Input for pairwise question answering quality metric.
"instance": { # Spec for pairwise question answering quality instance. # Required. Pairwise question answering quality instance.
"baselinePrediction": "A String", # Required. Output of the baseline model.
@@ -399,6 +407,14 @@ Method Details
"version": 42, # Optional. Which version to use for evaluation.
},
},
+ "pointwiseMetricInput": { # Input for pointwise metric. # Input for pointwise metric.
+ "instance": { # Pointwise metric instance. Usually one instance corresponds to one row in an evaluation dataset. # Required. Pointwise metric instance.
+ "jsonInstance": "A String", # Instance specified as a json string. String key-value pairs are expected in the json_instance to render PointwiseMetricSpec.instance_prompt_template.
+ },
+ "metricSpec": { # Spec for pointwise metric. # Required. Spec for pointwise metric.
+ "metricPromptTemplate": "A String", # Required. Metric prompt template for pointwise metric.
+ },
+ },
"questionAnsweringCorrectnessInput": { # Input for question answering correctness metric. # Input for question answering correctness metric.
"instance": { # Spec for question answering correctness instance. # Required. Question answering correctness instance.
"context": "A String", # Optional. Text provided as context to answer the question.
@@ -590,6 +606,10 @@ Method Details
"explanation": "A String", # Output only. Explanation for groundedness score.
"score": 3.14, # Output only. Groundedness score.
},
+ "pairwiseMetricResult": { # Spec for pairwise metric result. # Result for pairwise metric.
+ "explanation": "A String", # Output only. Explanation for pairwise metric score.
+ "pairwiseChoice": "A String", # Output only. Pairwise metric choice.
+ },
"pairwiseQuestionAnsweringQualityResult": { # Spec for pairwise question answering quality result. # Result for pairwise question answering quality metric.
"confidence": 3.14, # Output only. Confidence for question answering quality score.
"explanation": "A String", # Output only. Explanation for question answering quality score.
@@ -600,6 +620,10 @@ Method Details
"explanation": "A String", # Output only. Explanation for summarization quality score.
"pairwiseChoice": "A String", # Output only. Pairwise summarization prediction choice.
},
+ "pointwiseMetricResult": { # Spec for pointwise metric result. # Generic metrics. Result for pointwise metric.
+ "explanation": "A String", # Output only. Explanation for pointwise metric score.
+ "score": 3.14, # Output only. Pointwise metric score.
+ },
"questionAnsweringCorrectnessResult": { # Spec for question answering correctness result. # Result for question answering correctness metric.
"confidence": 3.14, # Output only. Confidence for question answering correctness score.
"explanation": "A String", # Output only. Explanation for question answering correctness score.
diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.indexEndpoints.html b/docs/dyn/aiplatform_v1beta1.projects.locations.indexEndpoints.html
index 311e893086e..93a0fe8c27a 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.indexEndpoints.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.indexEndpoints.html
@@ -216,6 +216,8 @@ Method Details
},
"publicEndpointDomainName": "A String", # Output only. If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
"publicEndpointEnabled": True or False, # Optional. If true, the deployed index will be accessible through public endpoint.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
}
@@ -607,6 +609,8 @@ Method Details
},
"publicEndpointDomainName": "A String", # Output only. If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
"publicEndpointEnabled": True or False, # Optional. If true, the deployed index will be accessible through public endpoint.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
}
@@ -717,6 +721,8 @@ Method Details
},
"publicEndpointDomainName": "A String", # Output only. If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
"publicEndpointEnabled": True or False, # Optional. If true, the deployed index will be accessible through public endpoint.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
},
],
@@ -933,6 +939,8 @@ Method Details
},
"publicEndpointDomainName": "A String", # Output only. If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
"publicEndpointEnabled": True or False, # Optional. If true, the deployed index will be accessible through public endpoint.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
}
@@ -1031,6 +1039,8 @@ Method Details
},
"publicEndpointDomainName": "A String", # Output only. If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
"publicEndpointEnabled": True or False, # Optional. If true, the deployed index will be accessible through public endpoint.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
}
diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.indexes.html b/docs/dyn/aiplatform_v1beta1.projects.locations.indexes.html
index b0a32c11f81..0159c441533 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.indexes.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.indexes.html
@@ -148,6 +148,8 @@ Method Details
"metadata": "", # An additional information about the Index; the schema of the metadata can be found in metadata_schema.
"metadataSchemaUri": "A String", # Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Index, that is specific to it. Unset if the Index does not have any additional information. The schema is defined as an OpenAPI 3.0.2 [Schema Object](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject). Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has a read access.
"name": "A String", # Output only. The resource name of the Index.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this Index was most recently updated. This also includes any update to the contents of the Index. Note that Operations working on this Index may have their Operations.metadata.generic_metadata.update_time a little after the value of this timestamp, yet that does not mean their results are not already reflected in the Index. Result of any successfully completed Operation on the Index is reflected in it.
}
@@ -256,6 +258,8 @@ Method Details
"metadata": "", # An additional information about the Index; the schema of the metadata can be found in metadata_schema.
"metadataSchemaUri": "A String", # Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Index, that is specific to it. Unset if the Index does not have any additional information. The schema is defined as an OpenAPI 3.0.2 [Schema Object](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject). Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has a read access.
"name": "A String", # Output only. The resource name of the Index.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this Index was most recently updated. This also includes any update to the contents of the Index. Note that Operations working on this Index may have their Operations.metadata.generic_metadata.update_time a little after the value of this timestamp, yet that does not mean their results are not already reflected in the Index. Result of any successfully completed Operation on the Index is reflected in it.
}
@@ -307,6 +311,8 @@ Method Details
"metadata": "", # An additional information about the Index; the schema of the metadata can be found in metadata_schema.
"metadataSchemaUri": "A String", # Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Index, that is specific to it. Unset if the Index does not have any additional information. The schema is defined as an OpenAPI 3.0.2 [Schema Object](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject). Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has a read access.
"name": "A String", # Output only. The resource name of the Index.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this Index was most recently updated. This also includes any update to the contents of the Index. Note that Operations working on this Index may have their Operations.metadata.generic_metadata.update_time a little after the value of this timestamp, yet that does not mean their results are not already reflected in the Index. Result of any successfully completed Operation on the Index is reflected in it.
},
],
@@ -364,6 +370,8 @@ Method Details
"metadata": "", # An additional information about the Index; the schema of the metadata can be found in metadata_schema.
"metadataSchemaUri": "A String", # Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Index, that is specific to it. Unset if the Index does not have any additional information. The schema is defined as an OpenAPI 3.0.2 [Schema Object](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject). Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has a read access.
"name": "A String", # Output only. The resource name of the Index.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"updateTime": "A String", # Output only. Timestamp when this Index was most recently updated. This also includes any update to the contents of the Index. Note that Operations working on this Index may have their Operations.metadata.generic_metadata.update_time a little after the value of this timestamp, yet that does not mean their results are not already reflected in the Index. Result of any successfully completed Operation on the Index is reflected in it.
}
diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.modelDeploymentMonitoringJobs.html b/docs/dyn/aiplatform_v1beta1.projects.locations.modelDeploymentMonitoringJobs.html
index dc660282d84..eb3fcd8a3e2 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.modelDeploymentMonitoringJobs.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.modelDeploymentMonitoringJobs.html
@@ -260,6 +260,8 @@ Method Details
"nextScheduleTime": "A String", # Output only. Timestamp when this monitoring pipeline will be scheduled to run for the next round.
"predictInstanceSchemaUri": "A String", # YAML schema file uri describing the format of a single instance, which are given to format this Endpoint's prediction (and explanation). If not set, we will generate predict schema from collected predict requests.
"samplePredictInstance": "", # Sample Predict instance, same format as PredictRequest.instances, this can be set as a replacement of ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, we will generate predict schema from collected predict requests.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"scheduleState": "A String", # Output only. Schedule state when the monitoring job is in Running state.
"state": "A String", # Output only. The detailed state of the monitoring job. When the job is still creating, the state will be 'PENDING'. Once the job is successfully created, the state will be 'RUNNING'. Pause the job, the state will be 'PAUSED'. Resume the job, the state will return to 'RUNNING'.
"statsAnomaliesBaseDirectory": { # The Google Cloud Storage location where the output is to be written to. # Stats anomalies base folder path.
@@ -409,6 +411,8 @@ Method Details
"nextScheduleTime": "A String", # Output only. Timestamp when this monitoring pipeline will be scheduled to run for the next round.
"predictInstanceSchemaUri": "A String", # YAML schema file uri describing the format of a single instance, which are given to format this Endpoint's prediction (and explanation). If not set, we will generate predict schema from collected predict requests.
"samplePredictInstance": "", # Sample Predict instance, same format as PredictRequest.instances, this can be set as a replacement of ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, we will generate predict schema from collected predict requests.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"scheduleState": "A String", # Output only. Schedule state when the monitoring job is in Running state.
"state": "A String", # Output only. The detailed state of the monitoring job. When the job is still creating, the state will be 'PENDING'. Once the job is successfully created, the state will be 'RUNNING'. Pause the job, the state will be 'PAUSED'. Resume the job, the state will return to 'RUNNING'.
"statsAnomaliesBaseDirectory": { # The Google Cloud Storage location where the output is to be written to. # Stats anomalies base folder path.
@@ -600,6 +604,8 @@ Method Details
"nextScheduleTime": "A String", # Output only. Timestamp when this monitoring pipeline will be scheduled to run for the next round.
"predictInstanceSchemaUri": "A String", # YAML schema file uri describing the format of a single instance, which are given to format this Endpoint's prediction (and explanation). If not set, we will generate predict schema from collected predict requests.
"samplePredictInstance": "", # Sample Predict instance, same format as PredictRequest.instances, this can be set as a replacement of ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, we will generate predict schema from collected predict requests.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"scheduleState": "A String", # Output only. Schedule state when the monitoring job is in Running state.
"state": "A String", # Output only. The detailed state of the monitoring job. When the job is still creating, the state will be 'PENDING'. Once the job is successfully created, the state will be 'RUNNING'. Pause the job, the state will be 'PAUSED'. Resume the job, the state will return to 'RUNNING'.
"statsAnomaliesBaseDirectory": { # The Google Cloud Storage location where the output is to be written to. # Stats anomalies base folder path.
@@ -762,6 +768,8 @@ Method Details
"nextScheduleTime": "A String", # Output only. Timestamp when this monitoring pipeline will be scheduled to run for the next round.
"predictInstanceSchemaUri": "A String", # YAML schema file uri describing the format of a single instance, which are given to format this Endpoint's prediction (and explanation). If not set, we will generate predict schema from collected predict requests.
"samplePredictInstance": "", # Sample Predict instance, same format as PredictRequest.instances, this can be set as a replacement of ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, we will generate predict schema from collected predict requests.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"scheduleState": "A String", # Output only. Schedule state when the monitoring job is in Running state.
"state": "A String", # Output only. The detailed state of the monitoring job. When the job is still creating, the state will be 'PENDING'. Once the job is successfully created, the state will be 'RUNNING'. Pause the job, the state will be 'PAUSED'. Resume the job, the state will return to 'RUNNING'.
"statsAnomaliesBaseDirectory": { # The Google Cloud Storage location where the output is to be written to. # Stats anomalies base folder path.
@@ -930,6 +938,8 @@ Method Details
"nextScheduleTime": "A String", # Output only. Timestamp when this monitoring pipeline will be scheduled to run for the next round.
"predictInstanceSchemaUri": "A String", # YAML schema file uri describing the format of a single instance, which are given to format this Endpoint's prediction (and explanation). If not set, we will generate predict schema from collected predict requests.
"samplePredictInstance": "", # Sample Predict instance, same format as PredictRequest.instances, this can be set as a replacement of ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, we will generate predict schema from collected predict requests.
+ "satisfiesPzi": True or False, # Output only. Reserved for future use.
+ "satisfiesPzs": True or False, # Output only. Reserved for future use.
"scheduleState": "A String", # Output only. Schedule state when the monitoring job is in Running state.
"state": "A String", # Output only. The detailed state of the monitoring job. When the job is still creating, the state will be 'PENDING'. Once the job is successfully created, the state will be 'RUNNING'. Pause the job, the state will be 'PAUSED'. Resume the job, the state will return to 'RUNNING'.
"statsAnomaliesBaseDirectory": { # The Google Cloud Storage location where the output is to be written to. # Stats anomalies base folder path.
diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.publishers.models.html b/docs/dyn/aiplatform_v1beta1.projects.locations.publishers.models.html
index 9875dffca29..21a3923de78 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.publishers.models.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.publishers.models.html
@@ -455,6 +455,7 @@ Method Details
"modelName": "A String", # The model name to use. Only the public LLM models are accepted. e.g. gemini-1.5-pro-001.
},
},
+ "seed": 42, # Optional. Seed.
"stopSequences": [ # Optional. Stop sequences.
"A String",
],
@@ -611,6 +612,7 @@ Method Details
{ # Response message for [PredictionService.GenerateContent].
"candidates": [ # Output only. Generated candidates.
{ # A response candidate generated from the model.
+ "avgLogprobs": 3.14, # Output only. Average log probability score of the candidate.
"citationMetadata": { # A collection of source attributions for a piece of content. # Output only. Source attribution of the generated content.
"citations": [ # Output only. List of citations.
{ # Source attributions for content.
@@ -730,7 +732,7 @@ Method Details
},
"usageMetadata": { # Usage metadata about response(s). # Usage metadata about the response(s).
"candidatesTokenCount": 42, # Number of tokens in the response(s).
- "promptTokenCount": 42, # Number of tokens in the request.
+ "promptTokenCount": 42, # Number of tokens in the request. When `cached_content` is set, this is still the total effective prompt size meaning this includes the number of tokens in the cached content.
"totalTokenCount": 42,
},
}
@@ -1122,6 +1124,7 @@ Method Details
"modelName": "A String", # The model name to use. Only the public LLM models are accepted. e.g. gemini-1.5-pro-001.
},
},
+ "seed": 42, # Optional. Seed.
"stopSequences": [ # Optional. Stop sequences.
"A String",
],
@@ -1278,6 +1281,7 @@ Method Details
{ # Response message for [PredictionService.GenerateContent].
"candidates": [ # Output only. Generated candidates.
{ # A response candidate generated from the model.
+ "avgLogprobs": 3.14, # Output only. Average log probability score of the candidate.
"citationMetadata": { # A collection of source attributions for a piece of content. # Output only. Source attribution of the generated content.
"citations": [ # Output only. List of citations.
{ # Source attributions for content.
@@ -1397,7 +1401,7 @@ Method Details
},
"usageMetadata": { # Usage metadata about response(s). # Usage metadata about the response(s).
"candidatesTokenCount": 42, # Number of tokens in the response(s).
- "promptTokenCount": 42, # Number of tokens in the request.
+ "promptTokenCount": 42, # Number of tokens in the request. When `cached_content` is set, this is still the total effective prompt size meaning this includes the number of tokens in the cached content.
"totalTokenCount": 42,
},
}
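The hunks above add two fields to the `generateContent` surface: an optional `seed` in `generationConfig` and an `avgLogprobs` score per candidate, and they clarify that `promptTokenCount` includes cached-content tokens when `cached_content` is set. A minimal sketch of those shapes, using hand-built dicts rather than a live API call (the literal values are illustrative only):

```python
# Sketch of the new v1beta1 generateContent fields documented above.
# request_body / mock_response are hand-written mocks, not real API output.
request_body = {
    "contents": [{"role": "user", "parts": [{"text": "Say hi."}]}],
    "generationConfig": {
        "seed": 42,              # new: Optional. Seed for reproducible sampling.
        "stopSequences": ["\n\n"],
    },
}

mock_response = {
    "candidates": [
        {
            "avgLogprobs": -0.12,  # new: average log probability of the candidate
            "content": {"parts": [{"text": "Hi!"}]},
        }
    ],
    "usageMetadata": {
        "candidatesTokenCount": 2,
        # promptTokenCount now counts cached-content tokens too, so it is
        # the total effective prompt size when cached_content is set.
        "promptTokenCount": 5,
        "totalTokenCount": 7,
    },
}

# avgLogprobs gives a per-candidate score to rank generations by.
best = max(mock_response["candidates"], key=lambda c: c["avgLogprobs"])
```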
diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.ragCorpora.ragFiles.html b/docs/dyn/aiplatform_v1beta1.projects.locations.ragCorpora.ragFiles.html
index fe542508d91..c3701ed1def 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.ragCorpora.ragFiles.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.ragCorpora.ragFiles.html
@@ -257,6 +257,10 @@ Method Details
"chunkOverlap": 42, # The overlap between chunks.
"chunkSize": 42, # The size of the chunks.
},
+ "ragFileParsingConfig": { # Specifies the parsing config for RagFiles. # Specifies the parsing config for RagFiles.
+ "parsePdfsUsingOcr": True or False, # Whether to use OCR for PDFs.
+ "useAdvancedPdfParsing": True or False, # Whether to use advanced PDF parsing.
+ },
"slackSource": { # The Slack source for the ImportRagFilesRequest. # Slack channels with their corresponding access tokens.
"channels": [ # Required. The Slack channels.
{ # SlackChannels contains the Slack channels and corresponding access token.
diff --git a/docs/dyn/aiplatform_v1beta1.publishers.models.html b/docs/dyn/aiplatform_v1beta1.publishers.models.html
index cff0ff1eedb..f9ad6aefa71 100644
--- a/docs/dyn/aiplatform_v1beta1.publishers.models.html
+++ b/docs/dyn/aiplatform_v1beta1.publishers.models.html
@@ -78,7 +78,7 @@ Instance Methods
close()
Close httplib2 connections.
- get(name, isHuggingFaceModel=None, languageCode=None, view=None, x__xgafv=None)
+ get(name, huggingFaceToken=None, isHuggingFaceModel=None, languageCode=None, view=None, x__xgafv=None)
Gets a Model Garden publisher model.
list(parent, filter=None, languageCode=None, orderBy=None, pageSize=None, pageToken=None, view=None, x__xgafv=None)
@@ -93,11 +93,12 @@ Method Details
-get(name, isHuggingFaceModel=None, languageCode=None, view=None, x__xgafv=None)
+get(name, huggingFaceToken=None, isHuggingFaceModel=None, languageCode=None, view=None, x__xgafv=None)
Gets a Model Garden publisher model.
Args:
name: string, Required. The name of the PublisherModel resource. Format: `publishers/{publisher}/models/{publisher_model}` (required)
+ huggingFaceToken: string, Optional. Token used to access Hugging Face gated models.
isHuggingFaceModel: boolean, Optional. Boolean indicates whether the requested model is a Hugging Face model.
languageCode: string, Optional. The IETF BCP-47 language code representing the language in which the publisher model's text information should be written in.
view: string, Optional. PublisherModel view specifying which fields to read.
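The new `huggingFaceToken` query parameter slots in alongside the existing `isHuggingFaceModel` flag. A hedged sketch of the keyword arguments as the generated client would take them; the model name, token, and `view` value below are hypothetical placeholders, and the live call (commented out) would additionally need `google-api-python-client` with application default credentials:

```python
# Hypothetical arguments for publishers.models.get, per the signature above.
get_kwargs = {
    "name": "publishers/hf-meta-llama/models/llama-3-8b",  # hypothetical model
    "huggingFaceToken": "hf_xxx",      # new: only needed for gated HF models
    "isHuggingFaceModel": True,
    "languageCode": "en",
    "view": "PUBLISHER_MODEL_VIEW_BASIC",  # assumed PublisherModelView value
}

# from googleapiclient.discovery import build
# service = build("aiplatform", "v1beta1")
# model = service.publishers().models().get(**get_kwargs).execute()
```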
diff --git a/docs/dyn/alloydb_v1.projects.locations.clusters.html b/docs/dyn/alloydb_v1.projects.locations.clusters.html
index 3545626d843..c2b8dc17b5d 100644
--- a/docs/dyn/alloydb_v1.projects.locations.clusters.html
+++ b/docs/dyn/alloydb_v1.projects.locations.clusters.html
@@ -260,6 +260,7 @@ Method Details
"subscriptionType": "A String", # Optional. Subscription type of the cluster.
"trialMetadata": { # Contains information and all metadata related to TRIAL clusters. # Output only. Metadata for free trial clusters
"endTime": "A String", # End time of the trial cluster.
+ "graceEndTime": "A String", # grace end time of the cluster.
"startTime": "A String", # start time of the trial cluster.
"upgradeTime": "A String", # Upgrade time of trial cluster to Standard cluster.
},
@@ -436,6 +437,7 @@ Method Details
"subscriptionType": "A String", # Optional. Subscription type of the cluster.
"trialMetadata": { # Contains information and all metadata related to TRIAL clusters. # Output only. Metadata for free trial clusters
"endTime": "A String", # End time of the trial cluster.
+ "graceEndTime": "A String", # grace end time of the cluster.
"startTime": "A String", # start time of the trial cluster.
"upgradeTime": "A String", # Upgrade time of trial cluster to Standard cluster.
},
@@ -661,6 +663,7 @@ Method Details
"subscriptionType": "A String", # Optional. Subscription type of the cluster.
"trialMetadata": { # Contains information and all metadata related to TRIAL clusters. # Output only. Metadata for free trial clusters
"endTime": "A String", # End time of the trial cluster.
+ "graceEndTime": "A String", # grace end time of the cluster.
"startTime": "A String", # start time of the trial cluster.
"upgradeTime": "A String", # Upgrade time of trial cluster to Standard cluster.
},
@@ -817,6 +820,7 @@ Method Details
"subscriptionType": "A String", # Optional. Subscription type of the cluster.
"trialMetadata": { # Contains information and all metadata related to TRIAL clusters. # Output only. Metadata for free trial clusters
"endTime": "A String", # End time of the trial cluster.
+ "graceEndTime": "A String", # grace end time of the cluster.
"startTime": "A String", # start time of the trial cluster.
"upgradeTime": "A String", # Upgrade time of trial cluster to Standard cluster.
},
@@ -982,6 +986,7 @@ Method Details
"subscriptionType": "A String", # Optional. Subscription type of the cluster.
"trialMetadata": { # Contains information and all metadata related to TRIAL clusters. # Output only. Metadata for free trial clusters
"endTime": "A String", # End time of the trial cluster.
+ "graceEndTime": "A String", # grace end time of the cluster.
"startTime": "A String", # start time of the trial cluster.
"upgradeTime": "A String", # Upgrade time of trial cluster to Standard cluster.
},
@@ -1208,6 +1213,7 @@ Method Details
"subscriptionType": "A String", # Optional. Subscription type of the cluster.
"trialMetadata": { # Contains information and all metadata related to TRIAL clusters. # Output only. Metadata for free trial clusters
"endTime": "A String", # End time of the trial cluster.
+ "graceEndTime": "A String", # grace end time of the cluster.
"startTime": "A String", # start time of the trial cluster.
"upgradeTime": "A String", # Upgrade time of trial cluster to Standard cluster.
},
diff --git a/docs/dyn/alloydb_v1.projects.locations.clusters.instances.html b/docs/dyn/alloydb_v1.projects.locations.clusters.instances.html
index 3c67a440003..52deac2a5d3 100644
--- a/docs/dyn/alloydb_v1.projects.locations.clusters.instances.html
+++ b/docs/dyn/alloydb_v1.projects.locations.clusters.instances.html
@@ -834,6 +834,9 @@ Method Details
The object takes the form of:
{
+ "nodeIds": [ # Optional. Full name of the nodes as obtained from INSTANCE_VIEW_FULL to restart upon. Only applicable for read instances.
+ "A String",
+ ],
"requestId": "A String", # Optional. An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. The server will guarantee that for at least 60 minutes after the first request. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported (00000000-0000-0000-0000-000000000000).
"validateOnly": True or False, # Optional. If set, performs request validation (e.g. permission checks and any other type of validation), but do not actually execute the restart.
}
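The restart request body gains an optional `nodeIds` list for restarting individual read-pool nodes. A minimal sketch of the body, with hypothetical node names; in practice the names would come from an instance `get` with the full instance view, as the doc above notes:

```python
# Sketch of the new AlloyDB instance-restart request body.
# Node names are hypothetical placeholders.
import uuid

restart_body = {
    "nodeIds": [                    # new: read-pool nodes to restart
        "my-instance-node-0",       # hypothetical node name
        "my-instance-node-1",
    ],
    "requestId": str(uuid.uuid4()),  # idempotency key; server dedupes >= 60 min
    "validateOnly": True,            # dry-run: validate only, do not restart
}
```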
diff --git a/docs/dyn/alloydb_v1alpha.projects.locations.clusters.html b/docs/dyn/alloydb_v1alpha.projects.locations.clusters.html
index 777eae99fb3..3ae589ad768 100644
--- a/docs/dyn/alloydb_v1alpha.projects.locations.clusters.html
+++ b/docs/dyn/alloydb_v1alpha.projects.locations.clusters.html
@@ -117,6 +117,9 @@ Instance Methods
switchover(name, body=None, x__xgafv=None)
Switches the role of PRIMARY and SECONDARY cluster without any data loss. This promotes the SECONDARY cluster to PRIMARY and sets up original PRIMARY cluster to replicate from this newly promoted cluster.
+
+ upgrade(name, body=None, x__xgafv=None)
+Upgrades a single Cluster. Imperative only.
Method Details
close()
@@ -170,6 +173,11 @@ Method Details
"backupName": "A String", # Required. The name of the backup resource with the format: * projects/{project}/locations/{region}/backups/{backup_id}
"backupUid": "A String", # Output only. The system-generated UID of the backup which was used to create this resource. The UID is generated when the backup is created, and it is retained until the backup is deleted.
},
+ "cloudsqlBackupRunSource": { # The source CloudSQL backup resource. # Output only. Cluster created from CloudSQL snapshot.
+ "backupRunId": "A String", # Required. The CloudSQL backup run ID.
+ "instanceId": "A String", # Required. The CloudSQL instance ID.
+ "project": "A String", # The project ID of the source CloudSQL instance. This should be the same as the AlloyDB cluster's project.
+ },
"clusterType": "A String", # Output only. The type of the cluster. This is an output-only field and it's populated at the Cluster creation time or the Cluster promotion time. The cluster type is determined by which RPC was used to create the cluster (i.e. `CreateCluster` vs. `CreateSecondaryCluster`
"continuousBackupConfig": { # ContinuousBackupConfig describes the continuous backups recovery configurations of a cluster. # Optional. Continuous backup configuration for this cluster.
"enabled": True or False, # Whether ContinuousBackup is enabled.
@@ -264,6 +272,7 @@ Method Details
"subscriptionType": "A String", # Optional. Subscription type of the cluster.
"trialMetadata": { # Contains information and all metadata related to TRIAL clusters. # Output only. Metadata for free trial clusters
"endTime": "A String", # End time of the trial cluster.
+ "graceEndTime": "A String", # grace end time of the cluster.
"startTime": "A String", # start time of the trial cluster.
"upgradeTime": "A String", # Upgrade time of trial cluster to Standard cluster.
},
@@ -350,6 +359,11 @@ Method Details
"backupName": "A String", # Required. The name of the backup resource with the format: * projects/{project}/locations/{region}/backups/{backup_id}
"backupUid": "A String", # Output only. The system-generated UID of the backup which was used to create this resource. The UID is generated when the backup is created, and it is retained until the backup is deleted.
},
+ "cloudsqlBackupRunSource": { # The source CloudSQL backup resource. # Output only. Cluster created from CloudSQL snapshot.
+ "backupRunId": "A String", # Required. The CloudSQL backup run ID.
+ "instanceId": "A String", # Required. The CloudSQL instance ID.
+ "project": "A String", # The project ID of the source CloudSQL instance. This should be the same as the AlloyDB cluster's project.
+ },
"clusterType": "A String", # Output only. The type of the cluster. This is an output-only field and it's populated at the Cluster creation time or the Cluster promotion time. The cluster type is determined by which RPC was used to create the cluster (i.e. `CreateCluster` vs. `CreateSecondaryCluster`
"continuousBackupConfig": { # ContinuousBackupConfig describes the continuous backups recovery configurations of a cluster. # Optional. Continuous backup configuration for this cluster.
"enabled": True or False, # Whether ContinuousBackup is enabled.
@@ -444,6 +458,7 @@ Method Details
"subscriptionType": "A String", # Optional. Subscription type of the cluster.
"trialMetadata": { # Contains information and all metadata related to TRIAL clusters. # Output only. Metadata for free trial clusters
"endTime": "A String", # End time of the trial cluster.
+ "graceEndTime": "A String", # grace end time of the cluster.
"startTime": "A String", # start time of the trial cluster.
"upgradeTime": "A String", # Upgrade time of trial cluster to Standard cluster.
},
@@ -579,6 +594,11 @@ Method Details
"backupName": "A String", # Required. The name of the backup resource with the format: * projects/{project}/locations/{region}/backups/{backup_id}
"backupUid": "A String", # Output only. The system-generated UID of the backup which was used to create this resource. The UID is generated when the backup is created, and it is retained until the backup is deleted.
},
+ "cloudsqlBackupRunSource": { # The source CloudSQL backup resource. # Output only. Cluster created from CloudSQL snapshot.
+ "backupRunId": "A String", # Required. The CloudSQL backup run ID.
+ "instanceId": "A String", # Required. The CloudSQL instance ID.
+ "project": "A String", # The project ID of the source CloudSQL instance. This should be the same as the AlloyDB cluster's project.
+ },
"clusterType": "A String", # Output only. The type of the cluster. This is an output-only field and it's populated at the Cluster creation time or the Cluster promotion time. The cluster type is determined by which RPC was used to create the cluster (i.e. `CreateCluster` vs. `CreateSecondaryCluster`
"continuousBackupConfig": { # ContinuousBackupConfig describes the continuous backups recovery configurations of a cluster. # Optional. Continuous backup configuration for this cluster.
"enabled": True or False, # Whether ContinuousBackup is enabled.
@@ -673,6 +693,7 @@ Method Details
"subscriptionType": "A String", # Optional. Subscription type of the cluster.
"trialMetadata": { # Contains information and all metadata related to TRIAL clusters. # Output only. Metadata for free trial clusters
"endTime": "A String", # End time of the trial cluster.
+ "graceEndTime": "A String", # grace end time of the cluster.
"startTime": "A String", # start time of the trial cluster.
"upgradeTime": "A String", # Upgrade time of trial cluster to Standard cluster.
},
@@ -739,6 +760,11 @@ Method Details
"backupName": "A String", # Required. The name of the backup resource with the format: * projects/{project}/locations/{region}/backups/{backup_id}
"backupUid": "A String", # Output only. The system-generated UID of the backup which was used to create this resource. The UID is generated when the backup is created, and it is retained until the backup is deleted.
},
+ "cloudsqlBackupRunSource": { # The source CloudSQL backup resource. # Output only. Cluster created from CloudSQL snapshot.
+ "backupRunId": "A String", # Required. The CloudSQL backup run ID.
+ "instanceId": "A String", # Required. The CloudSQL instance ID.
+ "project": "A String", # The project ID of the source CloudSQL instance. This should be the same as the AlloyDB cluster's project.
+ },
"clusterType": "A String", # Output only. The type of the cluster. This is an output-only field and it's populated at the Cluster creation time or the Cluster promotion time. The cluster type is determined by which RPC was used to create the cluster (i.e. `CreateCluster` vs. `CreateSecondaryCluster`
"continuousBackupConfig": { # ContinuousBackupConfig describes the continuous backups recovery configurations of a cluster. # Optional. Continuous backup configuration for this cluster.
"enabled": True or False, # Whether ContinuousBackup is enabled.
@@ -833,6 +859,7 @@ Method Details
"subscriptionType": "A String", # Optional. Subscription type of the cluster.
"trialMetadata": { # Contains information and all metadata related to TRIAL clusters. # Output only. Metadata for free trial clusters
"endTime": "A String", # End time of the trial cluster.
+ "graceEndTime": "A String", # grace end time of the cluster.
"startTime": "A String", # start time of the trial cluster.
"upgradeTime": "A String", # Upgrade time of trial cluster to Standard cluster.
},
@@ -908,6 +935,11 @@ Method Details
"backupName": "A String", # Required. The name of the backup resource with the format: * projects/{project}/locations/{region}/backups/{backup_id}
"backupUid": "A String", # Output only. The system-generated UID of the backup which was used to create this resource. The UID is generated when the backup is created, and it is retained until the backup is deleted.
},
+ "cloudsqlBackupRunSource": { # The source CloudSQL backup resource. # Output only. Cluster created from CloudSQL snapshot.
+ "backupRunId": "A String", # Required. The CloudSQL backup run ID.
+ "instanceId": "A String", # Required. The CloudSQL instance ID.
+ "project": "A String", # The project ID of the source CloudSQL instance. This should be the same as the AlloyDB cluster's project.
+ },
"clusterType": "A String", # Output only. The type of the cluster. This is an output-only field and it's populated at the Cluster creation time or the Cluster promotion time. The cluster type is determined by which RPC was used to create the cluster (i.e. `CreateCluster` vs. `CreateSecondaryCluster`
"continuousBackupConfig": { # ContinuousBackupConfig describes the continuous backups recovery configurations of a cluster. # Optional. Continuous backup configuration for this cluster.
"enabled": True or False, # Whether ContinuousBackup is enabled.
@@ -1002,6 +1034,7 @@ Method Details
"subscriptionType": "A String", # Optional. Subscription type of the cluster.
"trialMetadata": { # Contains information and all metadata related to TRIAL clusters. # Output only. Metadata for free trial clusters
"endTime": "A String", # End time of the trial cluster.
+ "graceEndTime": "A String", # grace end time of the cluster.
"startTime": "A String", # start time of the trial cluster.
"upgradeTime": "A String", # Upgrade time of trial cluster to Standard cluster.
},
@@ -1138,6 +1171,11 @@ Method Details
"backupName": "A String", # Required. The name of the backup resource with the format: * projects/{project}/locations/{region}/backups/{backup_id}
"backupUid": "A String", # Output only. The system-generated UID of the backup which was used to create this resource. The UID is generated when the backup is created, and it is retained until the backup is deleted.
},
+ "cloudsqlBackupRunSource": { # The source CloudSQL backup resource. # Output only. Cluster created from CloudSQL snapshot.
+ "backupRunId": "A String", # Required. The CloudSQL backup run ID.
+ "instanceId": "A String", # Required. The CloudSQL instance ID.
+ "project": "A String", # The project ID of the source CloudSQL instance. This should be the same as the AlloyDB cluster's project.
+ },
"clusterType": "A String", # Output only. The type of the cluster. This is an output-only field and it's populated at the Cluster creation time or the Cluster promotion time. The cluster type is determined by which RPC was used to create the cluster (i.e. `CreateCluster` vs. `CreateSecondaryCluster`
"continuousBackupConfig": { # ContinuousBackupConfig describes the continuous backups recovery configurations of a cluster. # Optional. Continuous backup configuration for this cluster.
"enabled": True or False, # Whether ContinuousBackup is enabled.
@@ -1232,6 +1270,7 @@ Method Details
"subscriptionType": "A String", # Optional. Subscription type of the cluster.
"trialMetadata": { # Contains information and all metadata related to TRIAL clusters. # Output only. Metadata for free trial clusters
"endTime": "A String", # End time of the trial cluster.
+ "graceEndTime": "A String", # grace end time of the cluster.
"startTime": "A String", # start time of the trial cluster.
"upgradeTime": "A String", # Upgrade time of trial cluster to Standard cluster.
},
@@ -1319,4 +1358,49 @@ Method Details
}
+
+
upgrade(name, body=None, x__xgafv=None)
+
Upgrades a single Cluster. Imperative only.
+
+Args:
+ name: string, Required. The resource name of the cluster. (required)
+ body: object, The request body.
+ The object takes the form of:
+
+{ # Upgrades a cluster.
+ "etag": "A String", # Optional. The current etag of the Cluster. If an etag is provided and does not match the current etag of the Cluster, upgrade will be blocked and an ABORTED error will be returned.
+ "requestId": "A String", # Optional. An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. The server will guarantee that for at least 60 minutes after the first request. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported (00000000-0000-0000-0000-000000000000).
+ "validateOnly": True or False, # Optional. If set, performs request validation (e.g. permission checks and any other type of validation), but does not actually execute the upgrade.
+ "version": "A String", # Required. The version the cluster is going to be upgraded to.
+}
+
+ x__xgafv: string, V1 error format.
+ Allowed values
+ 1 - v1 error format
+ 2 - v2 error format
+
+Returns:
+ An object of the form:
+
+ { # This resource represents a long-running operation that is the result of a network API call.
+ "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
+ "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
+ "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+ "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+ {
+ "a_key": "", # Properties of the object. Contains field @type with type URL.
+ },
+ ],
+ "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+ },
+ "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
+ "a_key": "", # Properties of the object. Contains field @type with type URL.
+ },
+ "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
+ "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
+ "a_key": "", # Properties of the object. Contains field @type with type URL.
+ },
+}
+
+
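
For reviewers, a minimal sketch of how the new `clusters.upgrade` method above might be exercised from the discovery-based client. The helper only assembles the documented request body; the actual client call is shown commented out because it needs network access and credentials, and the project, location, cluster, and version strings (`my-project`, `us-central1`, `my-cluster`, `POSTGRES_15`) are hypothetical placeholders, not values from this patch.

```python
# Sketch for the new clusters.upgrade method (assumes google-api-python-client
# is installed and application-default credentials are configured when the
# commented call is run). Only `version` is required by the documented body.

def upgrade_request_body(version, validate_only=True, request_id=None, etag=None):
    """Build the JSON body for clusters.upgrade as documented above."""
    body = {"version": version, "validateOnly": validate_only}
    if request_id is not None:
        body["requestId"] = request_id  # must be a valid, non-zero UUID
    if etag is not None:
        body["etag"] = etag  # a mismatched etag makes the server return ABORTED
    return body

# Usage against the discovery-based client (not executed here):
# from googleapiclient import discovery
# alloydb = discovery.build("alloydb", "v1")
# op = alloydb.projects().locations().clusters().upgrade(
#     name="projects/my-project/locations/us-central1/clusters/my-cluster",
#     body=upgrade_request_body("POSTGRES_15"),
# ).execute()
# # `op` is a long-running Operation; poll it until op["done"] is True,
# # then inspect op["response"] or op["error"].
```

Setting `validateOnly=True` by default mirrors the documented behavior: the server performs permission and request validation without executing the upgrade, which is a safe first call before the real operation.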