diff --git a/.reuse/dep5 b/.reuse/dep5 index ddfb05b2e..dc5965bed 100644 --- a/.reuse/dep5 +++ b/.reuse/dep5 @@ -1179,6 +1179,10 @@ Files: crd-catalog/spotahome/redis-operator/* Copyright: The spotahome/redis-operator Authors License: Apache-2.0 +Files: crd-catalog/stackabletech/trino-operator/* +Copyright: The stackabletech/trino-operator Authors +License: OSL-3.0 + Files: crd-catalog/storageos/operator/* Copyright: The storageos/operator Authors License: Apache-2.0 diff --git a/LICENSES/OSL-3.0.txt b/LICENSES/OSL-3.0.txt new file mode 100644 index 000000000..14acdb833 --- /dev/null +++ b/LICENSES/OSL-3.0.txt @@ -0,0 +1,47 @@ +Open Software License v. 3.0 (OSL-3.0) + +This Open Software License (the "License") applies to any original work of authorship (the "Original Work") whose owner (the "Licensor") has placed the following licensing notice adjacent to the copyright notice for the Original Work: + + Licensed under the Open Software License version 3.0 + +1) Grant of Copyright License. Licensor grants You a worldwide, royalty-free, non-exclusive, sublicensable license, for the duration of the copyright, to do the following: + + a) to reproduce the Original Work in copies, either alone or as part of a collective work; + + b) to translate, adapt, alter, transform, modify, or arrange the Original Work, thereby creating derivative works ("Derivative Works") based upon the Original Work; + + c) to distribute or communicate copies of the Original Work and Derivative Works to the public, with the proviso that copies of Original Work or Derivative Works that You distribute or communicate shall be licensed under this Open Software License; + + d) to perform the Original Work publicly; and + + e) to display the Original Work publicly. + +2) Grant of Patent License. 
Licensor grants You a worldwide, royalty-free, non-exclusive, sublicensable license, under patent claims owned or controlled by the Licensor that are embodied in the Original Work as furnished by the Licensor, for the duration of the patents, to make, use, sell, offer for sale, have made, and import the Original Work and Derivative Works. + +3) Grant of Source Code License. The term "Source Code" means the preferred form of the Original Work for making modifications to it and all available documentation describing how to modify the Original Work. Licensor agrees to provide a machine-readable copy of the Source Code of the Original Work along with each copy of the Original Work that Licensor distributes. Licensor reserves the right to satisfy this obligation by placing a machine-readable copy of the Source Code in an information repository reasonably calculated to permit inexpensive and convenient access by You for as long as Licensor continues to distribute the Original Work. + +4) Exclusions From License Grant. Neither the names of Licensor, nor the names of any contributors to the Original Work, nor any of their trademarks or service marks, may be used to endorse or promote products derived from this Original Work without express prior permission of the Licensor. Except as expressly stated herein, nothing in this License grants any license to Licensor’s trademarks, copyrights, patents, trade secrets or any other intellectual property. No patent license is granted to make, use, sell, offer for sale, have made, or import embodiments of any patent claims other than the licensed claims defined in Section 2. No license is granted to the trademarks of Licensor even if such marks are included in the Original Work. Nothing in this License shall be interpreted to prohibit Licensor from licensing under terms different from this License any Original Work that Licensor otherwise would have a right to license. + +5) External Deployment. 
The term "External Deployment" means the use, distribution, or communication of the Original Work or Derivative Works in any way such that the Original Work or Derivative Works may be used by anyone other than You, whether those works are distributed or communicated to those persons or made available as an application intended for use over a network. As an express condition for the grants of license hereunder, You must treat any External Deployment by You of the Original Work or a Derivative Work as a distribution under section 1(c). + +6) Attribution Rights. You must retain, in the Source Code of any Derivative Works that You create, all copyright, patent, or trademark notices from the Source Code of the Original Work, as well as any notices of licensing and any descriptive text identified therein as an "Attribution Notice." You must cause the Source Code for any Derivative Works that You create to carry a prominent Attribution Notice reasonably calculated to inform recipients that You have modified the Original Work. + +7) Warranty of Provenance and Disclaimer of Warranty. Licensor warrants that the copyright in and to the Original Work and the patent rights granted herein by Licensor are owned by the Licensor or are sublicensed to You under the terms of this License with the permission of the contributor(s) of those copyrights and patent rights. Except as expressly stated in the immediately preceding sentence, the Original Work is provided under this License on an "AS IS" BASIS and WITHOUT WARRANTY, either express or implied, including, without limitation, the warranties of non-infringement, merchantability or fitness for a particular purpose. THE ENTIRE RISK AS TO THE QUALITY OF THE ORIGINAL WORK IS WITH YOU. This DISCLAIMER OF WARRANTY constitutes an essential part of this License. No license to the Original Work is granted by this License except under this disclaimer. + +8) Limitation of Liability. 
Under no circumstances and under no legal theory, whether in tort (including negligence), contract, or otherwise, shall the Licensor be liable to anyone for any indirect, special, incidental, or consequential damages of any character arising as a result of this License or the use of the Original Work including, without limitation, damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses. This limitation of liability shall not apply to the extent applicable law prohibits such limitation. + +9) Acceptance and Termination. If, at any time, You expressly assented to this License, that assent indicates your clear and irrevocable acceptance of this License and all of its terms and conditions. If You distribute or communicate copies of the Original Work or a Derivative Work, You must make a reasonable effort under the circumstances to obtain the express assent of recipients to the terms of this License. This License conditions your rights to undertake the activities listed in Section 1, including your right to create Derivative Works based upon the Original Work, and doing so without honoring these terms and conditions is prohibited by copyright law and international treaty. Nothing in this License is intended to affect copyright exceptions and limitations (including “fair use” or “fair dealing”). This License shall terminate immediately and You may no longer exercise any of the rights granted to You by this License upon your failure to honor the conditions in Section 1(c). + +10) Termination for Patent Action. This License shall terminate automatically and You may no longer exercise any of the rights granted to You by this License as of the date You commence an action, including a cross-claim or counterclaim, against Licensor or any licensee alleging that the Original Work infringes a patent. 
This termination provision shall not apply for an action alleging patent infringement by combinations of the Original Work with other software or hardware. + +11) Jurisdiction, Venue and Governing Law. Any action or suit relating to this License may be brought only in the courts of a jurisdiction wherein the Licensor resides or in which Licensor conducts its primary business, and under the laws of that jurisdiction excluding its conflict-of-law provisions. The application of the United Nations Convention on Contracts for the International Sale of Goods is expressly excluded. Any use of the Original Work outside the scope of this License or after its termination shall be subject to the requirements and penalties of copyright or patent law in the appropriate jurisdiction. This section shall survive the termination of this License. + +12) Attorneys' Fees. In any action to enforce the terms of this License or seeking damages relating thereto, the prevailing party shall be entitled to recover its costs and expenses, including, without limitation, reasonable attorneys' fees and costs incurred in connection with such action, including any appeal of such action. This section shall survive the termination of this License. + +13) Miscellaneous. If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. + +14) Definition of "You" in This License. "You" throughout this License, whether in upper or lower case, means an individual or a legal entity exercising rights under, and complying with all of the terms of, this License. For legal entities, "You" includes any entity that controls, is controlled by, or is under common control with you. 
For purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. + +15) Right to Use. You may use the Original Work in all ways not otherwise restricted or conditioned by this License or by law, and Licensor promises not to interfere with or be responsible for such uses by You. + +16) Modification of This License. This License is Copyright (c) 2005 Lawrence Rosen. Permission is granted to copy, distribute, or communicate this License without modification. Nothing in this License permits You to modify this License as applied to the Original Work or to Derivative Works. However, You may modify the text of this License and copy, distribute or communicate your modified version (the "Modified License") and apply it to other original works of authorship subject to the following conditions: (i) You may not indicate in any way that your Modified License is the "Open Software License" or "OSL" and you may not use those names in the name of your Modified License; (ii) You must replace the notice specified in the first paragraph above with the notice "Licensed under " or with a notice of your own that is not confusingly similar to the notice in this License; and (iii) You may not claim that your original works are open source software unless your Modified License has been approved by Open Source Initiative (OSI) and You comply with its license review and certification process. 
diff --git a/code-generator/src/catalog.rs b/code-generator/src/catalog.rs index 50ea8b5e1..c0551f9a7 100644 --- a/code-generator/src/catalog.rs +++ b/code-generator/src/catalog.rs @@ -2932,6 +2932,13 @@ pub const CRD_V1_SOURCES: &'static [UpstreamSource] = &[ "https://github.com/spotahome/redis-operator/blob/master/manifests/databases.spotahome.com_redisfailovers.yaml", ], }, + UpstreamSource { + project_name: "stackabletech/trino-operator", + license: OSL_V3, + urls: &[ + "https://github.com/stackabletech/trino-operator/blob/main/deploy/helm/trino-operator/crds/crds.yaml", + ], + }, UpstreamSource { project_name: "storageos/operator", license: APACHE_V2, @@ -3176,3 +3183,4 @@ const UPL_V1: &'static str = "UPL-1.0"; const MPL_V2: &'static str = "MPL-2.0"; const KUBEMOD: &'static str = "LicenseRef-Kubemod"; const HASHICORP: &'static str = "LicenseRef-HashiCorp"; +const OSL_V3: &'static str = "OSL-3.0"; diff --git a/crd-catalog/stackabletech/trino-operator/trino.stackable.tech/v1alpha1/trinocatalogs.yaml b/crd-catalog/stackabletech/trino-operator/trino.stackable.tech/v1alpha1/trinocatalogs.yaml new file mode 100644 index 000000000..f81e73740 --- /dev/null +++ b/crd-catalog/stackabletech/trino-operator/trino.stackable.tech/v1alpha1/trinocatalogs.yaml @@ -0,0 +1,536 @@ +apiVersion: "apiextensions.k8s.io/v1" +kind: "CustomResourceDefinition" +metadata: + annotations: + helm.sh/resource-policy: "keep" + name: "trinocatalogs.trino.stackable.tech" +spec: + group: "trino.stackable.tech" + names: + categories: [] + kind: "TrinoCatalog" + plural: "trinocatalogs" + shortNames: [] + singular: "trinocatalog" + scope: "Namespaced" + versions: + - additionalPrinterColumns: [] + name: "v1alpha1" + schema: + openAPIV3Schema: + description: "Auto-generated derived type for TrinoCatalogSpec via `CustomResource`" + properties: + spec: + description: "The TrinoCatalog resource can be used to define catalogs in Kubernetes objects. 
Read more about it in the [Trino operator concept docs](https://docs.stackable.tech/home/nightly/trino/concepts) and the [Trino operator usage guide](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/). The documentation also contains a list of all the supported backends." + properties: + configOverrides: + additionalProperties: + type: "string" + default: {} + description: "The `configOverrides` allow overriding arbitrary Trino settings. For example, for Hive you could add `hive.metastore.username: trino`." + type: "object" + connector: + description: "The `connector` defines which connector is used." + oneOf: + - required: + - "blackHole" + - required: + - "deltaLake" + - required: + - "googleSheet" + - required: + - "generic" + - required: + - "hive" + - required: + - "iceberg" + - required: + - "tpcds" + - required: + - "tpch" + properties: + blackHole: + description: "A [Black Hole](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/black-hole) connector." + type: "object" + deltaLake: + description: "A [Delta Lake](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/delta-lake) connector." + properties: + hdfs: + description: "Connection to an HDFS cluster. Please make sure that the underlying Hive metastore also has access to the HDFS." + nullable: true + properties: + configMap: + description: "Name of the [discovery ConfigMap](https://docs.stackable.tech/home/nightly/concepts/service_discovery) providing information about the HDFS cluster." + type: "string" + required: + - "configMap" + type: "object" + metastore: + description: "Mandatory connection to a Hive Metastore, which will be used as storage for metadata." + properties: + configMap: + description: "Name of the [discovery ConfigMap](https://docs.stackable.tech/home/nightly/concepts/service_discovery) providing information about the Hive metastore."
+ type: "string" + required: + - "configMap" + type: "object" + s3: + description: "Connection to an S3 store. Please make sure that the underlying Hive metastore also has access to the S3 store. Learn more about S3 configuration in the [S3 concept docs](https://docs.stackable.tech/home/nightly/concepts/s3)." + nullable: true + oneOf: + - required: + - "inline" + - required: + - "reference" + properties: + inline: + description: "Inline definition of an S3 connection." + properties: + accessStyle: + description: "Which access style to use. Defaults to virtual hosted-style as most of the data products out there. Have a look at the [AWS documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html)." + enum: + - "Path" + - "VirtualHosted" + nullable: true + type: "string" + credentials: + description: "If the S3 uses authentication you have to specify you S3 credentials. In the most cases a [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass) providing `accessKey` and `secretKey` is sufficient." + nullable: true + properties: + scope: + description: "[Scope](https://docs.stackable.tech/home/nightly/secret-operator/scope) of the [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass)." + nullable: true + properties: + node: + default: false + description: "The node scope is resolved to the name of the Kubernetes Node object that the Pod is running on. This will typically be the DNS name of the node." + type: "boolean" + pod: + default: false + description: "The pod scope is resolved to the name of the Kubernetes Pod. This allows the secret to differentiate between StatefulSet replicas." + type: "boolean" + services: + default: [] + description: "The service scope allows Pod objects to specify custom scopes. This should typically correspond to Service objects that the Pod participates in." 
+ items: + type: "string" + type: "array" + type: "object" + secretClass: + description: "[SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass) containing the S3 credentials." + type: "string" + required: + - "secretClass" + type: "object" + host: + description: "Hostname of the S3 server without any protocol or port. For example: `west1.my-cloud.com`." + nullable: true + type: "string" + port: + description: "Port the S3 server listens on. If not specified, the product will determine the port to use." + format: "uint16" + minimum: 0.0 + nullable: true + type: "integer" + tls: + description: "If you want to use TLS when talking to S3, you can enable TLS-encrypted communication with this setting." + nullable: true + properties: + verification: + description: "The verification method used to verify the certificates of the server and/or the client." + oneOf: + - required: + - "none" + - required: + - "server" + properties: + none: + description: "Use TLS but don't verify certificates." + type: "object" + server: + description: "Use TLS and a CA certificate to verify the server." + properties: + caCert: + description: "CA cert to verify the server." + oneOf: + - required: + - "webPki" + - required: + - "secretClass" + properties: + secretClass: + description: "Name of the [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass) which will provide the CA certificate. Note that a SecretClass does not need to have a key but can also work with just a CA certificate, so if you were provided with a CA cert but don't have access to the key, you can still use this method." + type: "string" + webPki: + description: "Use TLS and the CA certificates trusted by the common web browsers to verify the server. This can be useful when you use, for example, public AWS S3 or other publicly available services."
+ type: "object" + type: "object" + required: + - "caCert" + type: "object" + type: "object" + required: + - "verification" + type: "object" + type: "object" + reference: + description: "A reference to an S3Connection resource." + type: "string" + type: "object" + required: + - "metastore" + type: "object" + generic: + description: "A [generic](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/generic) connector." + properties: + connectorName: + description: "Name of the Trino connector. Will be passed to `connector.name`." + type: "string" + properties: + additionalProperties: + oneOf: + - required: + - "value" + - required: + - "valueFromSecret" + - required: + - "valueFromConfigMap" + properties: + value: + type: "string" + valueFromConfigMap: + description: "Selects a key from a ConfigMap." + properties: + key: + description: "The key to select." + type: "string" + name: + description: "Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names" + type: "string" + optional: + description: "Specify whether the ConfigMap or its key must be defined" + type: "boolean" + required: + - "key" + type: "object" + valueFromSecret: + description: "SecretKeySelector selects a key of a Secret." + properties: + key: + description: "The key of the secret to select from. Must be a valid secret key." + type: "string" + name: + description: "Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names" + type: "string" + optional: + description: "Specify whether the Secret or its key must be defined" + type: "boolean" + required: + - "key" + type: "object" + type: "object" + default: {} + description: "A map of properties to put in the connector configuration file. They can be specified either as a raw value or be read from a Secret or ConfigMap." 
+ type: "object" + required: + - "connectorName" + type: "object" + googleSheet: + description: "A [Google sheets](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/google-sheets) connector." + properties: + cache: + description: "Cache the contents of sheets. This is used to reduce Google Sheets API usage and latency." + nullable: true + properties: + sheetsDataExpireAfterWrite: + description: "How long to cache spreadsheet data or metadata, defaults to `5m`." + nullable: true + type: "string" + sheetsDataMaxCacheSize: + description: "Maximum number of spreadsheets to cache, defaults to 1000." + nullable: true + type: "string" + type: "object" + credentialsSecret: + description: "The Secret containing the Google API JSON key file. The key used from the Secret is `credentials`." + type: "string" + metadataSheetId: + description: "Sheet ID of the spreadsheet, that contains the table mapping." + type: "string" + required: + - "credentialsSecret" + - "metadataSheetId" + type: "object" + hive: + description: "An [Apache Hive](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/hive) connector." + properties: + hdfs: + description: "Connection to an HDFS cluster. Please make sure that the underlying Hive metastore also has access to the HDFS." + nullable: true + properties: + configMap: + description: "Name of the [discovery ConfigMap](https://docs.stackable.tech/home/nightly/concepts/service_discovery) providing information about the HDFS cluster." + type: "string" + required: + - "configMap" + type: "object" + metastore: + description: "Mandatory connection to a Hive Metastore, which will be used as a storage for metadata." + properties: + configMap: + description: "Name of the [discovery ConfigMap](https://docs.stackable.tech/home/nightly/concepts/service_discovery) providing information about the Hive metastore." + type: "string" + required: + - "configMap" + type: "object" + s3: + description: "Connection to an S3 store. 
Please make sure that the underlying Hive metastore also has access to the S3 store. Learn more about S3 configuration in the [S3 concept docs](https://docs.stackable.tech/home/nightly/concepts/s3)." + nullable: true + oneOf: + - required: + - "inline" + - required: + - "reference" + properties: + inline: + description: "Inline definition of an S3 connection." + properties: + accessStyle: + description: "Which access style to use. Defaults to virtual hosted-style, which most data products use. Have a look at the [AWS documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html)." + enum: + - "Path" + - "VirtualHosted" + nullable: true + type: "string" + credentials: + description: "If the S3 store uses authentication, you must specify your S3 credentials. In most cases a [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass) providing `accessKey` and `secretKey` is sufficient." + nullable: true + properties: + scope: + description: "[Scope](https://docs.stackable.tech/home/nightly/secret-operator/scope) of the [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass)." + nullable: true + properties: + node: + default: false + description: "The node scope is resolved to the name of the Kubernetes Node object that the Pod is running on. This will typically be the DNS name of the node." + type: "boolean" + pod: + default: false + description: "The pod scope is resolved to the name of the Kubernetes Pod. This allows the secret to differentiate between StatefulSet replicas." + type: "boolean" + services: + default: [] + description: "The service scope allows Pod objects to specify custom scopes. This should typically correspond to Service objects that the Pod participates in."
+ items: + type: "string" + type: "array" + type: "object" + secretClass: + description: "[SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass) containing the S3 credentials." + type: "string" + required: + - "secretClass" + type: "object" + host: + description: "Hostname of the S3 server without any protocol or port. For example: `west1.my-cloud.com`." + nullable: true + type: "string" + port: + description: "Port the S3 server listens on. If not specified, the product will determine the port to use." + format: "uint16" + minimum: 0.0 + nullable: true + type: "integer" + tls: + description: "If you want to use TLS when talking to S3, you can enable TLS-encrypted communication with this setting." + nullable: true + properties: + verification: + description: "The verification method used to verify the certificates of the server and/or the client." + oneOf: + - required: + - "none" + - required: + - "server" + properties: + none: + description: "Use TLS but don't verify certificates." + type: "object" + server: + description: "Use TLS and a CA certificate to verify the server." + properties: + caCert: + description: "CA cert to verify the server." + oneOf: + - required: + - "webPki" + - required: + - "secretClass" + properties: + secretClass: + description: "Name of the [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass) which will provide the CA certificate. Note that a SecretClass does not need to have a key but can also work with just a CA certificate, so if you were provided with a CA cert but don't have access to the key, you can still use this method." + type: "string" + webPki: + description: "Use TLS and the CA certificates trusted by the common web browsers to verify the server. This can be useful when you use, for example, public AWS S3 or other publicly available services."
+ type: "object" + type: "object" + required: + - "caCert" + type: "object" + type: "object" + required: + - "verification" + type: "object" + type: "object" + reference: + description: "A reference to an S3Connection resource." + type: "string" + type: "object" + required: + - "metastore" + type: "object" + iceberg: + description: "An [Apache Iceberg](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/iceberg) connector." + properties: + hdfs: + description: "Connection to an HDFS cluster. Please make sure that the underlying Hive metastore also has access to the HDFS." + nullable: true + properties: + configMap: + description: "Name of the [discovery ConfigMap](https://docs.stackable.tech/home/nightly/concepts/service_discovery) providing information about the HDFS cluster." + type: "string" + required: + - "configMap" + type: "object" + metastore: + description: "Mandatory connection to a Hive Metastore, which will be used as a storage for metadata." + properties: + configMap: + description: "Name of the [discovery ConfigMap](https://docs.stackable.tech/home/nightly/concepts/service_discovery) providing information about the Hive metastore." + type: "string" + required: + - "configMap" + type: "object" + s3: + description: "Connection to an S3 store. Please make sure that the underlying Hive metastore also has access to the S3 store. Learn more about S3 configuration in the [S3 concept docs](https://docs.stackable.tech/home/nightly/concepts/s3)." + nullable: true + oneOf: + - required: + - "inline" + - required: + - "reference" + properties: + inline: + description: "Inline definition of an S3 connection." + properties: + accessStyle: + description: "Which access style to use. Defaults to virtual hosted-style as most of the data products out there. Have a look at the [AWS documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html)." 
+ enum: + - "Path" + - "VirtualHosted" + nullable: true + type: "string" + credentials: + description: "If the S3 store uses authentication, you must specify your S3 credentials. In most cases a [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass) providing `accessKey` and `secretKey` is sufficient." + nullable: true + properties: + scope: + description: "[Scope](https://docs.stackable.tech/home/nightly/secret-operator/scope) of the [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass)." + nullable: true + properties: + node: + default: false + description: "The node scope is resolved to the name of the Kubernetes Node object that the Pod is running on. This will typically be the DNS name of the node." + type: "boolean" + pod: + default: false + description: "The pod scope is resolved to the name of the Kubernetes Pod. This allows the secret to differentiate between StatefulSet replicas." + type: "boolean" + services: + default: [] + description: "The service scope allows Pod objects to specify custom scopes. This should typically correspond to Service objects that the Pod participates in." + items: + type: "string" + type: "array" + type: "object" + secretClass: + description: "[SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass) containing the S3 credentials." + type: "string" + required: + - "secretClass" + type: "object" + host: + description: "Hostname of the S3 server without any protocol or port. For example: `west1.my-cloud.com`." + nullable: true + type: "string" + port: + description: "Port the S3 server listens on. If not specified, the product will determine the port to use." + format: "uint16" + minimum: 0.0 + nullable: true + type: "integer" + tls: + description: "If you want to use TLS when talking to S3, you can enable TLS-encrypted communication with this setting."
+ nullable: true + properties: + verification: + description: "The verification method used to verify the certificates of the server and/or the client." + oneOf: + - required: + - "none" + - required: + - "server" + properties: + none: + description: "Use TLS but don't verify certificates." + type: "object" + server: + description: "Use TLS and a CA certificate to verify the server." + properties: + caCert: + description: "CA cert to verify the server." + oneOf: + - required: + - "webPki" + - required: + - "secretClass" + properties: + secretClass: + description: "Name of the [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass) which will provide the CA certificate. Note that a SecretClass does not need to have a key but can also work with just a CA certificate, so if you were provided with a CA cert but don't have access to the key, you can still use this method." + type: "string" + webPki: + description: "Use TLS and the CA certificates trusted by the common web browsers to verify the server. This can be useful when you use, for example, public AWS S3 or other publicly available services." + type: "object" + type: "object" + required: + - "caCert" + type: "object" + type: "object" + required: + - "verification" + type: "object" + type: "object" + reference: + description: "A reference to an S3Connection resource." + type: "string" + type: "object" + required: + - "metastore" + type: "object" + tpcds: + description: "A [TPC-DS](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/tpcds) connector." + type: "object" + tpch: + description: "A [TPC-H](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/tpch) connector."
+ type: "object" + type: "object" + required: + - "connector" + type: "object" + required: + - "spec" + title: "TrinoCatalog" + type: "object" + served: true + storage: true + subresources: {} diff --git a/crd-catalog/stackabletech/trino-operator/trino.stackable.tech/v1alpha1/trinoclusters.ignore b/crd-catalog/stackabletech/trino-operator/trino.stackable.tech/v1alpha1/trinoclusters.ignore new file mode 100644 index 000000000..595614ecd --- /dev/null +++ b/crd-catalog/stackabletech/trino-operator/trino.stackable.tech/v1alpha1/trinoclusters.ignore @@ -0,0 +1 @@ +cannot find type `TrinoClusterWorkersRoleGroupsConfigOverrides` in this scope diff --git a/crd-catalog/stackabletech/trino-operator/trino.stackable.tech/v1alpha1/trinoclusters.yaml b/crd-catalog/stackabletech/trino-operator/trino.stackable.tech/v1alpha1/trinoclusters.yaml new file mode 100644 index 000000000..f122d2eab --- /dev/null +++ b/crd-catalog/stackabletech/trino-operator/trino.stackable.tech/v1alpha1/trinoclusters.yaml @@ -0,0 +1,1299 @@ +apiVersion: "apiextensions.k8s.io/v1" +kind: "CustomResourceDefinition" +metadata: + annotations: + helm.sh/resource-policy: "keep" + name: "trinoclusters.trino.stackable.tech" +spec: + group: "trino.stackable.tech" + names: + categories: [] + kind: "TrinoCluster" + plural: "trinoclusters" + shortNames: + - "trino" + singular: "trinocluster" + scope: "Namespaced" + versions: + - additionalPrinterColumns: [] + name: "v1alpha1" + schema: + openAPIV3Schema: + description: "Auto-generated derived type for TrinoClusterSpec via `CustomResource`" + properties: + spec: + description: "A Trino cluster stacklet. This resource is managed by the Stackable operator for Trino. Find more information on how to use it and the resources that the operator generates in the [operator documentation](https://docs.stackable.tech/home/nightly/trino/)." + properties: + clusterConfig: + description: "Settings that affect all roles and role groups. 
The settings in the `clusterConfig` are cluster wide settings that do not need to be configurable at role or role group level." + properties: + authentication: + default: [] + description: "Authentication options for Trino. Learn more in the [Trino authentication usage guide](https://docs.stackable.tech/home/nightly/trino/usage-guide/security#authentication)." + items: + properties: + authenticationClass: + description: "A name/key which references an authentication class. To get the concrete [`AuthenticationClass`], we must resolve it. This resolution can be achieved by using [`ClientAuthenticationDetails::resolve_class`]." + type: "string" + oidc: + description: "This field contains authentication provider specific configuration.\n\nUse [`ClientAuthenticationDetails::oidc_or_error`] to get the value or report an error to the user." + nullable: true + properties: + clientCredentialsSecret: + description: "A reference to the OIDC client credentials secret. The secret contains the client id and secret." + type: "string" + extraScopes: + default: [] + description: "An optional list of extra scopes which get merged with the scopes defined in the [`AuthenticationClass`]." + items: + type: "string" + type: "array" + required: + - "clientCredentialsSecret" + type: "object" + required: + - "authenticationClass" + type: "object" + type: "array" + authorization: + description: "Authorization options for Trino. Learn more in the [Trino authorization usage guide](https://docs.stackable.tech/home/nightly/trino/usage-guide/security#authorization)." + nullable: true + properties: + opa: + description: "Configure the OPA stacklet [discovery ConfigMap](https://docs.stackable.tech/home/nightly/concepts/service_discovery) and the name of the Rego package containing your authorization rules. Consult the [OPA authorization documentation](https://docs.stackable.tech/home/nightly/concepts/opa) to learn how to deploy Rego authorization rules with OPA." 
+ nullable: true + properties: + configMapName: + description: "The [discovery ConfigMap](https://docs.stackable.tech/home/nightly/concepts/service_discovery) for the OPA stacklet that should be used for authorization requests." + type: "string" + package: + description: "The name of the Rego package containing the Rego rules for the product." + nullable: true + type: "string" + required: + - "configMapName" + type: "object" + type: "object" + catalogLabelSelector: + description: "[LabelSelector](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors) selecting the Catalogs to include in the Trino instance." + properties: + matchExpressions: + description: "matchExpressions is a list of label selector requirements. The requirements are ANDed." + items: + description: "A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values." + properties: + key: + description: "key is the label key that the selector applies to." + type: "string" + operator: + description: "operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist." + type: "string" + values: + description: "values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch." + items: + type: "string" + type: "array" + required: + - "key" + - "operator" + type: "object" + type: "array" + matchLabels: + additionalProperties: + type: "string" + description: "matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values array contains only \"value\". The requirements are ANDed." 
+ type: "object" + type: "object" + listenerClass: + default: "cluster-internal" + description: "This field controls which type of Service the Operator creates for this TrinoCluster:\n\n* cluster-internal: Use a ClusterIP service\n\n* external-unstable: Use a NodePort service\n\n* external-stable: Use a LoadBalancer service\n\nThis is a temporary solution with the goal to keep yaml manifests forward compatible. In the future, this setting will control which [ListenerClass](https://docs.stackable.tech/home/nightly/listener-operator/listenerclass.html) will be used to expose the service, and ListenerClass names will stay the same, allowing for a non-breaking change." + enum: + - "cluster-internal" + - "external-unstable" + - "external-stable" + type: "string" + tls: + default: + internalSecretClass: "tls" + serverSecretClass: "tls" + description: "TLS configuration options for server and internal communication." + properties: + internalSecretClass: + default: "tls" + description: "Only affects internal communication. Use mutual verification between Trino nodes. This setting controls: - Which cert the servers should use to authenticate themselves against other servers - Which ca.crt to use when validating the other server" + nullable: true + type: "string" + serverSecretClass: + default: "tls" + description: "Only affects client connections. This setting controls: - If TLS encryption is used at all - Which cert the servers should use to authenticate themselves against the client" + nullable: true + type: "string" + type: "object" + vectorAggregatorConfigMapName: + description: "Name of the Vector aggregator [discovery ConfigMap](https://docs.stackable.tech/home/nightly/concepts/service_discovery). It must contain the key `ADDRESS` with the address of the Vector aggregator. Follow the [logging tutorial](https://docs.stackable.tech/home/nightly/tutorials/logging-vector-aggregator) to learn how to configure log aggregation with Vector." 
+ nullable: true + type: "string" + required: + - "catalogLabelSelector" + type: "object" + clusterOperation: + default: + reconciliationPaused: false + stopped: false + description: "[Cluster operations](https://docs.stackable.tech/home/nightly/concepts/operations/cluster_operations) properties, allow stopping the product instance as well as pausing reconciliation." + properties: + reconciliationPaused: + default: false + description: "Flag to stop cluster reconciliation by the operator. This means that all changes in the custom resource spec are ignored until this flag is set to false or removed. The operator will however still watch the deployed resources at the time and update the custom resource status field. If applied at the same time with `stopped`, `reconciliationPaused` will take precedence over `stopped` and stop the reconciliation immediately." + type: "boolean" + stopped: + default: false + description: "Flag to stop the cluster. This means all deployed resources (e.g. Services, StatefulSets, ConfigMaps) are kept but all deployed Pods (e.g. replicas from a StatefulSet) are scaled to 0 and therefore stopped and removed. If applied at the same time with `reconciliationPaused`, the latter will pause reconciliation and `stopped` will have no effect until `reconciliationPaused` is set to false or removed." + type: "boolean" + type: "object" + coordinators: + description: "This struct represents a role - e.g. HDFS datanodes or Trino workers. It has a key-value-map containing all the roleGroups that are part of this role. Additionally, there is a `config`, which is configurable at the role *and* roleGroup level. Everything at roleGroup level is merged on top of what is configured on role level. There is also a second form of config, which can only be configured at role level, the `roleConfig`. You can learn more about this in the [Roles and role group concept documentation](https://docs.stackable.tech/home/nightly/concepts/roles-and-role-groups)." 
+ nullable: true + properties: + cliOverrides: + additionalProperties: + type: "string" + default: {} + type: "object" + config: + default: {} + properties: + affinity: + default: + nodeAffinity: null + nodeSelector: null + podAffinity: null + podAntiAffinity: null + description: "These configuration settings control [Pod placement](https://docs.stackable.tech/home/nightly/concepts/operations/pod_placement)." + properties: + nodeAffinity: + description: "Same as the `spec.affinity.nodeAffinity` field on the Pod, see the [Kubernetes docs](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node)" + nullable: true + type: "object" + x-kubernetes-preserve-unknown-fields: true + nodeSelector: + additionalProperties: + type: "string" + description: "Simple key-value pairs forming a nodeSelector, see the [Kubernetes docs](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node)" + nullable: true + type: "object" + podAffinity: + description: "Same as the `spec.affinity.podAffinity` field on the Pod, see the [Kubernetes docs](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node)" + nullable: true + type: "object" + x-kubernetes-preserve-unknown-fields: true + podAntiAffinity: + description: "Same as the `spec.affinity.podAntiAffinity` field on the Pod, see the [Kubernetes docs](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node)" + nullable: true + type: "object" + x-kubernetes-preserve-unknown-fields: true + required: + - "nodeAffinity" + - "podAffinity" + - "podAntiAffinity" + type: "object" + gracefulShutdownTimeout: + description: "Time period Pods have to gracefully shut down, e.g. `30m`, `1h` or `2d`. Consult the operator documentation for details." + nullable: true + type: "string" + logging: + default: + containers: {} + enableVectorAgent: null + description: "Logging configuration, learn more in the [logging concept documentation](https://docs.stackable.tech/home/nightly/concepts/logging)." 
+ properties: + containers: + additionalProperties: + anyOf: + - required: + - "custom" + - {} + description: "Log configuration of the container" + properties: + console: + description: "Configuration for the console appender" + nullable: true + properties: + level: + description: "The log level threshold. Log events with a lower log level are discarded." + enum: + - "TRACE" + - "DEBUG" + - "INFO" + - "WARN" + - "ERROR" + - "FATAL" + - "NONE" + nullable: true + type: "string" + type: "object" + custom: + description: "Custom log configuration provided in a ConfigMap" + properties: + configMap: + description: "ConfigMap containing the log configuration files" + nullable: true + type: "string" + type: "object" + file: + description: "Configuration for the file appender" + nullable: true + properties: + level: + description: "The log level threshold. Log events with a lower log level are discarded." + enum: + - "TRACE" + - "DEBUG" + - "INFO" + - "WARN" + - "ERROR" + - "FATAL" + - "NONE" + nullable: true + type: "string" + type: "object" + loggers: + additionalProperties: + description: "Configuration of a logger" + properties: + level: + description: "The log level threshold. Log events with a lower log level are discarded." + enum: + - "TRACE" + - "DEBUG" + - "INFO" + - "WARN" + - "ERROR" + - "FATAL" + - "NONE" + nullable: true + type: "string" + type: "object" + default: {} + description: "Configuration per logger" + type: "object" + type: "object" + description: "Log configuration per container." + type: "object" + enableVectorAgent: + description: "Whether or not to deploy a container with the Vector log agent." 
+ nullable: true + type: "boolean" + type: "object" + queryMaxMemory: + nullable: true + type: "string" + queryMaxMemoryPerNode: + nullable: true + type: "string" + resources: + default: + cpu: + max: null + min: null + memory: + limit: null + runtimeLimits: {} + storage: + data: + capacity: null + description: "Resource usage is configured here, this includes CPU usage, memory usage and disk storage usage, if this role needs any." + properties: + cpu: + default: + max: null + min: null + properties: + max: + description: "The maximum amount of CPU cores that can be requested by Pods. Equivalent to the `limit` for Pod resource configuration. Cores are specified either as a decimal point number or as milli units. For example:`1.5` will be 1.5 cores, also written as `1500m`." + nullable: true + type: "string" + min: + description: "The minimal amount of CPU cores that Pods need to run. Equivalent to the `request` for Pod resource configuration. Cores are specified either as a decimal point number or as milli units. For example:`1.5` will be 1.5 cores, also written as `1500m`." + nullable: true + type: "string" + type: "object" + memory: + properties: + limit: + description: "The maximum amount of memory that should be available to the Pod. Specified as a byte [Quantity](https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/quantity/), which means these suffixes are supported: E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value: `128974848, 129e6, 129M, 128974848000m, 123Mi`" + nullable: true + type: "string" + runtimeLimits: + description: "Additional options that can be specified." + type: "object" + type: "object" + storage: + properties: + data: + default: + capacity: null + properties: + capacity: + description: "Quantity is a fixed-point representation of a number. 
It provides convenient marshaling/unmarshaling in JSON and YAML, in addition to String() and AsInt64() accessors.\n\nThe serialization format is:\n\n``` ::= \n\n\t(Note that may be empty, from the \"\" case in .)\n\n ::= 0 | 1 | ... | 9 ::= | ::= | . | . | . ::= \"+\" | \"-\" ::= | ::= | | ::= Ki | Mi | Gi | Ti | Pi | Ei\n\n\t(International System of units; See: http://physics.nist.gov/cuu/Units/binary.html)\n\n ::= m | \"\" | k | M | G | T | P | E\n\n\t(Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.)\n\n ::= \"e\" | \"E\" ```\n\nNo matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. Numbers larger or more precise will be capped or rounded up. (E.g.: 0.1m will be rounded up to 1m.) This may be extended in the future if we require larger or smaller quantities.\n\nWhen a Quantity is parsed from a string, it will remember the type of suffix it had, and will use the same type again when it is serialized.\n\nBefore serializing, Quantity will be put in \"canonical form\". This means that Exponent/suffix will be adjusted up or down (with a corresponding increase or decrease in Mantissa) such that:\n\n- No precision is lost - No fractional digits will be emitted - The exponent (or suffix) is as large as possible.\n\nThe sign will be omitted unless the number is negative.\n\nExamples:\n\n- 1.5 will be serialized as \"1500m\" - 1.5Gi will be serialized as \"1536Mi\"\n\nNote that the quantity will NEVER be internally represented by a floating point number. That is the whole point of this exercise.\n\nNon-canonical values will still parse as long as they are well formed, but will be re-emitted in their canonical form. 
(So always use canonical form, or don't diff.)\n\nThis format is intended to make it difficult to use these numbers without writing some sort of special handling code in the hopes that that will cause implementors to also use a fixed point implementation." + nullable: true + type: "string" + selectors: + description: "A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects." + nullable: true + properties: + matchExpressions: + description: "matchExpressions is a list of label selector requirements. The requirements are ANDed." + items: + description: "A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values." + properties: + key: + description: "key is the label key that the selector applies to." + type: "string" + operator: + description: "operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist." + type: "string" + values: + description: "values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch." + items: + type: "string" + type: "array" + required: + - "key" + - "operator" + type: "object" + type: "array" + matchLabels: + additionalProperties: + type: "string" + description: "matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values array contains only \"value\". The requirements are ANDed." 
+ type: "object" + type: "object" + storageClass: + nullable: true + type: "string" + type: "object" + type: "object" + type: "object" + type: "object" + configOverrides: + additionalProperties: + additionalProperties: + type: "string" + type: "object" + default: {} + description: "The `configOverrides` can be used to configure properties in product config files that are not exposed in the CRD. Read the [config overrides documentation](https://docs.stackable.tech/home/nightly/concepts/overrides#config-overrides) and consult the operator specific usage guide documentation for details on the available config files and settings for the specific product." + type: "object" + envOverrides: + additionalProperties: + type: "string" + default: {} + description: "`envOverrides` configure environment variables to be set in the Pods. It is a map from strings to strings - environment variables and the value to set. Read the [environment variable overrides documentation](https://docs.stackable.tech/home/nightly/concepts/overrides#env-overrides) for more information and consult the operator specific usage guide to find out about the product specific environment variables that are available." + type: "object" + podOverrides: + default: {} + description: "In the `podOverrides` property you can define a [PodTemplateSpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#podtemplatespec-v1-core) to override any property that can be set on a Kubernetes Pod. Read the [Pod overrides documentation](https://docs.stackable.tech/home/nightly/concepts/overrides#pod-overrides) for more information." + type: "object" + x-kubernetes-preserve-unknown-fields: true + roleConfig: + default: + podDisruptionBudget: + enabled: true + maxUnavailable: null + description: "This is a product-agnostic RoleConfig, which is sufficient for most of the products." 
+ properties: + podDisruptionBudget: + default: + enabled: true + maxUnavailable: null + description: "This struct is used to configure:\n\n1. If PodDisruptionBudgets are created by the operator 2. The allowed number of Pods to be unavailable (`maxUnavailable`)\n\nLearn more in the [allowed Pod disruptions documentation](https://docs.stackable.tech/home/nightly/concepts/operations/pod_disruptions)." + properties: + enabled: + default: true + description: "Whether a PodDisruptionBudget should be written out for this role. Disabling this enables you to specify your own - custom - one. Defaults to true." + type: "boolean" + maxUnavailable: + description: "The number of Pods that are allowed to be down because of voluntary disruptions. If you don't explicitly set this, the operator will use a sane default based upon knowledge about the individual product." + format: "uint16" + minimum: 0.0 + nullable: true + type: "integer" + type: "object" + type: "object" + roleGroups: + additionalProperties: + properties: + cliOverrides: + additionalProperties: + type: "string" + default: {} + type: "object" + config: + default: {} + properties: + affinity: + default: + nodeAffinity: null + nodeSelector: null + podAffinity: null + podAntiAffinity: null + description: "These configuration settings control [Pod placement](https://docs.stackable.tech/home/nightly/concepts/operations/pod_placement)." 
+ properties: + nodeAffinity: + description: "Same as the `spec.affinity.nodeAffinity` field on the Pod, see the [Kubernetes docs](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node)" + nullable: true + type: "object" + x-kubernetes-preserve-unknown-fields: true + nodeSelector: + additionalProperties: + type: "string" + description: "Simple key-value pairs forming a nodeSelector, see the [Kubernetes docs](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node)" + nullable: true + type: "object" + podAffinity: + description: "Same as the `spec.affinity.podAffinity` field on the Pod, see the [Kubernetes docs](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node)" + nullable: true + type: "object" + x-kubernetes-preserve-unknown-fields: true + podAntiAffinity: + description: "Same as the `spec.affinity.podAntiAffinity` field on the Pod, see the [Kubernetes docs](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node)" + nullable: true + type: "object" + x-kubernetes-preserve-unknown-fields: true + required: + - "nodeAffinity" + - "podAffinity" + - "podAntiAffinity" + type: "object" + gracefulShutdownTimeout: + description: "Time period Pods have to gracefully shut down, e.g. `30m`, `1h` or `2d`. Consult the operator documentation for details." + nullable: true + type: "string" + logging: + default: + containers: {} + enableVectorAgent: null + description: "Logging configuration, learn more in the [logging concept documentation](https://docs.stackable.tech/home/nightly/concepts/logging)." + properties: + containers: + additionalProperties: + anyOf: + - required: + - "custom" + - {} + description: "Log configuration of the container" + properties: + console: + description: "Configuration for the console appender" + nullable: true + properties: + level: + description: "The log level threshold. Log events with a lower log level are discarded." 
+ enum: + - "TRACE" + - "DEBUG" + - "INFO" + - "WARN" + - "ERROR" + - "FATAL" + - "NONE" + nullable: true + type: "string" + type: "object" + custom: + description: "Custom log configuration provided in a ConfigMap" + properties: + configMap: + description: "ConfigMap containing the log configuration files" + nullable: true + type: "string" + type: "object" + file: + description: "Configuration for the file appender" + nullable: true + properties: + level: + description: "The log level threshold. Log events with a lower log level are discarded." + enum: + - "TRACE" + - "DEBUG" + - "INFO" + - "WARN" + - "ERROR" + - "FATAL" + - "NONE" + nullable: true + type: "string" + type: "object" + loggers: + additionalProperties: + description: "Configuration of a logger" + properties: + level: + description: "The log level threshold. Log events with a lower log level are discarded." + enum: + - "TRACE" + - "DEBUG" + - "INFO" + - "WARN" + - "ERROR" + - "FATAL" + - "NONE" + nullable: true + type: "string" + type: "object" + default: {} + description: "Configuration per logger" + type: "object" + type: "object" + description: "Log configuration per container." + type: "object" + enableVectorAgent: + description: "Whether or not to deploy a container with the Vector log agent." + nullable: true + type: "boolean" + type: "object" + queryMaxMemory: + nullable: true + type: "string" + queryMaxMemoryPerNode: + nullable: true + type: "string" + resources: + default: + cpu: + max: null + min: null + memory: + limit: null + runtimeLimits: {} + storage: + data: + capacity: null + description: "Resource usage is configured here, this includes CPU usage, memory usage and disk storage usage, if this role needs any." + properties: + cpu: + default: + max: null + min: null + properties: + max: + description: "The maximum amount of CPU cores that can be requested by Pods. Equivalent to the `limit` for Pod resource configuration. 
Cores are specified either as a decimal point number or as milli units. For example:`1.5` will be 1.5 cores, also written as `1500m`." + nullable: true + type: "string" + min: + description: "The minimal amount of CPU cores that Pods need to run. Equivalent to the `request` for Pod resource configuration. Cores are specified either as a decimal point number or as milli units. For example:`1.5` will be 1.5 cores, also written as `1500m`." + nullable: true + type: "string" + type: "object" + memory: + properties: + limit: + description: "The maximum amount of memory that should be available to the Pod. Specified as a byte [Quantity](https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/quantity/), which means these suffixes are supported: E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value: `128974848, 129e6, 129M, 128974848000m, 123Mi`" + nullable: true + type: "string" + runtimeLimits: + description: "Additional options that can be specified." + type: "object" + type: "object" + storage: + properties: + data: + default: + capacity: null + properties: + capacity: + description: "Quantity is a fixed-point representation of a number. It provides convenient marshaling/unmarshaling in JSON and YAML, in addition to String() and AsInt64() accessors.\n\nThe serialization format is:\n\n``` ::= \n\n\t(Note that may be empty, from the \"\" case in .)\n\n ::= 0 | 1 | ... | 9 ::= | ::= | . | . | . ::= \"+\" | \"-\" ::= | ::= | | ::= Ki | Mi | Gi | Ti | Pi | Ei\n\n\t(International System of units; See: http://physics.nist.gov/cuu/Units/binary.html)\n\n ::= m | \"\" | k | M | G | T | P | E\n\n\t(Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.)\n\n ::= \"e\" | \"E\" ```\n\nNo matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. 
Numbers larger or more precise will be capped or rounded up. (E.g.: 0.1m will be rounded up to 1m.) This may be extended in the future if we require larger or smaller quantities.\n\nWhen a Quantity is parsed from a string, it will remember the type of suffix it had, and will use the same type again when it is serialized.\n\nBefore serializing, Quantity will be put in \"canonical form\". This means that Exponent/suffix will be adjusted up or down (with a corresponding increase or decrease in Mantissa) such that:\n\n- No precision is lost - No fractional digits will be emitted - The exponent (or suffix) is as large as possible.\n\nThe sign will be omitted unless the number is negative.\n\nExamples:\n\n- 1.5 will be serialized as \"1500m\" - 1.5Gi will be serialized as \"1536Mi\"\n\nNote that the quantity will NEVER be internally represented by a floating point number. That is the whole point of this exercise.\n\nNon-canonical values will still parse as long as they are well formed, but will be re-emitted in their canonical form. (So always use canonical form, or don't diff.)\n\nThis format is intended to make it difficult to use these numbers without writing some sort of special handling code in the hopes that that will cause implementors to also use a fixed point implementation." + nullable: true + type: "string" + selectors: + description: "A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects." + nullable: true + properties: + matchExpressions: + description: "matchExpressions is a list of label selector requirements. The requirements are ANDed." + items: + description: "A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values." + properties: + key: + description: "key is the label key that the selector applies to." 
+ type: "string" + operator: + description: "operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist." + type: "string" + values: + description: "values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch." + items: + type: "string" + type: "array" + required: + - "key" + - "operator" + type: "object" + type: "array" + matchLabels: + additionalProperties: + type: "string" + description: "matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values array contains only \"value\". The requirements are ANDed." + type: "object" + type: "object" + storageClass: + nullable: true + type: "string" + type: "object" + type: "object" + type: "object" + type: "object" + configOverrides: + additionalProperties: + additionalProperties: + type: "string" + type: "object" + default: {} + description: "The `configOverrides` can be used to configure properties in product config files that are not exposed in the CRD. Read the [config overrides documentation](https://docs.stackable.tech/home/nightly/concepts/overrides#config-overrides) and consult the operator specific usage guide documentation for details on the available config files and settings for the specific product." + type: "object" + envOverrides: + additionalProperties: + type: "string" + default: {} + description: "`envOverrides` configure environment variables to be set in the Pods. It is a map from strings to strings - environment variables and the value to set. 
Read the [environment variable overrides documentation](https://docs.stackable.tech/home/nightly/concepts/overrides#env-overrides) for more information and consult the operator specific usage guide to find out about the product specific environment variables that are available." + type: "object" + podOverrides: + default: {} + description: "In the `podOverrides` property you can define a [PodTemplateSpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#podtemplatespec-v1-core) to override any property that can be set on a Kubernetes Pod. Read the [Pod overrides documentation](https://docs.stackable.tech/home/nightly/concepts/overrides#pod-overrides) for more information." + type: "object" + x-kubernetes-preserve-unknown-fields: true + replicas: + format: "uint16" + minimum: 0.0 + nullable: true + type: "integer" + type: "object" + type: "object" + required: + - "roleGroups" + type: "object" + image: + anyOf: + - required: + - "custom" + - "productVersion" + - required: + - "productVersion" + description: "Specify which image to use, the easiest way is to only configure the `productVersion`. You can also configure a custom image registry to pull from, as well as completely custom images.\n\nConsult the [Product image selection documentation](https://docs.stackable.tech/home/nightly/concepts/product_image_selection) for details." + properties: + custom: + description: "Overwrite the docker image. Specify the full docker image name, e.g. `docker.stackable.tech/stackable/superset:1.4.1-stackable2.1.0`" + type: "string" + productVersion: + description: "Version of the product, e.g. `1.4.1`." + type: "string" + pullPolicy: + default: "Always" + description: "[Pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy) used when pulling the image." 
+ enum: + - "IfNotPresent" + - "Always" + - "Never" + type: "string" + pullSecrets: + description: "[Image pull secrets](https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) to pull images from a private registry." + items: + description: "LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace." + properties: + name: + description: "Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names" + type: "string" + type: "object" + nullable: true + type: "array" + repo: + description: "Name of the docker repo, e.g. `docker.stackable.tech/stackable`" + nullable: true + type: "string" + stackableVersion: + description: "Stackable version of the product, e.g. `23.4`, `23.4.1` or `0.0.0-dev`. If not specified, the operator will use its own version, e.g. `23.4.1`. When using a nightly operator or a PR version, it will use the nightly `0.0.0-dev` image." + nullable: true + type: "string" + type: "object" + workers: + description: "This struct represents a role - e.g. HDFS datanodes or Trino workers. It has a key-value-map containing all the roleGroups that are part of this role. Additionally, there is a `config`, which is configurable at the role *and* roleGroup level. Everything at roleGroup level is merged on top of what is configured on role level. There is also a second form of config, which can only be configured at role level, the `roleConfig`. You can learn more about this in the [Roles and role group concept documentation](https://docs.stackable.tech/home/nightly/concepts/roles-and-role-groups)." 
+ nullable: true + properties: + cliOverrides: + additionalProperties: + type: "string" + default: {} + type: "object" + config: + default: {} + properties: + affinity: + default: + nodeAffinity: null + nodeSelector: null + podAffinity: null + podAntiAffinity: null + description: "These configuration settings control [Pod placement](https://docs.stackable.tech/home/nightly/concepts/operations/pod_placement)." + properties: + nodeAffinity: + description: "Same as the `spec.affinity.nodeAffinity` field on the Pod, see the [Kubernetes docs](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node)" + nullable: true + type: "object" + x-kubernetes-preserve-unknown-fields: true + nodeSelector: + additionalProperties: + type: "string" + description: "Simple key-value pairs forming a nodeSelector, see the [Kubernetes docs](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node)" + nullable: true + type: "object" + podAffinity: + description: "Same as the `spec.affinity.podAffinity` field on the Pod, see the [Kubernetes docs](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node)" + nullable: true + type: "object" + x-kubernetes-preserve-unknown-fields: true + podAntiAffinity: + description: "Same as the `spec.affinity.podAntiAffinity` field on the Pod, see the [Kubernetes docs](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node)" + nullable: true + type: "object" + x-kubernetes-preserve-unknown-fields: true + required: + - "nodeAffinity" + - "podAffinity" + - "podAntiAffinity" + type: "object" + gracefulShutdownTimeout: + description: "Time period Pods have to gracefully shut down, e.g. `30m`, `1h` or `2d`. Consult the operator documentation for details." + nullable: true + type: "string" + logging: + default: + containers: {} + enableVectorAgent: null + description: "Logging configuration, learn more in the [logging concept documentation](https://docs.stackable.tech/home/nightly/concepts/logging)." 
+ properties: + containers: + additionalProperties: + anyOf: + - required: + - "custom" + - {} + description: "Log configuration of the container" + properties: + console: + description: "Configuration for the console appender" + nullable: true + properties: + level: + description: "The log level threshold. Log events with a lower log level are discarded." + enum: + - "TRACE" + - "DEBUG" + - "INFO" + - "WARN" + - "ERROR" + - "FATAL" + - "NONE" + nullable: true + type: "string" + type: "object" + custom: + description: "Custom log configuration provided in a ConfigMap" + properties: + configMap: + description: "ConfigMap containing the log configuration files" + nullable: true + type: "string" + type: "object" + file: + description: "Configuration for the file appender" + nullable: true + properties: + level: + description: "The log level threshold. Log events with a lower log level are discarded." + enum: + - "TRACE" + - "DEBUG" + - "INFO" + - "WARN" + - "ERROR" + - "FATAL" + - "NONE" + nullable: true + type: "string" + type: "object" + loggers: + additionalProperties: + description: "Configuration of a logger" + properties: + level: + description: "The log level threshold. Log events with a lower log level are discarded." + enum: + - "TRACE" + - "DEBUG" + - "INFO" + - "WARN" + - "ERROR" + - "FATAL" + - "NONE" + nullable: true + type: "string" + type: "object" + default: {} + description: "Configuration per logger" + type: "object" + type: "object" + description: "Log configuration per container." + type: "object" + enableVectorAgent: + description: "Whether or not to deploy a container with the Vector log agent." 
+ nullable: true + type: "boolean" + type: "object" + queryMaxMemory: + nullable: true + type: "string" + queryMaxMemoryPerNode: + nullable: true + type: "string" + resources: + default: + cpu: + max: null + min: null + memory: + limit: null + runtimeLimits: {} + storage: + data: + capacity: null + description: "Resource usage is configured here, this includes CPU usage, memory usage and disk storage usage, if this role needs any." + properties: + cpu: + default: + max: null + min: null + properties: + max: + description: "The maximum amount of CPU cores that can be requested by Pods. Equivalent to the `limit` for Pod resource configuration. Cores are specified either as a decimal point number or as milli units. For example:`1.5` will be 1.5 cores, also written as `1500m`." + nullable: true + type: "string" + min: + description: "The minimal amount of CPU cores that Pods need to run. Equivalent to the `request` for Pod resource configuration. Cores are specified either as a decimal point number or as milli units. For example:`1.5` will be 1.5 cores, also written as `1500m`." + nullable: true + type: "string" + type: "object" + memory: + properties: + limit: + description: "The maximum amount of memory that should be available to the Pod. Specified as a byte [Quantity](https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/quantity/), which means these suffixes are supported: E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value: `128974848, 129e6, 129M, 128974848000m, 123Mi`" + nullable: true + type: "string" + runtimeLimits: + description: "Additional options that can be specified." + type: "object" + type: "object" + storage: + properties: + data: + default: + capacity: null + properties: + capacity: + description: "Quantity is a fixed-point representation of a number. 
It provides convenient marshaling/unmarshaling in JSON and YAML, in addition to String() and AsInt64() accessors.\n\nThe serialization format is:\n\n``` ::= \n\n\t(Note that may be empty, from the \"\" case in .)\n\n ::= 0 | 1 | ... | 9 ::= | ::= | . | . | . ::= \"+\" | \"-\" ::= | ::= | | ::= Ki | Mi | Gi | Ti | Pi | Ei\n\n\t(International System of units; See: http://physics.nist.gov/cuu/Units/binary.html)\n\n ::= m | \"\" | k | M | G | T | P | E\n\n\t(Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.)\n\n ::= \"e\" | \"E\" ```\n\nNo matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. Numbers larger or more precise will be capped or rounded up. (E.g.: 0.1m will rounded up to 1m.) This may be extended in the future if we require larger or smaller quantities.\n\nWhen a Quantity is parsed from a string, it will remember the type of suffix it had, and will use the same type again when it is serialized.\n\nBefore serializing, Quantity will be put in \"canonical form\". This means that Exponent/suffix will be adjusted up or down (with a corresponding increase or decrease in Mantissa) such that:\n\n- No precision is lost - No fractional digits will be emitted - The exponent (or suffix) is as large as possible.\n\nThe sign will be omitted unless the number is negative.\n\nExamples:\n\n- 1.5 will be serialized as \"1500m\" - 1.5Gi will be serialized as \"1536Mi\"\n\nNote that the quantity will NEVER be internally represented by a floating point number. That is the whole point of this exercise.\n\nNon-canonical values will still parse as long as they are well formed, but will be re-emitted in their canonical form. 
(So always use canonical form, or don't diff.)\n\nThis format is intended to make it difficult to use these numbers without writing some sort of special handling code in the hopes that that will cause implementors to also use a fixed point implementation." + nullable: true + type: "string" + selectors: + description: "A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects." + nullable: true + properties: + matchExpressions: + description: "matchExpressions is a list of label selector requirements. The requirements are ANDed." + items: + description: "A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values." + properties: + key: + description: "key is the label key that the selector applies to." + type: "string" + operator: + description: "operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist." + type: "string" + values: + description: "values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch." + items: + type: "string" + type: "array" + required: + - "key" + - "operator" + type: "object" + type: "array" + matchLabels: + additionalProperties: + type: "string" + description: "matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values array contains only \"value\". The requirements are ANDed." 
+ type: "object" + type: "object" + storageClass: + nullable: true + type: "string" + type: "object" + type: "object" + type: "object" + type: "object" + configOverrides: + additionalProperties: + additionalProperties: + type: "string" + type: "object" + default: {} + description: "The `configOverrides` can be used to configure properties in product config files that are not exposed in the CRD. Read the [config overrides documentation](https://docs.stackable.tech/home/nightly/concepts/overrides#config-overrides) and consult the operator specific usage guide documentation for details on the available config files and settings for the specific product." + type: "object" + envOverrides: + additionalProperties: + type: "string" + default: {} + description: "`envOverrides` configure environment variables to be set in the Pods. It is a map from strings to strings - environment variables and the value to set. Read the [environment variable overrides documentation](https://docs.stackable.tech/home/nightly/concepts/overrides#env-overrides) for more information and consult the operator specific usage guide to find out about the product specific environment variables that are available." + type: "object" + podOverrides: + default: {} + description: "In the `podOverrides` property you can define a [PodTemplateSpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#podtemplatespec-v1-core) to override any property that can be set on a Kubernetes Pod. Read the [Pod overrides documentation](https://docs.stackable.tech/home/nightly/concepts/overrides#pod-overrides) for more information." + type: "object" + x-kubernetes-preserve-unknown-fields: true + roleConfig: + default: + podDisruptionBudget: + enabled: true + maxUnavailable: null + description: "This is a product-agnostic RoleConfig, which is sufficient for most of the products." 
+ properties: + podDisruptionBudget: + default: + enabled: true + maxUnavailable: null + description: "This struct is used to configure:\n\n1. If PodDisruptionBudgets are created by the operator 2. The allowed number of Pods to be unavailable (`maxUnavailable`)\n\nLearn more in the [allowed Pod disruptions documentation](https://docs.stackable.tech/home/nightly/concepts/operations/pod_disruptions)." + properties: + enabled: + default: true + description: "Whether a PodDisruptionBudget should be written out for this role. Disabling this enables you to specify your own - custom - one. Defaults to true." + type: "boolean" + maxUnavailable: + description: "The number of Pods that are allowed to be down because of voluntary disruptions. If you don't explicitly set this, the operator will use a sane default based upon knowledge about the individual product." + format: "uint16" + minimum: 0.0 + nullable: true + type: "integer" + type: "object" + type: "object" + roleGroups: + additionalProperties: + properties: + cliOverrides: + additionalProperties: + type: "string" + default: {} + type: "object" + config: + default: {} + properties: + affinity: + default: + nodeAffinity: null + nodeSelector: null + podAffinity: null + podAntiAffinity: null + description: "These configuration settings control [Pod placement](https://docs.stackable.tech/home/nightly/concepts/operations/pod_placement)." 
+ properties: + nodeAffinity: + description: "Same as the `spec.affinity.nodeAffinity` field on the Pod, see the [Kubernetes docs](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node)" + nullable: true + type: "object" + x-kubernetes-preserve-unknown-fields: true + nodeSelector: + additionalProperties: + type: "string" + description: "Simple key-value pairs forming a nodeSelector, see the [Kubernetes docs](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node)" + nullable: true + type: "object" + podAffinity: + description: "Same as the `spec.affinity.podAffinity` field on the Pod, see the [Kubernetes docs](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node)" + nullable: true + type: "object" + x-kubernetes-preserve-unknown-fields: true + podAntiAffinity: + description: "Same as the `spec.affinity.podAntiAffinity` field on the Pod, see the [Kubernetes docs](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node)" + nullable: true + type: "object" + x-kubernetes-preserve-unknown-fields: true + required: + - "nodeAffinity" + - "podAffinity" + - "podAntiAffinity" + type: "object" + gracefulShutdownTimeout: + description: "Time period Pods have to gracefully shut down, e.g. `30m`, `1h` or `2d`. Consult the operator documentation for details." + nullable: true + type: "string" + logging: + default: + containers: {} + enableVectorAgent: null + description: "Logging configuration, learn more in the [logging concept documentation](https://docs.stackable.tech/home/nightly/concepts/logging)." + properties: + containers: + additionalProperties: + anyOf: + - required: + - "custom" + - {} + description: "Log configuration of the container" + properties: + console: + description: "Configuration for the console appender" + nullable: true + properties: + level: + description: "The log level threshold. Log events with a lower log level are discarded." 
+ enum: + - "TRACE" + - "DEBUG" + - "INFO" + - "WARN" + - "ERROR" + - "FATAL" + - "NONE" + nullable: true + type: "string" + type: "object" + custom: + description: "Custom log configuration provided in a ConfigMap" + properties: + configMap: + description: "ConfigMap containing the log configuration files" + nullable: true + type: "string" + type: "object" + file: + description: "Configuration for the file appender" + nullable: true + properties: + level: + description: "The log level threshold. Log events with a lower log level are discarded." + enum: + - "TRACE" + - "DEBUG" + - "INFO" + - "WARN" + - "ERROR" + - "FATAL" + - "NONE" + nullable: true + type: "string" + type: "object" + loggers: + additionalProperties: + description: "Configuration of a logger" + properties: + level: + description: "The log level threshold. Log events with a lower log level are discarded." + enum: + - "TRACE" + - "DEBUG" + - "INFO" + - "WARN" + - "ERROR" + - "FATAL" + - "NONE" + nullable: true + type: "string" + type: "object" + default: {} + description: "Configuration per logger" + type: "object" + type: "object" + description: "Log configuration per container." + type: "object" + enableVectorAgent: + description: "Whether or not to deploy a container with the Vector log agent." + nullable: true + type: "boolean" + type: "object" + queryMaxMemory: + nullable: true + type: "string" + queryMaxMemoryPerNode: + nullable: true + type: "string" + resources: + default: + cpu: + max: null + min: null + memory: + limit: null + runtimeLimits: {} + storage: + data: + capacity: null + description: "Resource usage is configured here, this includes CPU usage, memory usage and disk storage usage, if this role needs any." + properties: + cpu: + default: + max: null + min: null + properties: + max: + description: "The maximum amount of CPU cores that can be requested by Pods. Equivalent to the `limit` for Pod resource configuration. 
Cores are specified either as a decimal point number or as milli units. For example:`1.5` will be 1.5 cores, also written as `1500m`." + nullable: true + type: "string" + min: + description: "The minimal amount of CPU cores that Pods need to run. Equivalent to the `request` for Pod resource configuration. Cores are specified either as a decimal point number or as milli units. For example:`1.5` will be 1.5 cores, also written as `1500m`." + nullable: true + type: "string" + type: "object" + memory: + properties: + limit: + description: "The maximum amount of memory that should be available to the Pod. Specified as a byte [Quantity](https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/quantity/), which means these suffixes are supported: E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value: `128974848, 129e6, 129M, 128974848000m, 123Mi`" + nullable: true + type: "string" + runtimeLimits: + description: "Additional options that can be specified." + type: "object" + type: "object" + storage: + properties: + data: + default: + capacity: null + properties: + capacity: + description: "Quantity is a fixed-point representation of a number. It provides convenient marshaling/unmarshaling in JSON and YAML, in addition to String() and AsInt64() accessors.\n\nThe serialization format is:\n\n``` ::= \n\n\t(Note that may be empty, from the \"\" case in .)\n\n ::= 0 | 1 | ... | 9 ::= | ::= | . | . | . ::= \"+\" | \"-\" ::= | ::= | | ::= Ki | Mi | Gi | Ti | Pi | Ei\n\n\t(International System of units; See: http://physics.nist.gov/cuu/Units/binary.html)\n\n ::= m | \"\" | k | M | G | T | P | E\n\n\t(Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.)\n\n ::= \"e\" | \"E\" ```\n\nNo matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. 
Numbers larger or more precise will be capped or rounded up. (E.g.: 0.1m will rounded up to 1m.) This may be extended in the future if we require larger or smaller quantities.\n\nWhen a Quantity is parsed from a string, it will remember the type of suffix it had, and will use the same type again when it is serialized.\n\nBefore serializing, Quantity will be put in \"canonical form\". This means that Exponent/suffix will be adjusted up or down (with a corresponding increase or decrease in Mantissa) such that:\n\n- No precision is lost - No fractional digits will be emitted - The exponent (or suffix) is as large as possible.\n\nThe sign will be omitted unless the number is negative.\n\nExamples:\n\n- 1.5 will be serialized as \"1500m\" - 1.5Gi will be serialized as \"1536Mi\"\n\nNote that the quantity will NEVER be internally represented by a floating point number. That is the whole point of this exercise.\n\nNon-canonical values will still parse as long as they are well formed, but will be re-emitted in their canonical form. (So always use canonical form, or don't diff.)\n\nThis format is intended to make it difficult to use these numbers without writing some sort of special handling code in the hopes that that will cause implementors to also use a fixed point implementation." + nullable: true + type: "string" + selectors: + description: "A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects." + nullable: true + properties: + matchExpressions: + description: "matchExpressions is a list of label selector requirements. The requirements are ANDed." + items: + description: "A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values." + properties: + key: + description: "key is the label key that the selector applies to." 
+ type: "string" + operator: + description: "operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist." + type: "string" + values: + description: "values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch." + items: + type: "string" + type: "array" + required: + - "key" + - "operator" + type: "object" + type: "array" + matchLabels: + additionalProperties: + type: "string" + description: "matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values array contains only \"value\". The requirements are ANDed." + type: "object" + type: "object" + storageClass: + nullable: true + type: "string" + type: "object" + type: "object" + type: "object" + type: "object" + configOverrides: + additionalProperties: + additionalProperties: + type: "string" + type: "object" + default: {} + description: "The `configOverrides` can be used to configure properties in product config files that are not exposed in the CRD. Read the [config overrides documentation](https://docs.stackable.tech/home/nightly/concepts/overrides#config-overrides) and consult the operator specific usage guide documentation for details on the available config files and settings for the specific product." + type: "object" + envOverrides: + additionalProperties: + type: "string" + default: {} + description: "`envOverrides` configure environment variables to be set in the Pods. It is a map from strings to strings - environment variables and the value to set. 
Read the [environment variable overrides documentation](https://docs.stackable.tech/home/nightly/concepts/overrides#env-overrides) for more information and consult the operator specific usage guide to find out about the product specific environment variables that are available." + type: "object" + podOverrides: + default: {} + description: "In the `podOverrides` property you can define a [PodTemplateSpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#podtemplatespec-v1-core) to override any property that can be set on a Kubernetes Pod. Read the [Pod overrides documentation](https://docs.stackable.tech/home/nightly/concepts/overrides#pod-overrides) for more information." + type: "object" + x-kubernetes-preserve-unknown-fields: true + replicas: + format: "uint16" + minimum: 0.0 + nullable: true + type: "integer" + type: "object" + type: "object" + required: + - "roleGroups" + type: "object" + required: + - "clusterConfig" + - "image" + type: "object" + status: + nullable: true + properties: + conditions: + default: [] + items: + properties: + lastTransitionTime: + description: "Last time the condition transitioned from one status to another." + format: "date-time" + nullable: true + type: "string" + lastUpdateTime: + description: "The last time this condition was updated." + format: "date-time" + nullable: true + type: "string" + message: + description: "A human readable message indicating details about the transition." + nullable: true + type: "string" + reason: + description: "The reason for the condition's last transition." + nullable: true + type: "string" + status: + description: "Status of the condition, one of True, False, Unknown." + enum: + - "True" + - "False" + - "Unknown" + type: "string" + type: + description: "Type of deployment condition." 
+ enum: + - "Available" + - "Degraded" + - "Progressing" + - "ReconciliationPaused" + - "Stopped" + type: "string" + required: + - "status" + - "type" + type: "object" + type: "array" + type: "object" + required: + - "spec" + title: "TrinoCluster" + type: "object" + served: true + storage: true + subresources: + status: {} diff --git a/kube-custom-resources-rs/Cargo.toml b/kube-custom-resources-rs/Cargo.toml index 6467a0ec8..62789352f 100644 --- a/kube-custom-resources-rs/Cargo.toml +++ b/kube-custom-resources-rs/Cargo.toml @@ -403,6 +403,7 @@ topolvm_cybozu_com = [] traefik_io = [] training_kubedl_io = [] trident_netapp_io = [] +trino_stackable_tech = [] trust_cert_manager_io = [] upgrade_cattle_io = [] upgrade_managed_openshift_io = [] diff --git a/kube-custom-resources-rs/src/lib.rs b/kube-custom-resources-rs/src/lib.rs index 4d44f055c..451d533d5 100644 --- a/kube-custom-resources-rs/src/lib.rs +++ b/kube-custom-resources-rs/src/lib.rs @@ -3199,6 +3199,11 @@ apiVersion `training.kubedl.io/v1alpha1`: apiVersion `trident.netapp.io/v1`: - `TridentOrchestrator` +## trino_stackable_tech + +apiVersion `trino.stackable.tech/v1alpha1`: +- `TrinoCatalog` + ## trust_cert_manager_io apiVersion `trust.cert-manager.io/v1alpha1`: @@ -4016,6 +4021,8 @@ pub mod traefik_io; pub mod training_kubedl_io; #[cfg(feature = "trident_netapp_io")] pub mod trident_netapp_io; +#[cfg(feature = "trino_stackable_tech")] +pub mod trino_stackable_tech; #[cfg(feature = "trust_cert_manager_io")] pub mod trust_cert_manager_io; #[cfg(feature = "upgrade_cattle_io")] diff --git a/kube-custom-resources-rs/src/trino_stackable_tech/mod.rs b/kube-custom-resources-rs/src/trino_stackable_tech/mod.rs new file mode 100644 index 000000000..32a5a9d4f --- /dev/null +++ b/kube-custom-resources-rs/src/trino_stackable_tech/mod.rs @@ -0,0 +1 @@ +pub mod v1alpha1; diff --git a/kube-custom-resources-rs/src/trino_stackable_tech/v1alpha1/mod.rs b/kube-custom-resources-rs/src/trino_stackable_tech/v1alpha1/mod.rs new 
file mode 100644 index 000000000..27668dee6 --- /dev/null +++ b/kube-custom-resources-rs/src/trino_stackable_tech/v1alpha1/mod.rs @@ -0,0 +1 @@ +pub mod trinocatalogs; diff --git a/kube-custom-resources-rs/src/trino_stackable_tech/v1alpha1/trinocatalogs.rs b/kube-custom-resources-rs/src/trino_stackable_tech/v1alpha1/trinocatalogs.rs new file mode 100644 index 000000000..6a481b43b --- /dev/null +++ b/kube-custom-resources-rs/src/trino_stackable_tech/v1alpha1/trinocatalogs.rs @@ -0,0 +1,563 @@ +// WARNING: generated by kopium - manual changes will be overwritten +// kopium command: kopium --docs --filename=./crd-catalog/stackabletech/trino-operator/trino.stackable.tech/v1alpha1/trinocatalogs.yaml --derive=Default --derive=PartialEq --smart-derive-elision +// kopium version: 0.21.1 + +#[allow(unused_imports)] +mod prelude { + pub use kube::CustomResource; + pub use serde::{Serialize, Deserialize}; + pub use std::collections::BTreeMap; +} +use self::prelude::*; + +/// The TrinoCatalog resource can be used to define catalogs in Kubernetes objects. Read more about it in the [Trino operator concept docs](https://docs.stackable.tech/home/nightly/trino/concepts) and the [Trino operator usage guide](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/). The documentation also contains a list of all the supported backends. +#[derive(CustomResource, Serialize, Deserialize, Clone, Debug, Default, PartialEq)] +#[kube(group = "trino.stackable.tech", version = "v1alpha1", kind = "TrinoCatalog", plural = "trinocatalogs")] +#[kube(namespaced)] +#[kube(schema = "disabled")] +#[kube(derive="Default")] +#[kube(derive="PartialEq")] +pub struct TrinoCatalogSpec { + /// The `configOverrides` allow overriding arbitrary Trino settings. For example, for Hive you could add `hive.metastore.username: trino`. 
+ #[serde(default, skip_serializing_if = "Option::is_none", rename = "configOverrides")] + pub config_overrides: Option<BTreeMap<String, String>>, + /// The `connector` defines which connector is used. + pub connector: TrinoCatalogConnector, +} + +/// The `connector` defines which connector is used. +#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)] +pub struct TrinoCatalogConnector { + /// A [Black Hole](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/black-hole) connector. + #[serde(default, skip_serializing_if = "Option::is_none", rename = "blackHole")] + pub black_hole: Option<TrinoCatalogConnectorBlackHole>, + /// A [Delta Lake](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/delta-lake) connector. + #[serde(default, skip_serializing_if = "Option::is_none", rename = "deltaLake")] + pub delta_lake: Option<TrinoCatalogConnectorDeltaLake>, + /// A [generic](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/generic) connector. + #[serde(default, skip_serializing_if = "Option::is_none")] + pub generic: Option<TrinoCatalogConnectorGeneric>, + /// A [Google sheets](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/google-sheets) connector. + #[serde(default, skip_serializing_if = "Option::is_none", rename = "googleSheet")] + pub google_sheet: Option<TrinoCatalogConnectorGoogleSheet>, + /// An [Apache Hive](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/hive) connector. + #[serde(default, skip_serializing_if = "Option::is_none")] + pub hive: Option<TrinoCatalogConnectorHive>, + /// An [Apache Iceberg](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/iceberg) connector. + #[serde(default, skip_serializing_if = "Option::is_none")] + pub iceberg: Option<TrinoCatalogConnectorIceberg>, + /// A [TPC-DS](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/tpcds) connector. + #[serde(default, skip_serializing_if = "Option::is_none")] + pub tpcds: Option<TrinoCatalogConnectorTpcds>, + /// A [TPC-H](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/tpch) connector. 
+ #[serde(default, skip_serializing_if = "Option::is_none")] + pub tpch: Option<TrinoCatalogConnectorTpch>, +} + +/// A [Black Hole](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/black-hole) connector. +#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)] +pub struct TrinoCatalogConnectorBlackHole { +} + +/// A [Delta Lake](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/delta-lake) connector. +#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)] +pub struct TrinoCatalogConnectorDeltaLake { + /// Connection to an HDFS cluster. Please make sure that the underlying Hive metastore also has access to the HDFS. + #[serde(default, skip_serializing_if = "Option::is_none")] + pub hdfs: Option<TrinoCatalogConnectorDeltaLakeHdfs>, + /// Mandatory connection to a Hive Metastore, which will be used as storage for metadata. + pub metastore: TrinoCatalogConnectorDeltaLakeMetastore, + /// Connection to an S3 store. Please make sure that the underlying Hive metastore also has access to the S3 store. Learn more about S3 configuration in the [S3 concept docs](https://docs.stackable.tech/home/nightly/concepts/s3). + #[serde(default, skip_serializing_if = "Option::is_none")] + pub s3: Option<TrinoCatalogConnectorDeltaLakeS3>, +} + +/// Connection to an HDFS cluster. Please make sure that the underlying Hive metastore also has access to the HDFS. +#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)] +pub struct TrinoCatalogConnectorDeltaLakeHdfs { + /// Name of the [discovery ConfigMap](https://docs.stackable.tech/home/nightly/concepts/service_discovery) providing information about the HDFS cluster. + #[serde(rename = "configMap")] + pub config_map: String, +} + +/// Mandatory connection to a Hive Metastore, which will be used as storage for metadata. 
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)] +pub struct TrinoCatalogConnectorDeltaLakeMetastore { + /// Name of the [discovery ConfigMap](https://docs.stackable.tech/home/nightly/concepts/service_discovery) providing information about the Hive metastore. + #[serde(rename = "configMap")] + pub config_map: String, +} + +/// Connection to an S3 store. Please make sure that the underlying Hive metastore also has access to the S3 store. Learn more about S3 configuration in the [S3 concept docs](https://docs.stackable.tech/home/nightly/concepts/s3). +#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)] +pub struct TrinoCatalogConnectorDeltaLakeS3 { + /// Inline definition of an S3 connection. + #[serde(default, skip_serializing_if = "Option::is_none")] + pub inline: Option<TrinoCatalogConnectorDeltaLakeS3Inline>, + /// A reference to an S3Connection resource. + #[serde(default, skip_serializing_if = "Option::is_none")] + pub reference: Option<String>, +} + +/// Inline definition of an S3 connection. +#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)] +pub struct TrinoCatalogConnectorDeltaLakeS3Inline { + /// Which access style to use. Defaults to virtual hosted-style, as most of the data products out there do. Have a look at the [AWS documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html). + #[serde(default, skip_serializing_if = "Option::is_none", rename = "accessStyle")] + pub access_style: Option<TrinoCatalogConnectorDeltaLakeS3InlineAccessStyle>, + /// If the S3 uses authentication, you have to specify your S3 credentials. In most cases a [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass) providing `accessKey` and `secretKey` is sufficient. + #[serde(default, skip_serializing_if = "Option::is_none")] + pub credentials: Option<TrinoCatalogConnectorDeltaLakeS3InlineCredentials>, + /// Hostname of the S3 server without any protocol or port. For example: `west1.my-cloud.com`. + #[serde(default, skip_serializing_if = "Option::is_none")] + pub host: Option<String>, + /// Port the S3 server listens on. 
If not specified the product will determine the port to use. + #[serde(default, skip_serializing_if = "Option::is_none")] + pub port: Option, + /// If you want to use TLS when talking to S3 you can enable TLS encrypted communication with this setting. + #[serde(default, skip_serializing_if = "Option::is_none")] + pub tls: Option, +} + +/// Inline definition of an S3 connection. +#[derive(Serialize, Deserialize, Clone, Debug, PartialEq)] +pub enum TrinoCatalogConnectorDeltaLakeS3InlineAccessStyle { + Path, + VirtualHosted, +} + +/// If the S3 uses authentication you have to specify you S3 credentials. In the most cases a [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass) providing `accessKey` and `secretKey` is sufficient. +#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)] +pub struct TrinoCatalogConnectorDeltaLakeS3InlineCredentials { + /// [Scope](https://docs.stackable.tech/home/nightly/secret-operator/scope) of the [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass). + #[serde(default, skip_serializing_if = "Option::is_none")] + pub scope: Option, + /// [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass) containing the LDAP bind credentials. + #[serde(rename = "secretClass")] + pub secret_class: String, +} + +/// [Scope](https://docs.stackable.tech/home/nightly/secret-operator/scope) of the [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass). +#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)] +pub struct TrinoCatalogConnectorDeltaLakeS3InlineCredentialsScope { + /// The node scope is resolved to the name of the Kubernetes Node object that the Pod is running on. This will typically be the DNS name of the node. + #[serde(default, skip_serializing_if = "Option::is_none")] + pub node: Option, + /// The pod scope is resolved to the name of the Kubernetes Pod. 
This allows the secret to differentiate between StatefulSet replicas. + #[serde(default, skip_serializing_if = "Option::is_none")] + pub pod: Option, + /// The service scope allows Pod objects to specify custom scopes. This should typically correspond to Service objects that the Pod participates in. + #[serde(default, skip_serializing_if = "Option::is_none")] + pub services: Option>, +} + +/// If you want to use TLS when talking to S3 you can enable TLS encrypted communication with this setting. +#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)] +pub struct TrinoCatalogConnectorDeltaLakeS3InlineTls { + /// The verification method used to verify the certificates of the server and/or the client. + pub verification: TrinoCatalogConnectorDeltaLakeS3InlineTlsVerification, +} + +/// The verification method used to verify the certificates of the server and/or the client. +#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)] +pub struct TrinoCatalogConnectorDeltaLakeS3InlineTlsVerification { + /// Use TLS but don't verify certificates. + #[serde(default, skip_serializing_if = "Option::is_none")] + pub none: Option, + /// Use TLS and a CA certificate to verify the server. + #[serde(default, skip_serializing_if = "Option::is_none")] + pub server: Option, +} + +/// Use TLS but don't verify certificates. +#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)] +pub struct TrinoCatalogConnectorDeltaLakeS3InlineTlsVerificationNone { +} + +/// Use TLS and a CA certificate to verify the server. +#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)] +pub struct TrinoCatalogConnectorDeltaLakeS3InlineTlsVerificationServer { + /// CA cert to verify the server. + #[serde(rename = "caCert")] + pub ca_cert: TrinoCatalogConnectorDeltaLakeS3InlineTlsVerificationServerCaCert, +} + +/// CA cert to verify the server. 
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorDeltaLakeS3InlineTlsVerificationServerCaCert {
+    /// Name of the [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass) which will provide the CA certificate. Note that a SecretClass does not need to have a key but can also work with just a CA certificate, so if you were provided with a CA cert but don't have access to the key you can still use this method.
+    #[serde(default, skip_serializing_if = "Option::is_none", rename = "secretClass")]
+    pub secret_class: Option<String>,
+    /// Use TLS and the CA certificates trusted by the common web browsers to verify the server. This can be useful when you e.g. use public AWS S3 or other publicly available services.
+    #[serde(default, skip_serializing_if = "Option::is_none", rename = "webPki")]
+    pub web_pki: Option<TrinoCatalogConnectorDeltaLakeS3InlineTlsVerificationServerCaCertWebPki>,
+}
+
+/// Use TLS and the CA certificates trusted by the common web browsers to verify the server. This can be useful when you e.g. use public AWS S3 or other publicly available services.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorDeltaLakeS3InlineTlsVerificationServerCaCertWebPki {
+}
+
+/// A [generic](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/generic) connector.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorGeneric {
+    /// Name of the Trino connector. Will be passed to `connector.name`.
+    #[serde(rename = "connectorName")]
+    pub connector_name: String,
+    /// A map of properties to put in the connector configuration file. They can be specified either as a raw value or be read from a Secret or ConfigMap.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub properties: Option<BTreeMap<String, TrinoCatalogConnectorGenericProperties>>,
+}
+
+/// A map of properties to put in the connector configuration file. They can be specified either as a raw value or be read from a Secret or ConfigMap.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorGenericProperties {
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub value: Option<String>,
+    /// Selects a key from a ConfigMap.
+    #[serde(default, skip_serializing_if = "Option::is_none", rename = "valueFromConfigMap")]
+    pub value_from_config_map: Option<TrinoCatalogConnectorGenericPropertiesValueFromConfigMap>,
+    /// SecretKeySelector selects a key of a Secret.
+    #[serde(default, skip_serializing_if = "Option::is_none", rename = "valueFromSecret")]
+    pub value_from_secret: Option<TrinoCatalogConnectorGenericPropertiesValueFromSecret>,
+}
+
+/// Selects a key from a ConfigMap.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorGenericPropertiesValueFromConfigMap {
+    /// The key to select.
+    pub key: String,
+    /// Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub name: Option<String>,
+    /// Specify whether the ConfigMap or its key must be defined
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub optional: Option<bool>,
+}
+
+/// SecretKeySelector selects a key of a Secret.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorGenericPropertiesValueFromSecret {
+    /// The key of the secret to select from. Must be a valid secret key.
+    pub key: String,
+    /// Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub name: Option<String>,
+    /// Specify whether the Secret or its key must be defined
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub optional: Option<bool>,
+}
+
+/// A [Google Sheets](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/google-sheets) connector.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorGoogleSheet {
+    /// Cache the contents of sheets. This is used to reduce Google Sheets API usage and latency.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub cache: Option<TrinoCatalogConnectorGoogleSheetCache>,
+    /// The Secret containing the Google API JSON key file. The key used from the Secret is `credentials`.
+    #[serde(rename = "credentialsSecret")]
+    pub credentials_secret: String,
+    /// Sheet ID of the spreadsheet that contains the table mapping.
+    #[serde(rename = "metadataSheetId")]
+    pub metadata_sheet_id: String,
+}
+
+/// Cache the contents of sheets. This is used to reduce Google Sheets API usage and latency.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorGoogleSheetCache {
+    /// How long to cache spreadsheet data or metadata, defaults to `5m`.
+    #[serde(default, skip_serializing_if = "Option::is_none", rename = "sheetsDataExpireAfterWrite")]
+    pub sheets_data_expire_after_write: Option<String>,
+    /// Maximum number of spreadsheets to cache, defaults to 1000.
+    #[serde(default, skip_serializing_if = "Option::is_none", rename = "sheetsDataMaxCacheSize")]
+    pub sheets_data_max_cache_size: Option<i64>,
+}
+
+/// An [Apache Hive](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/hive) connector.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorHive {
+    /// Connection to an HDFS cluster. Please make sure that the underlying Hive metastore also has access to the HDFS.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub hdfs: Option<TrinoCatalogConnectorHiveHdfs>,
+    /// Mandatory connection to a Hive Metastore, which will be used as a storage for metadata.
+    pub metastore: TrinoCatalogConnectorHiveMetastore,
+    /// Connection to an S3 store. Please make sure that the underlying Hive metastore also has access to the S3 store. Learn more about S3 configuration in the [S3 concept docs](https://docs.stackable.tech/home/nightly/concepts/s3).
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub s3: Option<TrinoCatalogConnectorHiveS3>,
+}
+
+/// Connection to an HDFS cluster. Please make sure that the underlying Hive metastore also has access to the HDFS.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorHiveHdfs {
+    /// Name of the [discovery ConfigMap](https://docs.stackable.tech/home/nightly/concepts/service_discovery) providing information about the HDFS cluster.
+    #[serde(rename = "configMap")]
+    pub config_map: String,
+}
+
+/// Mandatory connection to a Hive Metastore, which will be used as a storage for metadata.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorHiveMetastore {
+    /// Name of the [discovery ConfigMap](https://docs.stackable.tech/home/nightly/concepts/service_discovery) providing information about the Hive metastore.
+    #[serde(rename = "configMap")]
+    pub config_map: String,
+}
+
+/// Connection to an S3 store. Please make sure that the underlying Hive metastore also has access to the S3 store. Learn more about S3 configuration in the [S3 concept docs](https://docs.stackable.tech/home/nightly/concepts/s3).
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorHiveS3 {
+    /// Inline definition of an S3 connection.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub inline: Option<TrinoCatalogConnectorHiveS3Inline>,
+    /// A reference to an S3Connection resource.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub reference: Option<String>,
+}
+
+/// Inline definition of an S3 connection.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorHiveS3Inline {
+    /// Which access style to use. Defaults to virtual hosted-style, as used by most data products. Have a look at the [AWS documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html).
+    #[serde(default, skip_serializing_if = "Option::is_none", rename = "accessStyle")]
+    pub access_style: Option<TrinoCatalogConnectorHiveS3InlineAccessStyle>,
+    /// If the S3 store uses authentication you have to specify your S3 credentials. In most cases a [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass) providing `accessKey` and `secretKey` is sufficient.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub credentials: Option<TrinoCatalogConnectorHiveS3InlineCredentials>,
+    /// Hostname of the S3 server without any protocol or port. For example: `west1.my-cloud.com`.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub host: Option<String>,
+    /// Port the S3 server listens on. If not specified the product will determine the port to use.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub port: Option<u16>,
+    /// If you want to use TLS when talking to S3 you can enable TLS encrypted communication with this setting.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub tls: Option<TrinoCatalogConnectorHiveS3InlineTls>,
+}
+
+/// Inline definition of an S3 connection.
+#[derive(Serialize, Deserialize, Clone, Debug, PartialEq)]
+pub enum TrinoCatalogConnectorHiveS3InlineAccessStyle {
+    Path,
+    VirtualHosted,
+}
+
+/// If the S3 store uses authentication you have to specify your S3 credentials. In most cases a [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass) providing `accessKey` and `secretKey` is sufficient.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorHiveS3InlineCredentials {
+    /// [Scope](https://docs.stackable.tech/home/nightly/secret-operator/scope) of the [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass).
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub scope: Option<TrinoCatalogConnectorHiveS3InlineCredentialsScope>,
+    /// [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass) containing the S3 credentials.
+    #[serde(rename = "secretClass")]
+    pub secret_class: String,
+}
+
+/// [Scope](https://docs.stackable.tech/home/nightly/secret-operator/scope) of the [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass).
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorHiveS3InlineCredentialsScope {
+    /// The node scope is resolved to the name of the Kubernetes Node object that the Pod is running on. This will typically be the DNS name of the node.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub node: Option<bool>,
+    /// The pod scope is resolved to the name of the Kubernetes Pod. This allows the secret to differentiate between StatefulSet replicas.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub pod: Option<bool>,
+    /// The service scope allows Pod objects to specify custom scopes. This should typically correspond to Service objects that the Pod participates in.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub services: Option<Vec<String>>,
+}
+
+/// If you want to use TLS when talking to S3 you can enable TLS encrypted communication with this setting.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorHiveS3InlineTls {
+    /// The verification method used to verify the certificates of the server and/or the client.
+    pub verification: TrinoCatalogConnectorHiveS3InlineTlsVerification,
+}
+
+/// The verification method used to verify the certificates of the server and/or the client.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorHiveS3InlineTlsVerification {
+    /// Use TLS but don't verify certificates.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub none: Option<TrinoCatalogConnectorHiveS3InlineTlsVerificationNone>,
+    /// Use TLS and a CA certificate to verify the server.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub server: Option<TrinoCatalogConnectorHiveS3InlineTlsVerificationServer>,
+}
+
+/// Use TLS but don't verify certificates.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorHiveS3InlineTlsVerificationNone {
+}
+
+/// Use TLS and a CA certificate to verify the server.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorHiveS3InlineTlsVerificationServer {
+    /// CA cert to verify the server.
+    #[serde(rename = "caCert")]
+    pub ca_cert: TrinoCatalogConnectorHiveS3InlineTlsVerificationServerCaCert,
+}
+
+/// CA cert to verify the server.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorHiveS3InlineTlsVerificationServerCaCert {
+    /// Name of the [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass) which will provide the CA certificate. Note that a SecretClass does not need to have a key but can also work with just a CA certificate, so if you were provided with a CA cert but don't have access to the key you can still use this method.
+    #[serde(default, skip_serializing_if = "Option::is_none", rename = "secretClass")]
+    pub secret_class: Option<String>,
+    /// Use TLS and the CA certificates trusted by the common web browsers to verify the server. This can be useful when you e.g. use public AWS S3 or other publicly available services.
+    #[serde(default, skip_serializing_if = "Option::is_none", rename = "webPki")]
+    pub web_pki: Option<TrinoCatalogConnectorHiveS3InlineTlsVerificationServerCaCertWebPki>,
+}
+
+/// Use TLS and the CA certificates trusted by the common web browsers to verify the server. This can be useful when you e.g. use public AWS S3 or other publicly available services.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorHiveS3InlineTlsVerificationServerCaCertWebPki {
+}
+
+/// An [Apache Iceberg](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/iceberg) connector.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorIceberg {
+    /// Connection to an HDFS cluster. Please make sure that the underlying Hive metastore also has access to the HDFS.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub hdfs: Option<TrinoCatalogConnectorIcebergHdfs>,
+    /// Mandatory connection to a Hive Metastore, which will be used as a storage for metadata.
+    pub metastore: TrinoCatalogConnectorIcebergMetastore,
+    /// Connection to an S3 store. Please make sure that the underlying Hive metastore also has access to the S3 store. Learn more about S3 configuration in the [S3 concept docs](https://docs.stackable.tech/home/nightly/concepts/s3).
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub s3: Option<TrinoCatalogConnectorIcebergS3>,
+}
+
+/// Connection to an HDFS cluster. Please make sure that the underlying Hive metastore also has access to the HDFS.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorIcebergHdfs {
+    /// Name of the [discovery ConfigMap](https://docs.stackable.tech/home/nightly/concepts/service_discovery) providing information about the HDFS cluster.
+    #[serde(rename = "configMap")]
+    pub config_map: String,
+}
+
+/// Mandatory connection to a Hive Metastore, which will be used as a storage for metadata.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorIcebergMetastore {
+    /// Name of the [discovery ConfigMap](https://docs.stackable.tech/home/nightly/concepts/service_discovery) providing information about the Hive metastore.
+    #[serde(rename = "configMap")]
+    pub config_map: String,
+}
+
+/// Connection to an S3 store. Please make sure that the underlying Hive metastore also has access to the S3 store. Learn more about S3 configuration in the [S3 concept docs](https://docs.stackable.tech/home/nightly/concepts/s3).
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorIcebergS3 {
+    /// Inline definition of an S3 connection.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub inline: Option<TrinoCatalogConnectorIcebergS3Inline>,
+    /// A reference to an S3Connection resource.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub reference: Option<String>,
+}
+
+/// Inline definition of an S3 connection.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorIcebergS3Inline {
+    /// Which access style to use. Defaults to virtual hosted-style, as used by most data products. Have a look at the [AWS documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html).
+    #[serde(default, skip_serializing_if = "Option::is_none", rename = "accessStyle")]
+    pub access_style: Option<TrinoCatalogConnectorIcebergS3InlineAccessStyle>,
+    /// If the S3 store uses authentication you have to specify your S3 credentials. In most cases a [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass) providing `accessKey` and `secretKey` is sufficient.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub credentials: Option<TrinoCatalogConnectorIcebergS3InlineCredentials>,
+    /// Hostname of the S3 server without any protocol or port. For example: `west1.my-cloud.com`.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub host: Option<String>,
+    /// Port the S3 server listens on. If not specified the product will determine the port to use.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub port: Option<u16>,
+    /// If you want to use TLS when talking to S3 you can enable TLS encrypted communication with this setting.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub tls: Option<TrinoCatalogConnectorIcebergS3InlineTls>,
+}
+
+/// Inline definition of an S3 connection.
+#[derive(Serialize, Deserialize, Clone, Debug, PartialEq)]
+pub enum TrinoCatalogConnectorIcebergS3InlineAccessStyle {
+    Path,
+    VirtualHosted,
+}
+
+/// If the S3 store uses authentication you have to specify your S3 credentials. In most cases a [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass) providing `accessKey` and `secretKey` is sufficient.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorIcebergS3InlineCredentials {
+    /// [Scope](https://docs.stackable.tech/home/nightly/secret-operator/scope) of the [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass).
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub scope: Option<TrinoCatalogConnectorIcebergS3InlineCredentialsScope>,
+    /// [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass) containing the S3 credentials.
+    #[serde(rename = "secretClass")]
+    pub secret_class: String,
+}
+
+/// [Scope](https://docs.stackable.tech/home/nightly/secret-operator/scope) of the [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass).
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorIcebergS3InlineCredentialsScope {
+    /// The node scope is resolved to the name of the Kubernetes Node object that the Pod is running on. This will typically be the DNS name of the node.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub node: Option<bool>,
+    /// The pod scope is resolved to the name of the Kubernetes Pod. This allows the secret to differentiate between StatefulSet replicas.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub pod: Option<bool>,
+    /// The service scope allows Pod objects to specify custom scopes. This should typically correspond to Service objects that the Pod participates in.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub services: Option<Vec<String>>,
+}
+
+/// If you want to use TLS when talking to S3 you can enable TLS encrypted communication with this setting.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorIcebergS3InlineTls {
+    /// The verification method used to verify the certificates of the server and/or the client.
+    pub verification: TrinoCatalogConnectorIcebergS3InlineTlsVerification,
+}
+
+/// The verification method used to verify the certificates of the server and/or the client.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorIcebergS3InlineTlsVerification {
+    /// Use TLS but don't verify certificates.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub none: Option<TrinoCatalogConnectorIcebergS3InlineTlsVerificationNone>,
+    /// Use TLS and a CA certificate to verify the server.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub server: Option<TrinoCatalogConnectorIcebergS3InlineTlsVerificationServer>,
+}
+
+/// Use TLS but don't verify certificates.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorIcebergS3InlineTlsVerificationNone {
+}
+
+/// Use TLS and a CA certificate to verify the server.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorIcebergS3InlineTlsVerificationServer {
+    /// CA cert to verify the server.
+    #[serde(rename = "caCert")]
+    pub ca_cert: TrinoCatalogConnectorIcebergS3InlineTlsVerificationServerCaCert,
+}
+
+/// CA cert to verify the server.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorIcebergS3InlineTlsVerificationServerCaCert {
+    /// Name of the [SecretClass](https://docs.stackable.tech/home/nightly/secret-operator/secretclass) which will provide the CA certificate. Note that a SecretClass does not need to have a key but can also work with just a CA certificate, so if you were provided with a CA cert but don't have access to the key you can still use this method.
+    #[serde(default, skip_serializing_if = "Option::is_none", rename = "secretClass")]
+    pub secret_class: Option<String>,
+    /// Use TLS and the CA certificates trusted by the common web browsers to verify the server. This can be useful when you e.g. use public AWS S3 or other publicly available services.
+    #[serde(default, skip_serializing_if = "Option::is_none", rename = "webPki")]
+    pub web_pki: Option<TrinoCatalogConnectorIcebergS3InlineTlsVerificationServerCaCertWebPki>,
+}
+
+/// Use TLS and the CA certificates trusted by the common web browsers to verify the server. This can be useful when you e.g. use public AWS S3 or other publicly available services.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorIcebergS3InlineTlsVerificationServerCaCertWebPki {
+}
+
+/// A [TPC-DS](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/tpcds) connector.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorTpcds {
+}
+
+/// A [TPC-H](https://docs.stackable.tech/home/nightly/trino/usage-guide/catalogs/tpch) connector.
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
+pub struct TrinoCatalogConnectorTpch {
+}
+
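Taken together, these structs deserialize the `spec.connector` section of a `TrinoCatalog` custom resource. As a rough sketch of how the serde renames above (`configMap`, `accessStyle`, `secretClass`, `caCert`) surface in a manifest, a Delta Lake catalog with an inline S3 connection might look like the following; the resource name and the ConfigMap/SecretClass names (`simple-hive`, `minio-credentials`, etc.) are made-up placeholders:

```yaml
# Hypothetical TrinoCatalog manifest; only the field names are taken
# from the serde attributes in the generated structs above.
apiVersion: trino.stackable.tech/v1alpha1
kind: TrinoCatalog
metadata:
  name: delta                          # placeholder catalog name
spec:
  connector:
    deltaLake:
      metastore:
        configMap: simple-hive        # discovery ConfigMap of the Hive metastore
      s3:
        inline:
          host: minio.example.svc     # hostname without protocol or port
          port: 9000
          accessStyle: Path           # enum variant: Path or VirtualHosted
          credentials:
            secretClass: minio-credentials   # SecretClass with accessKey/secretKey
```

An optional field that is omitted (e.g. `tls`) deserializes to `None` thanks to the `#[serde(default)]` attributes, and `skip_serializing_if = "Option::is_none"` keeps it out of the serialized output on the way back.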