From db675f6a6fd1cad0a33221694209ca5f74fb32fe Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Andreas=20Lindh=C3=A9?=
Date: Thu, 26 Oct 2023 10:36:08 +0200
Subject: [PATCH 01/65] Clarify "Linux dependencies" for vSphere

---
 .../vsphere/create-a-vm-template.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md b/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md
index c8a6da86cf2f..9b48cde09665 100644
--- a/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md
+++ b/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md
@@ -24,6 +24,8 @@ If you have any specific firewall rules or configuration, you will need to add t
 ## Linux Dependencies

 The packages that need to be installed on the template are listed below. These will have slightly different names based on distribution; some distributions ship these by default, for example.
+These dependencies are solely what is required for Rancher's cluster provisioner to work.
+Additional dependencies required by Kubernetes will be installed automatically by the cluster provisioner.

 * curl
 * wget

From 01c0d1503ce42441a785c83fb905127526c17e24 Mon Sep 17 00:00:00 2001
From: pdellamore
Date: Thu, 26 Oct 2023 20:05:22 -0300
Subject: [PATCH 02/65] Add note regarding rancher pentest reports public
 availability (#961)

* Add note regarding rancher pentest reports public availability

This PR will add a note regarding third-party penetration test reports public disclosure.

* Update docs/pages-for-subheaders/rancher-security.md

* versioning for 2.7, 2.8

* added back in webhook material at end of 2.8 page

* corrected broken link

---------

Co-authored-by: Pietro Dell'Amore
Co-authored-by: Marty Hernandez Avedon
---
 docs/pages-for-subheaders/rancher-security.md | 4 +++-
 .../version-2.7/pages-for-subheaders/rancher-security.md | 4 +++-
 .../version-2.8/pages-for-subheaders/rancher-security.md | 6 ++++--
 3 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/docs/pages-for-subheaders/rancher-security.md b/docs/pages-for-subheaders/rancher-security.md
index b5733b69cc8b..67c496fe24dc 100644
--- a/docs/pages-for-subheaders/rancher-security.md
+++ b/docs/pages-for-subheaders/rancher-security.md
@@ -73,13 +73,15 @@ Each version of Rancher's self-assessment guide corresponds to specific versions
 ### Third-party Penetration Test Reports

-Rancher periodically hires third parties to perform security audits and penetration tests of the Rancher 2.x software stack. The environments under test follow the Rancher provided hardening guides at the time of the testing. Previous penetration test reports are available below.
+Rancher periodically hires third parties to perform security audits and penetration tests of the Rancher software stack. The environments under test follow the Rancher provided hardening guides at the time of the testing. Previous penetration test reports are available below.
Results: - [Cure53 Pen Test - July 2019](https://releases.rancher.com/documents/security/pen-tests/2019/RAN-01-cure53-report.final.pdf) - [Untamed Theory Pen Test - March 2019](https://releases.rancher.com/documents/security/pen-tests/2019/UntamedTheory-Rancher_SecurityAssessment-20190712_v5.pdf) +Please note that new reports are no longer shared or made publicly available. + ### Rancher Security Advisories and CVEs Rancher is committed to informing the community of security issues in our products. For the list of CVEs (Common Vulnerabilities and Exposures) for issues we have resolved, refer to [this page.](../reference-guides/rancher-security/security-advisories-and-cves.md) diff --git a/versioned_docs/version-2.7/pages-for-subheaders/rancher-security.md b/versioned_docs/version-2.7/pages-for-subheaders/rancher-security.md index b5733b69cc8b..67c496fe24dc 100644 --- a/versioned_docs/version-2.7/pages-for-subheaders/rancher-security.md +++ b/versioned_docs/version-2.7/pages-for-subheaders/rancher-security.md @@ -73,13 +73,15 @@ Each version of Rancher's self-assessment guide corresponds to specific versions ### Third-party Penetration Test Reports -Rancher periodically hires third parties to perform security audits and penetration tests of the Rancher 2.x software stack. The environments under test follow the Rancher provided hardening guides at the time of the testing. Previous penetration test reports are available below. +Rancher periodically hires third parties to perform security audits and penetration tests of the Rancher software stack. The environments under test follow the Rancher provided hardening guides at the time of the testing. Previous penetration test reports are available below. Results: - [Cure53 Pen Test - July 2019](https://releases.rancher.com/documents/security/pen-tests/2019/RAN-01-cure53-report.final.pdf) - [Untamed Theory Pen Test - March 2019](https://releases.rancher.com/documents/security/pen-tests/2019/UntamedTheory-Rancher_SecurityAssessment-20190712_v5.pdf) +Please note that new reports are no longer shared or made publicly available. + ### Rancher Security Advisories and CVEs Rancher is committed to informing the community of security issues in our products. For the list of CVEs (Common Vulnerabilities and Exposures) for issues we have resolved, refer to [this page.](../reference-guides/rancher-security/security-advisories-and-cves.md) diff --git a/versioned_docs/version-2.8/pages-for-subheaders/rancher-security.md b/versioned_docs/version-2.8/pages-for-subheaders/rancher-security.md index 2b901741a5bd..b03d7c1da302 100644 --- a/versioned_docs/version-2.8/pages-for-subheaders/rancher-security.md +++ b/versioned_docs/version-2.8/pages-for-subheaders/rancher-security.md @@ -73,13 +73,15 @@ Each version of Rancher's self-assessment guide corresponds to specific versions ### Third-party Penetration Test Reports -Rancher periodically hires third parties to perform security audits and penetration tests of the Rancher 2.x software stack. The environments under test follow the Rancher provided hardening guides at the time of the testing. Previous penetration test reports are available below. +Rancher periodically hires third parties to perform security audits and penetration tests of the Rancher software stack. The environments under test follow the Rancher provided hardening guides at the time of the testing. Previous penetration test reports are available below. 
Results: - [Cure53 Pen Test - July 2019](https://releases.rancher.com/documents/security/pen-tests/2019/RAN-01-cure53-report.final.pdf) - [Untamed Theory Pen Test - March 2019](https://releases.rancher.com/documents/security/pen-tests/2019/UntamedTheory-Rancher_SecurityAssessment-20190712_v5.pdf) +Please note that new reports are no longer shared or made publicly available. + ### Rancher Security Advisories and CVEs Rancher is committed to informing the community of security issues in our products. For the list of CVEs (Common Vulnerabilities and Exposures) for issues we have resolved, refer to [this page.](../reference-guides/rancher-security/security-advisories-and-cves.md) @@ -94,4 +96,4 @@ For recommendations on securing your Rancher Manager deployments, refer to the [ ### Rancher Webhook Hardening -The Rancher webhook deploys on both the upstream Rancher cluster and all provisioned clusters. For recommendations on hardening the Rancher webhook, see the [Hardening the Rancher Webhook](../reference-guides/rancher-security/rancher-webhook-hardening.md) guide. +The Rancher webhook deploys on both the upstream Rancher cluster and all provisioned clusters. For recommendations on hardening the Rancher webhook, see the [Hardening the Rancher Webhook](../reference-guides/rancher-security/rancher-webhook-hardening.md) guide. \ No newline at end of file From 22dbffc30a59c584eabf2fa38ee865b5af6c296f Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Fri, 27 Oct 2023 10:19:38 -0700 Subject: [PATCH 03/65] Add redirects for dashboard links --- docusaurus.config.js | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/docusaurus.config.js b/docusaurus.config.js index a81769958355..0cba49a74d78 100644 --- a/docusaurus.config.js +++ b/docusaurus.config.js @@ -192,6 +192,18 @@ module.exports = { { fromExtensions: ['html', 'htm'], redirects: [ + { // Redirects for dashboard#9970 + to: '/v2.8/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/rke1-vs-rke2-differences', + from: '/v2.8/cluster-provisioning/rke-clusters/behavior-differences-between-rke1-and-rke2/' + }, + { + to: '/v2.7/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/rke1-vs-rke2-differences', + from: '/v2.7/cluster-provisioning/rke-clusters/behavior-differences-between-rke1-and-rke2/' + }, + { + to: '/v2.6/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/rke1-vs-rke2-differences', + from: '/v2.6/cluster-provisioning/rke-clusters/behavior-differences-between-rke1-and-rke2/' + }, // Redirects for dashboard#9970 (end) { // Redirects for restructure from PR #234 (start) to: '/faq/general-faq', from: '/faq' From 5a0e3c4ffa048b3236fe1a40ac619c3480ac0535 Mon Sep 17 00:00:00 2001 From: Michael Bolot Date: Fri, 27 Oct 2023 17:07:45 -0500 Subject: [PATCH 04/65] Updates to the Global roles for new 2.8 features (#898) * Updating docs to deprecate Restricted Admin * Updating GlobalRole Docs Updates the docs for GlobalRoles to include new info on the "escalate" and "bind" verbs, as well as include info on how to use the new "inheritedClusterRoles" field * Apply suggestions from code review * code review changes * merged some minor copyedits * Update docs/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md * Update docs/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md * fixed versioning for v2.8, v2.7 * Update 
docs/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md * copy/paste current version of global-permissions file in main to ensure file is correctly reverted to v2.7 --------- Co-authored-by: Marty Hernandez Avedon --- .../helm-chart-options.md | 4 +- .../helm-chart-options.md | 6 +- .../global-permissions.md | 271 +++++++++++++----- 3 files changed, 199 insertions(+), 82 deletions(-) diff --git a/docs/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md b/docs/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md index b6cd651056a9..621119862c80 100644 --- a/docs/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md +++ b/docs/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md @@ -31,7 +31,7 @@ For information on enabling experimental features, refer to [this page.](../../. | Option | Default Value | Description | | ------------------------------ | ----------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------- | | `additionalTrustedCAs` | false | `bool` - See [Additional Trusted CAs](#additional-trusted-cas) | -| `addLocal` | "true" | `string` - Have Rancher detect and import the "local" Rancher server cluster. _Note: This option is no longer available in v2.5.0. Consider using the `restrictedAdmin` option to prevent users from modifying the local cluster._ | +| `addLocal` | "true" | `string` - Have Rancher detect and import the "local" (upstream) Rancher server cluster. _Note: This option is no longer available in v2.5.0. Consider using the `restrictedAdmin` option to prevent users from modifying the local cluster._ | | `antiAffinity` | "preferred" | `string` - AntiAffinity rule for Rancher pods - "preferred, required" | | `auditLog.destination` | "sidecar" | `string` - Stream to sidecar container console or hostPath volume - "sidecar, hostPath" | | `auditLog.hostPath` | "/var/log/rancher/audit" | `string` - log file destination on host (only applies when `auditLog.destination` is set to `hostPath`) | @@ -58,7 +58,7 @@ For information on enabling experimental features, refer to [this page.](../../. | `rancherImageTag` | same as chart version | `string` - rancher/rancher image tag | | `replicas` | 3 | `int` - Number of Rancher server replicas. Setting to -1 will dynamically choose 1, 2, or 3 based on the number of available nodes in the cluster. | | `resources` | {} | `map` - rancher pod resource requests & limits | -| `restrictedAdmin` | `false` | `bool` - When this option is set to true, the initial Rancher user has restricted access to the local Kubernetes cluster to prevent privilege escalation. For more information, see the section about the [restricted-admin role.](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md#restricted-admin) | +| `restrictedAdmin` | `false` | `bool` - When this option is set to `true`, the initial Rancher user has restricted access to the local Kubernetes cluster to prevent privilege escalation. For more information, see the section about the [restricted-admin role](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md#restricted-admin). 
|
 | `systemDefaultRegistry` | "" | `string` - private registry to be used for all system container images, e.g., http://registry.example.com/ |
 | `tls` | "ingress" | `string` - See [External TLS Termination](#external-tls-termination) for details. - "ingress, external" |
 | `useBundledSystemChart` | `false` | `bool` - select to use the system-charts packaged with Rancher server. This option is used for air gapped installations. |

diff --git a/versioned_docs/version-2.8/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md b/versioned_docs/version-2.8/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md
index 6e0f7263d1ca..c1602120ff3e 100644
--- a/versioned_docs/version-2.8/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md
+++ b/versioned_docs/version-2.8/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md
@@ -31,7 +31,7 @@ For information on enabling experimental features, refer to [this page.](../../.
 | Option | Default Value | Description |
 | ------------------------------ | ----------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------- |
 | `additionalTrustedCAs` | false | `bool` - See [Additional Trusted CAs](#additional-trusted-cas) |
-| `addLocal` | "true" | `string` - Have Rancher detect and import the "local" Rancher server cluster. _Note: This option is no longer available in v2.5.0. Consider using the `restrictedAdmin` option to prevent users from modifying the local cluster._ |
+| `addLocal` | "true" | `string` - Have Rancher detect and import the "local" (upstream) Rancher server cluster. _Note: This option is no longer available in v2.5.0._ |
 | `antiAffinity` | "preferred" | `string` - AntiAffinity rule for Rancher pods - "preferred, required" |
 | `auditLog.destination` | "sidecar" | `string` - Stream to sidecar container console or hostPath volume - "sidecar, hostPath" |
 | `auditLog.hostPath` | "/var/log/rancher/audit" | `string` - log file destination on host (only applies when `auditLog.destination` is set to `hostPath`) |
@@ -58,11 +58,11 @@ For information on enabling experimental features, refer to [this page.](../../.
 | `rancherImageTag` | same as chart version | `string` - rancher/rancher image tag |
 | `replicas` | 3 | `int` - Number of Rancher server replicas. Setting to -1 will dynamically choose 1, 2, or 3 based on the number of available nodes in the cluster. |
 | `resources` | {} | `map` - rancher pod resource requests & limits |
-| `restrictedAdmin` | `false` | `bool` - When this option is set to true, the initial Rancher user has restricted access to the local Kubernetes cluster to prevent privilege escalation. For more information, see the section about the [restricted-admin role.](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md#restricted-admin) |
+| `restrictedAdmin` | `false` | `bool` - When this option is set to `true`, the initial Rancher user has restricted access to the local Kubernetes cluster to prevent privilege escalation.
For more information, see the section about the [restricted-admin role.](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md#restricted-admin) _Note: this option is deprecated, and may be removed in v2.10.0 or later._ |
 | `systemDefaultRegistry` | "" | `string` - private registry to be used for all system container images, e.g., http://registry.example.com/ |
 | `tls` | "ingress" | `string` - See [External TLS Termination](#external-tls-termination) for details. - "ingress, external" |
 | `useBundledSystemChart` | `false` | `bool` - select to use the system-charts packaged with Rancher server. This option is used for air gapped installations. |
-| `global.cattle.psp.enabled` | `true` | `bool` - select 'false' to disable PSPs for Kubernetes v1.25 and above when using Rancher v2.7.2-v2.7.4. When using Rancher v2.7.5 and above, Rancher attempts to detect if a cluster is running a Kubernetes version where PSPs are not supported, and will default it's usage of PSPs to false if it can determine that PSPs are not supported in the cluster. Users can still manually override this by explicitly providing `true` or `false` for this value. Rancher will still use PSPs by default in clusters which support PSPs (such as clusters running Kubernetes v1.24 or lower).|
+| `global.cattle.psp.enabled` | `true` | `bool` - select 'false' to disable PSPs for Kubernetes v1.25 and above when using Rancher v2.7.2-v2.7.4. When using Rancher v2.7.5 and above, Rancher attempts to detect if a cluster is running a Kubernetes version where PSPs are not supported, and will default its usage of PSPs to false if it can determine that PSPs are not supported in the cluster. Users can still manually override this by explicitly providing `true` or `false` for this value. Rancher will still use PSPs by default in clusters which support PSPs (such as clusters running Kubernetes v1.24 or lower). |

 ### Bootstrap Password

diff --git a/versioned_docs/version-2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md b/versioned_docs/version-2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md
index 64411298a622..9e18ce4b88ce 100644
--- a/versioned_docs/version-2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md
+++ b/versioned_docs/version-2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md
@@ -12,7 +12,7 @@ Global Permissions define user authorization outside the scope of any particular

 - **Administrator:** These users have full control over the entire Rancher system and all clusters within it.

-- **Restricted Admin:** These users have full control over downstream clusters, but cannot alter the local Kubernetes cluster.
+- **Restricted Admin (Deprecated):** These users have full control over downstream clusters, but cannot alter the local Kubernetes cluster.

 - **Standard User:** These users can create new clusters and use them. Standard users can also assign other users permissions to their clusters.

You cannot update or delete the built-in Global Permissions.
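
Since global permissions are backed by GlobalRole resources in the Rancher management (local) cluster, one quick way to review them is with kubectl. This is a minimal sketch, assuming your kubeconfig points at the local cluster and your user is permitted to list GlobalRoles:

```bash
# List the GlobalRoles that back Rancher's global permissions.
kubectl get globalroles.management.cattle.io

# Inspect the rules behind a single built-in role, e.g. the admin role.
kubectl get globalroles.management.cattle.io admin -o yaml
```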
-## Restricted Admin - -A new `restricted-admin` role was created in Rancher v2.5 in order to prevent privilege escalation from the local Rancher server Kubernetes cluster. This role has full administrator access to all downstream clusters managed by Rancher, but it does not have permission to alter the local Kubernetes cluster. - -The `restricted-admin` can create other `restricted-admin` users with an equal level of access. - -A new setting was added to Rancher to set the initial bootstrapped administrator to have the `restricted-admin` role. This applies to the first user created when the Rancher server is started for the first time. If the environment variable is set, then no global administrator would be created, and it would be impossible to create the global administrator through Rancher. - -To bootstrap Rancher with the `restricted-admin` as the initial user, the Rancher server should be started with the following environment variable: - -``` -CATTLE_RESTRICTED_DEFAULT_ADMIN=true -``` -### List of `restricted-admin` Permissions - -The following table lists the permissions and actions that a `restricted-admin` should have in comparison with the `Administrator` and `Standard User` roles: - -| Category | Action | Global Admin | Standard User | Restricted Admin | Notes for Restricted Admin role | -| -------- | ------ | ------------ | ------------- | ---------------- | ------------------------------- | -| Local Cluster functions | Manage Local Cluster (List, Edit, Import Host) | Yes | No | No | | -| | Create Projects/namespaces | Yes | No | No | | -| | Add cluster/project members | Yes | No | No | | -| | Global DNS | Yes | No | No | | -| | Access to management cluster for CRDs and CRs | Yes | No | Yes | | -| | Save as RKE Template | Yes | No | No | | -| Security | | | | | | -| Enable auth | Configure Authentication | Yes | No | Yes | | -| Roles | Create/Assign GlobalRoles | Yes | No (Can list) | Yes | Auth webhook allows creating globalrole for perms already present | -| | Create/Assign ClusterRoles | Yes | No (Can list) | Yes | Not in local cluster | -| | Create/Assign ProjectRoles | Yes | No (Can list) | Yes | Not in local cluster | -| Users | Add User/Edit/Delete/Deactivate User | Yes | No | Yes | | -| Groups | Assign Global role to groups | Yes | No | Yes | As allowed by the webhook | -| | Refresh Groups | Yes | No | Yes | | -| PSP's | Manage PSP templates | Yes | No (Can list) | Yes | Same privileges as Global Admin for PSPs | -| Tools | | | | | | -| | Manage RKE Templates | Yes | No | Yes | | -| | Manage Global Catalogs | Yes | No | Yes | Cannot edit/delete built-in system catalog. 
Can manage Helm library | -| | Cluster Drivers | Yes | No | Yes | | -| | Node Drivers | Yes | No | Yes | | -| | GlobalDNS Providers | Yes | Yes (Self) | Yes | | -| | GlobalDNS Entries | Yes | Yes (Self) | Yes | | -| Settings | | | | | | -| | Manage Settings | Yes | No (Can list) | No (Can list) | | -| User | | | | | | -| | Manage API Keys | Yes (Manage all) | Yes (Manage self) | Yes (Manage self) | | -| | Manage Node Templates | Yes | Yes (Manage self) | Yes (Manage self) | Can only manage their own node templates and not those created by other users | -| | Manage Cloud Credentials | Yes | Yes (Manage self) | Yes (Manage self) | Can only manage their own cloud credentials and not those created by other users | -| Downstream Cluster | Create Cluster | Yes | Yes | Yes | | -| | Edit Cluster | Yes | Yes | Yes | | -| | Rotate Certificates | Yes | | Yes | | -| | Snapshot Now | Yes | | Yes | | -| | Restore Snapshot | Yes | | Yes | | -| | Save as RKE Template | Yes | No | Yes | | -| | Run CIS Scan | Yes | Yes | Yes | | -| | Add Members | Yes | Yes | Yes | | -| | Create Projects | Yes | Yes | Yes | | -| Feature Charts since v2.5 | | | | | | -| | Install Fleet | Yes | | Yes | Should not be able to run Fleet in local cluster | -| | Deploy EKS cluster | Yes | Yes | Yes | | -| | Deploy GKE cluster | Yes | Yes | Yes | | -| | Deploy AKS cluster | Yes | Yes | Yes | | - - -### Changing Global Administrators to Restricted Admins - -If Rancher already has a global administrator, they should change all global administrators over to the new `restricted-admin` role. - -This can be done through **Security > Users** and moving any Administrator role over to Restricted Administrator. - -Signed-in users can change themselves over to the `restricted-admin` if they wish, but they should only do that as the last step, otherwise they won't have the permissions to do so. - ## Global Permission Assignment Global permissions for local users are assigned differently than users who log in to Rancher using external authentication. @@ -135,13 +64,15 @@ The default roles, Administrator and Standard User, each come with multiple glob Administrators can enforce custom global permissions in multiple ways: -- [Changing the default permissions for new users](#configuring-default-global-permissions) -- [Configuring global permissions for individual users](#configuring-global-permissions-for-individual-users) -- [Configuring global permissions for groups](#configuring-global-permissions-for-groups) +- [Creating custom global roles](#creating-custom-global-roles). +- [Changing the default permissions for new users](#configuring-default-global-permissions). +- [Configuring global permissions for individual users](#configuring-global-permissions-for-individual-users). +- [Configuring global permissions for groups](#configuring-global-permissions-for-groups). -### Custom Global Permissions Reference +### Combining Built-in GlobalRoles -The following table lists each custom global permission available and whether it is included in the default global permissions, `Administrator`, `Standard User` and `User-Base`. +Rancher provides several GlobalRoles which grant granular permissions for certain common use cases. +The following table lists each built-in global permission and whether it is included in the default global permissions, `Administrator`, `Standard User` and `User-Base`. 
| Custom Global Permission | Administrator | Standard User | User-Base |
| ---------------------------------- | ------------- | ------------- |-----------|
@@ -171,6 +102,113 @@ For details on which Kubernetes resources correspond to each global permission,

:::

### Custom GlobalRoles

You can create custom GlobalRoles to satisfy use cases not directly addressed by built-in GlobalRoles.

Create custom GlobalRoles through the UI or through automation (such as the Rancher Kubernetes API). You can specify the same types of rules as in upstream Roles and ClusterRoles.

#### Escalate and Bind Verbs

When granting permissions on GlobalRoles, keep in mind that Rancher respects the `escalate` and `bind` verbs, in a similar fashion to [Kubernetes](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update).

Both of these verbs, which are given on the GlobalRoles resource, can grant users the permission to bypass Rancher's privilege escalation checks. This potentially allows users to become admins. Since this represents a serious security risk, `bind` and `escalate` should be granted to users with great caution.

The `escalate` verb allows users to change a GlobalRole and add any permission, even if the user doesn't have the permissions in the current GlobalRole or the new version of the GlobalRole.

The `bind` verb allows users to create a GlobalRoleBinding to the specified GlobalRole, even if they do not have the permissions in the GlobalRole.

:::danger

The wildcard verb `*` also includes the `bind` and `escalate` verbs. This means that giving `*` on GlobalRoles to a user also gives them both `escalate` and `bind`.

:::

##### Custom GlobalRole Examples

To grant permission to escalate only the `test-gr` GlobalRole:

```yaml
rules:
- apiGroups:
  - 'management.cattle.io'
  resources:
  - 'globalroles'
  resourceNames:
  - 'test-gr'
  verbs:
  - 'escalate'
```

To grant permission to escalate all GlobalRoles:

```yaml
rules:
- apiGroups:
  - 'management.cattle.io'
  resources:
  - 'globalroles'
  verbs:
  - 'escalate'
```

To grant permission to create bindings (which bypass escalation checks) to only the `test-gr` GlobalRole:

```yaml
rules:
- apiGroups:
  - 'management.cattle.io'
  resources:
  - 'globalroles'
  resourceNames:
  - 'test-gr'
  verbs:
  - 'bind'
- apiGroups:
  - 'management.cattle.io'
  resources:
  - 'globalrolebindings'
  verbs:
  - 'create'
```

To grant `*` permissions (which include both `escalate` and `bind`):

```yaml
rules:
- apiGroups:
  - 'management.cattle.io'
  resources:
  - 'globalroles'
  verbs:
  - '*'
```

#### GlobalRole Permissions on Downstream Clusters

GlobalRoles can grant one or more RoleTemplates on every downstream cluster through the `inheritedClusterRoles` field. Values in this field must refer to a RoleTemplate which exists and has a `context` of Cluster.

With this field, users gain the specified permissions on all current or future downstream clusters. For example, consider the following GlobalRole:

```yaml
apiVersion: management.cattle.io/v3
kind: GlobalRole
displayName: All Downstream Owner
metadata:
  name: all-downstream-owner
inheritedClusterRoles:
- cluster-owner
```

Any user with this role will be a cluster-owner on all downstream clusters. If a new cluster is added, regardless of type, the user will be an owner on that cluster as well.
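
To grant such a role, you can bind it to a user with a GlobalRoleBinding. The following is a minimal sketch, where `u-abc123` is a hypothetical Rancher user ID and `globalRoleName` must match the GlobalRole's `metadata.name`:

```yaml
apiVersion: management.cattle.io/v3
kind: GlobalRoleBinding
metadata:
  # generateName lets Rancher pick a unique name for the binding.
  generateName: grb-
# globalRoleName refers to the GlobalRole defined above.
globalRoleName: all-downstream-owner
# userName is the ID of the Rancher user receiving the role (hypothetical here).
userName: u-abc123
```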
+ +:::danger + +Using this field on [default GlobalRoles](#configuring-default-global-permissions) may result in users gaining excessive permissions. + +::: + + ### Configuring Default Global Permissions If you want to restrict the default permissions for new users, you can remove the `user` permission as default role and then assign multiple individual permissions as default instead. Conversely, you can also add administrative permissions on top of a set of other standard permissions. @@ -249,3 +287,82 @@ To refresh group memberships, 1. Click **Refresh Group Memberships**. **Result:** Any changes to the group members' permissions will take effect. + +## Restricted Admin + +:::warning Deprecated + +The Restricted Admin role is deprecated, and will be removed in a future version of Rancher (2.10 or higher). You should make a custom role with the desired permissions instead of relying on this built-in role. + +::: + +A new `restricted-admin` role was created in Rancher v2.5 in order to prevent privilege escalation on the local Rancher server Kubernetes cluster. This role has full administrator access to all downstream clusters managed by Rancher, but it does not have permission to alter the local Kubernetes cluster. + +The `restricted-admin` can create other `restricted-admin` users with an equal level of access. + +A new setting was added to Rancher to set the initial bootstrapped administrator to have the `restricted-admin` role. This applies to the first user created when the Rancher server is started for the first time. If the environment variable is set, then no global administrator would be created, and it would be impossible to create the global administrator through Rancher. + +To bootstrap Rancher with the `restricted-admin` as the initial user, the Rancher server should be started with the following environment variable: + +``` +CATTLE_RESTRICTED_DEFAULT_ADMIN=true +``` +### List of `restricted-admin` Permissions + +The following table lists the permissions and actions that a `restricted-admin` should have in comparison with the `Administrator` and `Standard User` roles: + +| Category | Action | Global Admin | Standard User | Restricted Admin | Notes for Restricted Admin role | +| -------- | ------ | ------------ | ------------- | ---------------- | ------------------------------- | +| Local Cluster functions | Manage Local Cluster (List, Edit, Import Host) | Yes | No | No | | +| | Create Projects/namespaces | Yes | No | No | | +| | Add cluster/project members | Yes | No | No | | +| | Global DNS | Yes | No | No | | +| | Access to management cluster for CRDs and CRs | Yes | No | Yes | | +| | Save as RKE Template | Yes | No | No | | +| Security | | | | | | +| Enable auth | Configure Authentication | Yes | No | Yes | | +| Roles | Create/Assign GlobalRoles | Yes | No (Can list) | Yes | Auth webhook allows creating globalrole for perms already present | +| | Create/Assign ClusterRoles | Yes | No (Can list) | Yes | Not in local cluster | +| | Create/Assign ProjectRoles | Yes | No (Can list) | Yes | Not in local cluster | +| Users | Add User/Edit/Delete/Deactivate User | Yes | No | Yes | | +| Groups | Assign Global role to groups | Yes | No | Yes | As allowed by the webhook | +| | Refresh Groups | Yes | No | Yes | | +| PSP's | Manage PSP templates | Yes | No (Can list) | Yes | Same privileges as Global Admin for PSPs | +| Tools | | | | | | +| | Manage RKE Templates | Yes | No | Yes | | +| | Manage Global Catalogs | Yes | No | Yes | Cannot edit/delete built-in system catalog. 
Can manage Helm library |
| | Cluster Drivers | Yes | No | Yes | |
| | Node Drivers | Yes | No | Yes | |
| | GlobalDNS Providers | Yes | Yes (Self) | Yes | |
| | GlobalDNS Entries | Yes | Yes (Self) | Yes | |
| Settings | | | | | |
| | Manage Settings | Yes | No (Can list) | No (Can list) | |
| User | | | | | |
| | Manage API Keys | Yes (Manage all) | Yes (Manage self) | Yes (Manage self) | |
| | Manage Node Templates | Yes | Yes (Manage self) | Yes (Manage self) | Can only manage their own node templates and not those created by other users |
| | Manage Cloud Credentials | Yes | Yes (Manage self) | Yes (Manage self) | Can only manage their own cloud credentials and not those created by other users |
| Downstream Cluster | Create Cluster | Yes | Yes | Yes | |
| | Edit Cluster | Yes | Yes | Yes | |
| | Rotate Certificates | Yes | | Yes | |
| | Snapshot Now | Yes | | Yes | |
| | Restore Snapshot | Yes | | Yes | |
| | Save as RKE Template | Yes | No | Yes | |
| | Run CIS Scan | Yes | Yes | Yes | |
| | Add Members | Yes | Yes | Yes | |
| | Create Projects | Yes | Yes | Yes | |
| Feature Charts since v2.5 | | | | | |
| | Install Fleet | Yes | | Yes | Should not be able to run Fleet in local cluster |
| | Deploy EKS cluster | Yes | Yes | Yes | |
| | Deploy GKE cluster | Yes | Yes | Yes | |
| | Deploy AKS cluster | Yes | Yes | Yes | |


### Changing Global Administrators to Restricted Admins

In previous versions, the docs recommended changing all users over to Restricted Admin if the role was in use. Users are now encouraged to build a custom role with the cluster permissions feature, and to migrate any current restricted admins to that approach.

This can be done through **Security > Users**, by moving any Administrator role over to Restricted Administrator.

Signed-in users can change themselves over to the `restricted-admin` if they wish, but they should only do that as the last step; otherwise, they won't have the permissions to do so.


From 9615b78a9ab79d259b439c76c75d33d1faeac84c Mon Sep 17 00:00:00 2001
From: Marty Hernandez Avedon
Date: Mon, 30 Oct 2023 10:48:33 -0400
Subject: [PATCH 05/65] #857 Add a API quickstart page under the (to be added)
 API section (#858)

* initial engineering draft of api quickstart

* spacing, headings

* copyedit

* note syntax added for last warning

* link to create api keys page, copy edits

* formating

* Apply suggestions from code review

Co-authored-by: Billy Tat

* indentation, intro

* Update docs/api/quickstart.md

Co-authored-by: Billy Tat

* Apply suggestions from code review

Co-authored-by: Michael Bolot

* Update docs/api/quickstart.md

* removed commented-out line

* moved text from step 5 to step 3, made into its own step, added info about not all resources offering detailed output

* added sidebar entry

* mv'd to version-2.8 dir

* rm'd remainders in /docs dir

---------

Co-authored-by: Billy Tat
Co-authored-by: Michael Bolot
---
 versioned_docs/version-2.8/api/quickstart.md | 140 +++++++++++++++++++
 versioned_sidebars/version-2.8-sidebars.json | 10 ++
 2 files changed, 150 insertions(+)
 create mode 100644 versioned_docs/version-2.8/api/quickstart.md

diff --git a/versioned_docs/version-2.8/api/quickstart.md b/versioned_docs/version-2.8/api/quickstart.md
new file mode 100644
index 000000000000..4529964d59ab
--- /dev/null
+++ b/versioned_docs/version-2.8/api/quickstart.md
@@ -0,0 +1,140 @@
+---
+title: API Quick Start Guide
+---
+
+You can access Rancher's resources through the Kubernetes API.
This guide will help you get started using this API as a Rancher user.

1. In the upper left corner, click **☰ > Global Settings**.
2. Find and copy the address in the `server-url` field.
3. [Create](../reference-guides/user-settings/api-keys#creating-an-api-key) a Rancher API key with no scope.

    :::danger

    A Rancher API key with no scope grants unrestricted access to all resources that the user can access. To prevent unauthorized use, this key should be stored securely and rotated frequently.

    :::

4. Create a `kubeconfig.yaml` file. Replace `$SERVER_URL` with the server URL and `$API_KEY` with your Rancher API key:

    ```yaml
    apiVersion: v1
    kind: Config
    clusters:
    - name: "rancher"
      cluster:
        server: "$SERVER_URL"

    users:
    - name: "rancher"
      user:
        token: "$API_KEY"

    contexts:
    - name: "rancher"
      context:
        user: "rancher"
        cluster: "rancher"

    current-context: "rancher"
    ```

You can use this file with any compatible tool, such as kubectl or [client-go](https://github.com/kubernetes/client-go). For a quick demo, see the [kubectl example](#api-kubectl-example).

For more information on handling more complex certificate setups, see [Specifying CA Certs](#specifying-ca-certs).

For more information on available kubeconfig options, see the [upstream documentation](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).

## API kubectl Example

1. Set your KUBECONFIG environment variable to the kubeconfig file you just created:

    ```bash
    export KUBECONFIG=$(pwd)/kubeconfig.yaml
    ```

2. Use `kubectl explain` to view the available fields for projects, or complex sub-fields of resources:

    ```bash
    kubectl explain projects
    kubectl explain projects.spec
    ```

    Not all resources may have detailed output.

3. Add the following content to a file named `project.yaml`:

    ```yaml
    apiVersion: management.cattle.io/v3
    kind: Project
    metadata:
      # name should be unique across all projects in every cluster
      name: p-abc123
      # generateName can be used instead of `name` to randomly generate a name.
      # generateName: p-
      # namespace should match spec.ClusterName.
      namespace: local
    spec:
      # clusterName should match `metadata.Name` of the target cluster.
      clusterName: local
      description: Example Project
      # displayName is the human-readable name and is visible from the UI.
      displayName: Example
    ```

4. Create the project:

    ```bash
    kubectl create -f project.yaml
    ```

5. Delete the project:

    How you delete the project depends on how you created the project name.

    **A. If you used `name` when creating the project**:

    ```bash
    kubectl delete -f project.yaml
    ```

    **B. If you used `generateName`**:

    Replace `$PROJECT_NAME` with the randomly generated name of the project displayed by kubectl after you created the project.

    ```bash
    kubectl delete project $PROJECT_NAME -n local
    ```

## Specifying CA Certs

To ensure that your tools can recognize Rancher's CA certificates, most setups require additional modifications to the above template.

1. In the upper left corner, click **☰ > Global Settings**.
2. Find and copy the value in the `ca-certs` field.
3. Save the value in a file named `rancher.crt`.

    :::note

    If your Rancher instance is proxied by another service, you must extract the certificate that the service is using, and add it to the kubeconfig file, as demonstrated in step 5.

    :::

4. The following commands convert `rancher.crt` to base64 output, trim all newlines, update the cluster in the kubeconfig with the certificate, and finish by removing the `rancher.crt` file:

    ```bash
    export KUBECONFIG=$PATH_TO_RANCHER_KUBECONFIG
    kubectl config set clusters.rancher.certificate-authority-data $(cat rancher.crt | base64 -i - | tr -d '\n')
    rm rancher.crt
    ```

5. (Optional) If you use self-signed certificates that aren't trusted by your system, you can set the insecure option in your kubeconfig with kubectl:

    :::danger

    This option shouldn't be used in production as it is a security risk.

    :::

    ```bash
    export KUBECONFIG=$PATH_TO_RANCHER_KUBECONFIG
    kubectl config set clusters.rancher.insecure-skip-tls-verify true
    ```

    If your Rancher instance is proxied by another service, you must extract the certificate that the service is using, and add it to the kubeconfig file, as demonstrated above.
diff --git a/versioned_sidebars/version-2.8-sidebars.json b/versioned_sidebars/version-2.8-sidebars.json
index 8c30c5f4a133..0e0f3c923c83 100644
--- a/versioned_sidebars/version-2.8-sidebars.json
+++ b/versioned_sidebars/version-2.8-sidebars.json
@@ -1305,6 +1305,16 @@
+      {
+        "type": "category",
+        "label": "Rancher Kubernetes API",
+        "items": [
+          "api/quickstart",
+          {
+
+          }
+        ]
+      },
       "contribute-to-rancher",
       {
         "type": "category",

From 2fe51f116ea1e468b60046e7af04dedb01dfeed1 Mon Sep 17 00:00:00 2001
From: Billy Tat
Date: Mon, 30 Oct 2023 17:02:53 -0700
Subject: [PATCH 06/65] Fix version-2.8 broken links

---
 .../set-up-cloud-providers/google-compute-engine.md | 2 +-
 .../workloads-and-pods/deploy-workloads.md | 2 +-
 .../integrations-in-rancher/harvester/overview.md | 6 +++---
 .../monitoring-and-alerting/built-in-dashboards.md | 2 +-
 ...authentication-permissions-and-global-configuration.md | 2 +-
 .../pages-for-subheaders/monitoring-and-alerting.md | 2 +-
 .../pages-for-subheaders/use-existing-nodes.md | 2 +-
 .../tuning-and-best-practices-for-rancher-at-scale.md | 8 ++++----
 8 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/google-compute-engine.md b/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/google-compute-engine.md
index ffd1cf6bdb48..11c20dcfd278 100644
--- a/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/google-compute-engine.md
+++ b/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/google-compute-engine.md
@@ -8,7 +8,7 @@ title: Setting up the Google Compute Engine Cloud Provider

 In this section, you'll learn how to enable the Google Compute Engine (GCE) cloud provider for custom clusters in Rancher. A custom cluster is one in which Rancher installs Kubernetes on existing nodes.
-The official Kubernetes documentation for the GCE cloud provider is [here.](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#gce) +The official Kubernetes documentation for the GCE cloud provider is [here.](https://github.com/kubernetes/website/blob/release-1.18/content/en/docs/concepts/cluster-administration/cloud-providers.md#gce) :::note Prerequisites: diff --git a/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-resources-setup/workloads-and-pods/deploy-workloads.md b/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-resources-setup/workloads-and-pods/deploy-workloads.md index 82d495cce05c..64d8e46af463 100644 --- a/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-resources-setup/workloads-and-pods/deploy-workloads.md +++ b/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-resources-setup/workloads-and-pods/deploy-workloads.md @@ -45,7 +45,7 @@ Deploy a workload to run an application in one or more containers. - In [Amazon AWS](https://aws.amazon.com/), the nodes must be in the same Availability Zone and possess IAM permissions to attach/unattach volumes. - - The cluster must be using the [AWS cloud provider](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#aws) option. For more information on enabling this option see [Creating an Amazon EC2 Cluster](../../launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md) or [Creating a Custom Cluster](../../../../pages-for-subheaders/use-existing-nodes.md). + - The cluster must be using the [AWS cloud provider](https://github.com/kubernetes/website/blob/release-1.18/content/en/docs/concepts/cluster-administration/cloud-providers.md#aws) option. For more information on enabling this option see [Creating an Amazon EC2 Cluster](../../launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md) or [Creating a Custom Cluster](../../../../pages-for-subheaders/use-existing-nodes.md). ::: diff --git a/versioned_docs/version-2.8/integrations-in-rancher/harvester/overview.md b/versioned_docs/version-2.8/integrations-in-rancher/harvester/overview.md index 43bcfb29db18..7146cc06f83c 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/harvester/overview.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/harvester/overview.md @@ -6,7 +6,7 @@ Introduced in Rancher v2.6.1, [Harvester](https://docs.harvesterhci.io/) is an o ### Feature Flag -The Harvester feature flag is used to manage access to the Virtualization Management (VM) page in Rancher where users can navigate directly to Harvester clusters and access the Harvester UI. The Harvester feature flag is enabled by default. Click [here](../pages-for-subheaders/enable-experimental-features.md) for more information on feature flags in Rancher. +The Harvester feature flag is used to manage access to the Virtualization Management (VM) page in Rancher where users can navigate directly to Harvester clusters and access the Harvester UI. The Harvester feature flag is enabled by default. Click [here](../../pages-for-subheaders/enable-experimental-features.md) for more information on feature flags in Rancher. To navigate to the Harvester cluster, click **☰ > Virtualization Management**. From Harvester Clusters page, click one of the clusters listed to go to the single Harvester cluster view. 
@@ -24,7 +24,7 @@ The [Harvester node driver](https://docs.harvesterhci.io/v1.1/rancher/node/node- Harvester allows `.ISO` images to be uploaded and displayed through the Harvester UI, but this is not supported in the Rancher UI. This is because `.ISO` images usually require additional setup that interferes with a clean deployment (without requiring user intervention), and they are not typically used in cloud environments. -Click [here](../pages-for-subheaders/about-provisioning-drivers.md#node-drivers) for more information on node drivers in Rancher. +Click [here](../../pages-for-subheaders/about-provisioning-drivers.md#node-drivers) for more information on node drivers in Rancher. ### Port Requirements @@ -33,6 +33,6 @@ The port requirements for the Harvester cluster can be found [here](https://docs In addition, other networking considerations are as follows: - Be sure to enable VLAN trunk ports of the physical switch for VM VLAN networks. -- Follow the networking setup guidance [here](https://docs.harvesterhci.io/v1.1/networking/clusternetwork). +- Follow the networking setup guidance [here](https://docs.harvesterhci.io/v1.1/networking/index). For other port requirements for other guest clusters, such as K3s and RKE1, please see [these docs](https://docs.harvesterhci.io/v1.1/install/requirements/#guest-clusters). diff --git a/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md b/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md index 9a464ba3ede2..63df3f53f3d7 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md @@ -114,4 +114,4 @@ For more information on configuring PrometheusRules in Rancher, see [this page.] ## Legacy UI -For information on the dashboards available in v2.2 to v2.4 of Rancher, before the introduction of the `rancher-monitoring` application, see the [Rancher v2.0—v2.4 docs](../../../versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/cluster-monitoring/viewing-metrics.md). +For information on the dashboards available in v2.2 to v2.4 of Rancher, before the introduction of the `rancher-monitoring` application, see the [Rancher v2.0—v2.4 docs](/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/cluster-monitoring/viewing-metrics.md). 
diff --git a/versioned_docs/version-2.8/pages-for-subheaders/authentication-permissions-and-global-configuration.md b/versioned_docs/version-2.8/pages-for-subheaders/authentication-permissions-and-global-configuration.md index 509c7bdedec9..42dc72fdefe8 100644 --- a/versioned_docs/version-2.8/pages-for-subheaders/authentication-permissions-and-global-configuration.md +++ b/versioned_docs/version-2.8/pages-for-subheaders/authentication-permissions-and-global-configuration.md @@ -82,4 +82,4 @@ The following features are available under **Global Configuration**: - **Global DNS Entries** - **Global DNS Providers** -As these are legacy features, please see the Rancher v2.0—v2.4 docs on [catalogs](../../versioned_docs/version-2.0-2.4/pages-for-subheaders/helm-charts-in-rancher.md), [global DNS entries](../../versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/globaldns.md#adding-a-global-dns-entry), and [global DNS providers](../../versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/globaldns.md#editing-a-global-dns-provider) for more details. \ No newline at end of file +As these are legacy features, please see the Rancher v2.0—v2.4 docs on [catalogs](/versioned_docs/version-2.0-2.4/pages-for-subheaders/helm-charts-in-rancher.md), [global DNS entries](/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/globaldns.md#adding-a-global-dns-entry), and [global DNS providers](/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/globaldns.md#editing-a-global-dns-provider) for more details. \ No newline at end of file diff --git a/versioned_docs/version-2.8/pages-for-subheaders/monitoring-and-alerting.md b/versioned_docs/version-2.8/pages-for-subheaders/monitoring-and-alerting.md index e76dba53edbc..5dd758fab603 100644 --- a/versioned_docs/version-2.8/pages-for-subheaders/monitoring-and-alerting.md +++ b/versioned_docs/version-2.8/pages-for-subheaders/monitoring-and-alerting.md @@ -11,7 +11,7 @@ The `rancher-monitoring` application can quickly deploy leading open-source moni Introduced in Rancher v2.5, the application is powered by [Prometheus](https://prometheus.io/), [Grafana](https://grafana.com/grafana/), [Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/), the [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator), and the [Prometheus adapter.](https://github.com/DirectXMan12/k8s-prometheus-adapter) -For information on V1 monitoring and alerting, available in Rancher v2.2 up to v2.4, please see the Rancher v2.0—v2.4 docs on [cluster monitoring](../../versioned_docs/version-2.0-2.4/pages-for-subheaders/cluster-monitoring.md), [alerting](../../versioned_docs/version-2.0-2.4/pages-for-subheaders/cluster-alerts.md), [notifiers](../../versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/notifiers.md) and other [tools](../../versioned_docs/version-2.0-2.4/pages-for-subheaders/project-tools.md). +For information on V1 monitoring and alerting, available in Rancher v2.2 up to v2.4, please see the Rancher v2.0—v2.4 docs on [cluster monitoring](/versioned_docs/version-2.0-2.4/pages-for-subheaders/cluster-monitoring.md), [alerting](/versioned_docs/version-2.0-2.4/pages-for-subheaders/cluster-alerts.md), [notifiers](/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/notifiers.md) and other [tools](/versioned_docs/version-2.0-2.4/pages-for-subheaders/project-tools.md). 
Using the `rancher-monitoring` application, you can quickly deploy leading open-source monitoring and alerting solutions onto your cluster. diff --git a/versioned_docs/version-2.8/pages-for-subheaders/use-existing-nodes.md b/versioned_docs/version-2.8/pages-for-subheaders/use-existing-nodes.md index 2aeb05bc488e..ef751d50e55a 100644 --- a/versioned_docs/version-2.8/pages-for-subheaders/use-existing-nodes.md +++ b/versioned_docs/version-2.8/pages-for-subheaders/use-existing-nodes.md @@ -103,7 +103,7 @@ If you have configured your cluster to use Amazon as **Cloud Provider**, tag you :::note -You can use Amazon EC2 instances without configuring a cloud provider in Kubernetes. You only have to configure the cloud provider if you want to use specific Kubernetes cloud provider functionality. For more information, see [Kubernetes Cloud Providers](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/) +You can use Amazon EC2 instances without configuring a cloud provider in Kubernetes. You only have to configure the cloud provider if you want to use specific Kubernetes cloud provider functionality. For more information, see [Kubernetes Cloud Providers](https://github.com/kubernetes/website/blob/release-1.18/content/en/docs/concepts/cluster-administration/cloud-providers.md) ::: diff --git a/versioned_docs/version-2.8/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md b/versioned_docs/version-2.8/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md index 865f1d32f6ec..16707e39feb5 100644 --- a/versioned_docs/version-2.8/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md +++ b/versioned_docs/version-2.8/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md @@ -28,8 +28,8 @@ Etcd is the backing database for Kubernetes and for Rancher. The database may ev This is typical in Rancher, as many operations create new `RoleBinding` objects in the upstream cluster as a side effect. You can reduce the number of `RoleBindings` in the upstream cluster in the following ways: -* Limit the use of the [Restricted Admin](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions#restricted-admin) role. Apply other roles wherever possible. -* If you use [external authentication](../../../pages-for-subheaders/authentication-config), use groups to assign roles. +* Limit the use of the [Restricted Admin](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md#restricted-admin) role. Apply other roles wherever possible. +* If you use [external authentication](../../../pages-for-subheaders/authentication-config.md), use groups to assign roles. * Only add users to clusters and projects when necessary. * Remove clusters and projects when they are no longer needed. * Only use custom roles if necessary. @@ -59,7 +59,7 @@ You should remove any remaining legacy apps that appear in the Cluster Manager U ### Using the Authorized Cluster Endpoint (ACE) -An [Authorized Cluster Endpoint](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters#4-authorized-cluster-endpoint) (ACE) provides access to the Kubernetes API of Rancher-provisioned RKE, RKE2, and K3s clusters. 
When enabled, the ACE adds a context to kubeconfig files generated for the cluster. The context uses a direct endpoint to the cluster, thereby bypassing Rancher. This reduces load on Rancher for cases where unmediated API access is acceptable or preferable. See [Authorized Cluster Endpoint](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters#4-authorized-cluster-endpoint) for more information and configuration instructions. +An [Authorized Cluster Endpoint](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) (ACE) provides access to the Kubernetes API of Rancher-provisioned RKE, RKE2, and K3s clusters. When enabled, the ACE adds a context to kubeconfig files generated for the cluster. The context uses a direct endpoint to the cluster, thereby bypassing Rancher. This reduces load on Rancher for cases where unmediated API access is acceptable or preferable. See [Authorized Cluster Endpoint](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) for more information and configuration instructions. ### Reducing Event Handler Executions @@ -93,7 +93,7 @@ You should keep the local Kubernetes cluster up to date. This will ensure that y Etcd is the backend database for Kubernetes and for Rancher. It plays a very important role in Rancher performance. -The two main bottlenecks to [etcd performance](https://etcd.io/docs/v3.4/op-guide/performance/) are disk and network speed. Etcd should run on dedicated nodes with a fast network setup and with SSDs that have high input/output operations per second (IOPS). For more information regarding etcd performance, see [Slow etcd performance (performance testing and optimization)](https://www.suse.com/support/kb/doc/?id=000020100) and [Tuning etcd for Large Installations](../../../how-to-guides/advanced-user-guides/tune-etcd-for-large-installs). Information on disks can also be found in the [Installation Requirements](../../../pages-for-subheaders/installation-requirements#disks). +The two main bottlenecks to [etcd performance](https://etcd.io/docs/v3.4/op-guide/performance/) are disk and network speed. Etcd should run on dedicated nodes with a fast network setup and with SSDs that have high input/output operations per second (IOPS). For more information regarding etcd performance, see [Slow etcd performance (performance testing and optimization)](https://www.suse.com/support/kb/doc/?id=000020100) and [Tuning etcd for Large Installations](../../../how-to-guides/advanced-user-guides/tune-etcd-for-large-installs.md). Information on disks can also be found in the [Installation Requirements](../../../pages-for-subheaders/installation-requirements.md#disks). It's best to run etcd on exactly three nodes, as adding more nodes will reduce operation speed. This may be counter-intuitive to common scaling approaches, but it's due to etcd's [replication mechanisms](https://etcd.io/docs/v3.5/faq/#what-is-maximum-cluster-size). 
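
As a quick sanity check of whether a given etcd node meets these expectations, you can use etcd's built-in benchmark. This is a rough sketch, assuming `etcdctl` (v3) is installed on an etcd node and configured with the cluster's endpoints and certificates:

```bash
# Runs a short write load against the cluster and reports whether
# latency and throughput fall within etcd's expected bounds.
etcdctl check perf
```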
From 0343d14cc6a1a118fe97efeea68f723e83964c2b Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Tue, 31 Oct 2023 11:30:29 -0700 Subject: [PATCH 07/65] Update header: More from SUSE --- docusaurus.config.js | 73 ++++++++++++++++++++++++++++++++++++-------- 1 file changed, 60 insertions(+), 13 deletions(-) diff --git a/docusaurus.config.js b/docusaurus.config.js index 0cba49a74d78..b5e13f6f8cbf 100644 --- a/docusaurus.config.js +++ b/docusaurus.config.js @@ -88,26 +88,73 @@ module.exports = { }, items: [ { - type: "localeDropdown", - position: "right", + type: 'docsVersionDropdown', + position: 'left', + dropdownItemsAfter: [{to: '/versions', label: 'All versions'}], + dropdownActiveClassDisabled: false, }, { - href: 'https://github.com/rancher/rancher-docs', - label: 'GitHub', - position: 'right', - className: 'navbar__github', + type: "localeDropdown", + position: "left", }, { - href: 'https://www.rancher.com', - label: 'Rancher Home', - position: 'right', + type: "search", + position: "left", }, { - type: 'docsVersionDropdown', - position: 'left', - dropdownItemsAfter: [{to: '/versions', label: 'All versions'}], - dropdownActiveClassDisabled: false, + type: 'dropdown', + label: 'Quick Links', + position: 'right', + items: [ + { + href: 'https://github.com/rancher/rancher', + label: 'GitHub', + }, + { + href: 'https://github.com/rancher/rancher-docs', + label: 'Docs GitHub', + }, + ] }, + { + type: 'dropdown', + label: 'More from SUSE', + position: 'right', + items: [ + { + href: 'https://www.rancher.com', + label: 'Rancher', + }, + { + type: 'html', + value: '
', + }, + { + href: 'https://elemental.docs.rancher.com/', + label: 'Elemental', + }, + { + href: 'https://epinio.io/', + label: 'Epinio', + }, + { + href: 'https://fleet.rancher.io/', + label: 'Fleet', + }, + { + href: 'https://harvesterhci.io', + label: 'Harvester', + }, + { + type: 'html', + value: '
', + }, + { + href: 'https://opensource.suse.com', + label: 'More Projects...', + }, + ] + } ], }, footer: { From 63604abb7bf6e1212424a56f14a78317706576b7 Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Tue, 31 Oct 2023 11:37:08 -0700 Subject: [PATCH 08/65] Fix quoting (single vs double) consistency --- docusaurus.config.js | 81 ++++++++++++++++++++++---------------------- 1 file changed, 40 insertions(+), 41 deletions(-) diff --git a/docusaurus.config.js b/docusaurus.config.js index b5e13f6f8cbf..a00bfc10b629 100644 --- a/docusaurus.config.js +++ b/docusaurus.config.js @@ -14,14 +14,14 @@ module.exports = { projectName: 'rancher-docs', // Usually your repo name. trailingSlash: false, i18n: { - defaultLocale: "en", - locales: ["en", "zh"], + defaultLocale: 'en', + locales: ['en', 'zh'], localeConfigs: { en: { - label: "English", + label: 'English', }, zh: { - label: "简体中文", + label: '简体中文', }, }, }, @@ -69,8 +69,8 @@ module.exports = { //... other Algolia params }, colorMode: { - // "light" | "dark" - defaultMode: "light", + // 'light' | 'dark' + defaultMode: 'light', // Hides the switch in the navbar // Useful if you want to support a single color mode @@ -80,7 +80,7 @@ module.exports = { additionalLanguages: ['rust'], }, navbar: { - title: "", + title: '', logo: { alt: 'logo', src: 'img/rancher-logo-horiz-color.svg', @@ -94,12 +94,12 @@ module.exports = { dropdownActiveClassDisabled: false, }, { - type: "localeDropdown", - position: "left", + type: 'localeDropdown', + position: 'left', }, { - type: "search", - position: "left", + type: 'search', + position: 'left', }, { type: 'dropdown', @@ -206,7 +206,7 @@ module.exports = { blog: false, // Optional: disable the blog plugin // ... theme: { - customCss: [require.resolve("./src/css/custom.css")], + customCss: [require.resolve('./src/css/custom.css')], }, googleTagManager: { containerId: 'GTM-57KS2MW', @@ -1228,61 +1228,61 @@ module.exports = { from: '/v2.6/explanations/integrations-in-rancher/opa-gatekeeper' }, // Redirects for restructure from PR #234 (end) { - to: "/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.24-k8s-v1.24", - from: "/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.23-k8s-v1.24" + to: '/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.24-k8s-v1.24', + from: '/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.23-k8s-v1.24' }, { - to: "/v2.7/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.24-k8s-v1.24", - from: "/2.7/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.23-k8s-v1.24" + to: '/v2.7/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.24-k8s-v1.24', + from: '/2.7/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.23-k8s-v1.24' }, { - to: "/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27", - from: "/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.23-k8s-v1.25" + to: '/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27', + 
from: '/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.23-k8s-v1.25' }, { - to: "/v2.7/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27", - from: "/v2.7/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.23-k8s-v1.25" + to: '/v2.7/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27', + from: '/v2.7/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.23-k8s-v1.25' }, { - to: "/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.24-k8s-v1.24", - from: "/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.23-k8s-v1.24" + to: '/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.24-k8s-v1.24', + from: '/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.23-k8s-v1.24' }, { - to: "/v2.7/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.24-k8s-v1.24", - from: "/v2.7/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.23-k8s-v1.24" + to: '/v2.7/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.24-k8s-v1.24', + from: '/v2.7/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.23-k8s-v1.24' }, { - to: "/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27", - from: "/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.23-k8s-v1.25" + to: '/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27', + from: '/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.23-k8s-v1.25' }, { - to: "/v2.7/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27", - from: "/v2.7/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.23-k8s-v1.25" + to: '/v2.7/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27', + from: '/v2.7/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.23-k8s-v1.25' }, { - to: "/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.24-k8s-v1.24", - from: "/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.24" + to: '/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.24-k8s-v1.24', + from: '/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.24' }, { - to: 
"/v2.7/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.24-k8s-v1.24", - from: "/v2.7/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.24" + to: '/v2.7/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.24-k8s-v1.24', + from: '/v2.7/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.24' }, { - to: "/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27", - from: "/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.25" + to: '/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27', + from: '/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.25' }, { - to: "/v2.7/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27", - from: "/v2.7/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.25" + to: '/v2.7/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27', + from: '/v2.7/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.25' }, { - to: "/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale", - from: "/reference-guides/best-practices/rancher-server/tips-for-scaling-rancher" + to: '/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale', + from: '/reference-guides/best-practices/rancher-server/tips-for-scaling-rancher' }, { - to: "/v2.7/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale", - from: "/v2.7/reference-guides/best-practices/rancher-server/tips-for-scaling-rancher" + to: '/v2.7/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale', + from: '/v2.7/reference-guides/best-practices/rancher-server/tips-for-scaling-rancher' } ], }, @@ -1301,6 +1301,5 @@ module.exports = { type:'text/javascript', async: true }, - // "/scripts/optanonwrapper.js" ], }; From b7626ba379121b266aabea9a1c34d9887826a05b Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Tue, 31 Oct 2023 19:06:48 -0700 Subject: [PATCH 09/65] Fix link rendering issue --- .../resources/choose-a-rancher-version.md | 3 +-- .../resources/choose-a-rancher-version.md | 1 - .../resources/choose-a-rancher-version.md | 1 - .../resources/choose-a-rancher-version.md | 1 - .../resources/choose-a-rancher-version.md | 1 - .../resources/choose-a-rancher-version.md | 1 - 6 files changed, 1 insertion(+), 7 deletions(-) diff --git a/docs/getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md b/docs/getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md index b2f0e8515e86..7e8cc826dacc 100644 --- a/docs/getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md +++ b/docs/getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md @@ -29,7 +29,6 @@ Rancher 
provides several different Helm chart repositories to choose from. We al | rancher-stable | `helm repo add rancher-stable https://releases.rancher.com/server-charts/stable` | Adds a repository of Helm charts for older, stable versions of Rancher. We recommend using this repo for production environments. | | rancher-alpha | `helm repo add rancher-alpha https://releases.rancher.com/server-charts/alpha` | Adds a repository of Helm charts for alpha versions of Rancher for previewing upcoming releases. These releases are discouraged in production environments. Upgrades _to_ or _from_ charts in the rancher-alpha repository to any other chart, regardless or repository, aren't supported. | -
Instructions on when to select these repos are available below in [Switching to a Different Helm Chart Repository](#switching-to-a-different-helm-chart-repository). :::note @@ -40,7 +39,7 @@ All charts in the `rancher-stable` repository will correspond with any Rancher v ### Helm Chart Versions -Rancher Helm chart versions match the Rancher version (i.e `appVersion`). Once you've added the repo you can search it to show available versions with the following command:
+Rancher Helm chart versions match the Rancher version (i.e., `appVersion`). Once you've added the repo, you can search it to show available versions with the following command: `helm search repo --versions`. If you have several repos, you can specify the repo name, e.g., `helm search repo rancher-stable/rancher --versions`.
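Since the rewritten line above folds the search commands into running prose, a complete session may be easier to follow. The repository URL comes from the rancher-stable row of the table earlier in this patch; everything else is standard Helm CLI:

```bash
# Add the stable Rancher chart repository, refresh the local index,
# then list the available chart versions (each maps to a Rancher appVersion).
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm repo update
helm search repo --versions                          # every repo, every version
helm search repo rancher-stable/rancher --versions   # scoped to one repo/chart
```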
diff --git a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md index 463a4ffe1f25..886bca362d8c 100644 --- a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md +++ b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md @@ -29,7 +29,6 @@ Rancher provides several different Helm chart repositories to choose from. We al | rancher-stable | `helm repo add rancher-stable https://releases.rancher.com/server-charts/stable` | Adds a repository of Helm charts for older, stable versions of Rancher. We recommend using this repo for production environments. | | rancher-alpha | `helm repo add rancher-alpha https://releases.rancher.com/server-charts/alpha` | Adds a repository of Helm charts for alpha versions of Rancher for previewing upcoming releases. These releases are discouraged in production environments. Upgrades _to_ or _from_ charts in the rancher-alpha repository to any other chart, regardless or repository, aren't supported. | -
Instructions on when to select these repos are available below in [Switching to a Different Helm Chart Repository](#switching-to-a-different-helm-chart-repository). > **Note:** The introduction of the `rancher-latest` and `rancher-stable` Helm Chart repositories was introduced after Rancher v2.1.0, so the `rancher-stable` repository contains some Rancher versions that were never marked as `rancher/rancher:stable`. The versions of Rancher that were tagged as `rancher/rancher:stable` before v2.1.0 are v2.0.4, v2.0.6, v2.0.8. Post v2.1.0, all charts in the `rancher-stable` repository will correspond with any Rancher version tagged as `stable`. diff --git a/versioned_docs/version-2.5/getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md b/versioned_docs/version-2.5/getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md index a4c53f09fc9c..f4d57f0bf7d4 100644 --- a/versioned_docs/version-2.5/getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md +++ b/versioned_docs/version-2.5/getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md @@ -33,7 +33,6 @@ Rancher provides several different Helm chart repositories to choose from. We al | rancher-stable | `helm repo add rancher-stable https://releases.rancher.com/server-charts/stable` | Adds a repository of Helm charts for older, stable versions of Rancher. We recommend using this repo for production environments. | | rancher-alpha | `helm repo add rancher-alpha https://releases.rancher.com/server-charts/alpha` | Adds a repository of Helm charts for alpha versions of Rancher for previewing upcoming releases. These releases are discouraged in production environments. Upgrades _to_ or _from_ charts in the rancher-alpha repository to any other chart, regardless or repository, aren't supported. | -
Instructions on when to select these repos are available below in [Switching to a Different Helm Chart Repository](#switching-to-a-different-helm-chart-repository). > **Note:** All charts in the `rancher-stable` repository will correspond with any Rancher version tagged as `stable`. diff --git a/versioned_docs/version-2.6/getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md b/versioned_docs/version-2.6/getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md index b2f0e8515e86..d7051def448f 100644 --- a/versioned_docs/version-2.6/getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md +++ b/versioned_docs/version-2.6/getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md @@ -29,7 +29,6 @@ Rancher provides several different Helm chart repositories to choose from. We al | rancher-stable | `helm repo add rancher-stable https://releases.rancher.com/server-charts/stable` | Adds a repository of Helm charts for older, stable versions of Rancher. We recommend using this repo for production environments. | | rancher-alpha | `helm repo add rancher-alpha https://releases.rancher.com/server-charts/alpha` | Adds a repository of Helm charts for alpha versions of Rancher for previewing upcoming releases. These releases are discouraged in production environments. Upgrades _to_ or _from_ charts in the rancher-alpha repository to any other chart, regardless or repository, aren't supported. | -
Instructions on when to select these repos are available below in [Switching to a Different Helm Chart Repository](#switching-to-a-different-helm-chart-repository). :::note diff --git a/versioned_docs/version-2.7/getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md b/versioned_docs/version-2.7/getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md index b2f0e8515e86..d7051def448f 100644 --- a/versioned_docs/version-2.7/getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md +++ b/versioned_docs/version-2.7/getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md @@ -29,7 +29,6 @@ Rancher provides several different Helm chart repositories to choose from. We al | rancher-stable | `helm repo add rancher-stable https://releases.rancher.com/server-charts/stable` | Adds a repository of Helm charts for older, stable versions of Rancher. We recommend using this repo for production environments. | | rancher-alpha | `helm repo add rancher-alpha https://releases.rancher.com/server-charts/alpha` | Adds a repository of Helm charts for alpha versions of Rancher for previewing upcoming releases. These releases are discouraged in production environments. Upgrades _to_ or _from_ charts in the rancher-alpha repository to any other chart, regardless or repository, aren't supported. | -
Instructions on when to select these repos are available below in [Switching to a Different Helm Chart Repository](#switching-to-a-different-helm-chart-repository). :::note diff --git a/versioned_docs/version-2.8/getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md b/versioned_docs/version-2.8/getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md index b2f0e8515e86..d7051def448f 100644 --- a/versioned_docs/version-2.8/getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md +++ b/versioned_docs/version-2.8/getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md @@ -29,7 +29,6 @@ Rancher provides several different Helm chart repositories to choose from. We al | rancher-stable | `helm repo add rancher-stable https://releases.rancher.com/server-charts/stable` | Adds a repository of Helm charts for older, stable versions of Rancher. We recommend using this repo for production environments. | | rancher-alpha | `helm repo add rancher-alpha https://releases.rancher.com/server-charts/alpha` | Adds a repository of Helm charts for alpha versions of Rancher for previewing upcoming releases. These releases are discouraged in production environments. Upgrades _to_ or _from_ charts in the rancher-alpha repository to any other chart, regardless or repository, aren't supported. | -
Instructions on when to select these repos are available below in [Switching to a Different Helm Chart Repository](#switching-to-a-different-helm-chart-repository). :::note From 791c6ad859f54fe32d208f4b10011bc428e3283a Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Wed, 1 Nov 2023 16:21:59 -0700 Subject: [PATCH 10/65] Add 2.7.9 entry to versions table --- src/pages/versions.md | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 deletions(-) diff --git a/src/pages/versions.md b/src/pages/versions.md index 16daa17bb0ce..fde34a42f0f6 100644 --- a/src/pages/versions.md +++ b/src/pages/versions.md @@ -10,10 +10,10 @@ Below are the documentation and release notes for the currently released version - + - - + +
-      v2.7.8
+      v2.7.9
       Documentation
-      Release Notes
-      Support Matrix
+      Release Notes
+      Support Matrix
@@ -33,6 +33,12 @@ Below are the documentation and release notes for previous versions of Rancher 2.7.x:
 
+      v2.7.8
+      Documentation
+      Release Notes
+      Support Matrix
 

From 687d996d7e5c0ed4a2ee8d3403e074dcd26a75b1 Mon Sep 17 00:00:00 2001
From: Mike Latimer
Date: Wed, 1 Nov 2023 18:52:55 -0600
Subject: [PATCH 11/65] Add initial Opni landing page

Signed-off-by: Mike Latimer
---
 .../integrations-in-rancher/opni/opni.md      | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/versioned_docs/version-2.8/integrations-in-rancher/opni/opni.md b/versioned_docs/version-2.8/integrations-in-rancher/opni/opni.md
index c3eb52e6c331..96ec3eb73364 100644
--- a/versioned_docs/version-2.8/integrations-in-rancher/opni/opni.md
+++ b/versioned_docs/version-2.8/integrations-in-rancher/opni/opni.md
@@ -2,6 +2,22 @@
 title: Observability with Opni
 ---
 
+
+
+
+
+Opni is a multi-cluster and multi-tenant observability platform. Purpose-built on Kubernetes, Opni simplifies the process of creating and managing backends, agents, and data related to logging, monitoring, and tracing. With built-in AIOps, Opni allows users to swiftly detect anomalous activities in their data.
+
+Opni components work together to provide a comprehensive observability platform. Key components include:
+
+- Observability Backends: Opni Logging enhances OpenSearch for easy searching, visualization, and analysis of logs, traces, and Kubernetes events. Opni Monitoring extends Cortex for multi-cluster, long-term storage of Prometheus metrics.
+- Observability Agents: An agent is software that collects observability data (logs, metrics, traces, and events) from its host and sends it to an observability backend. The Opni agent enables collection of logs, Kubernetes events, OpenTelemetry traces, and Prometheus metrics.
+- AIOps: Applies AI and machine learning to IT and observability data. Opni AIOps features include log anomaly detection using pretrained models for the Kubernetes control plane, Rancher, and Longhorn.
+- Alerting and SLOs: Triggers and reliability targets for services let you use Opni data to make informed decisions about software operations.
+
 ## Opni with Rancher
+
+Opni's Helm charts are currently maintained in a charts-specific branch of the Opni GitHub project. Once this branch is added as a repository in Rancher, the Opni installation can be performed through the Rancher UI. Efforts are now underway to streamline this process by including these charts directly within Rancher itself and offering Opni as a fully integrated Rancher App.
+
+Opni's log anomaly detection process includes purpose-built, pre-trained models for RKE2, K3s, Longhorn, and Rancher agent logs. This advanced modeling ensures first-class support for log anomaly detection across the core suite of Rancher products.
-## Opni with Rancher Prime
 
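The landing page added above says the Opni charts live in a charts-specific branch of the Opni GitHub project and are installed through the Rancher UI once that branch is registered as a repository. For readers working outside the UI, a minimal Helm-CLI sketch follows; the repository URL and chart name are placeholders, not values taken from the patch:

```bash
# Placeholder URL and chart name: substitute the charts-specific branch
# of the Opni GitHub project that the landing page refers to.
helm repo add opni https://example.com/opni-charts
helm repo update
helm install opni opni/opni --namespace opni --create-namespace
```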
From b4d3c6061250eef39ff92841cb967a337fc7293a Mon Sep 17 00:00:00 2001
From: martyav
Date: Thu, 2 Nov 2023 12:33:44 -0400
Subject: [PATCH 12/65] added color icons to /static/img

---
 static/img/header/icon-epinio.png          | Bin 0 -> 22813 bytes
 static/img/header/icon-fleet.png           | Bin 0 -> 40324 bytes
 static/img/header/icon-harvester.png       | Bin 0 -> 6994 bytes
 static/img/header/icon-kubewarden.png      | Bin 0 -> 9342 bytes
 static/img/header/icon-longhorn.png        | Bin 0 -> 26769 bytes
 static/img/header/icon-opni.png            | Bin 0 -> 39635 bytes
 static/img/header/icon-rancher-desktop.png | Bin 0 -> 8307 bytes
 static/img/header/icon-rancher.png         | Bin 0 -> 1329 bytes
 static/img/header/icon-suse.png            | Bin 0 -> 14008 bytes
 9 files changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 static/img/header/icon-epinio.png
 create mode 100644 static/img/header/icon-fleet.png
 create mode 100644 static/img/header/icon-harvester.png
 create mode 100644 static/img/header/icon-kubewarden.png
 create mode 100644 static/img/header/icon-longhorn.png
 create mode 100644 static/img/header/icon-opni.png
 create mode 100644 static/img/header/icon-rancher-desktop.png
 create mode 100644 static/img/header/icon-rancher.png
 create mode 100644 static/img/header/icon-suse.png

diff --git a/static/img/header/icon-epinio.png b/static/img/header/icon-epinio.png
new file mode 100644
index 0000000000000000000000000000000000000000..40c5168d1721e291035a4b1fe265c8d96fb1ac19
GIT binary patch
literal 22813
[base85-encoded binary PNG data omitted]
zatUulkPGp4ZL5G%gELb(c`WMzIf?)|Mp&_LvT!hZrfgzolU%BATxpgo#TH<#bxL{+ zIk=M>`o}bQ?P=61-0fRd4XhifK(gk-^eE&Eo zG{JAJ-0a*9pQ}4~$FnUBW`P;N2XN}|c?9`ALto`8{>-IC)}v8V64>NAd@-Qh{4+C~ zvqi!dA;>*Y>-aQ#rOsRQc!cPaS?IIKNo33(jQ#O5dN+JX^tR1Gyz=DC zBa3`Iqq)$DW{)|`ZU$|d1hgcjG3WHkT^0=z>>GTmzB)FNix6RGD2Ya6zQRb8AQdce zV%T5BT1-z~%KH^57^SR2d_!IP zs;a-oVSg$Qr|iIEuurq!Sj5Tx(L2gTV~C`eawaXS6IXeMucHEq0Wyj@k3GE8bA$Ws zWs978I-;G{nJRD59@r>hS5T7?E!aX=Pf<}yXrcFO3EjqM$F>p$Oy#9Zt6%gy7w{-+gnkl1e?^S6QP*#Thr}Q1zGUt=&nWTVaVAploPp3i%=(q6B;D zW|_boNIyYmf>Mlq@a#q3l0_ibmo0jA3k_ugO52w|f{(ry&}@@Pd6Jh2PEy`Z2M(Bq z$wun2?}jr>*mox3tb`wvz(i&LWIeIOPd^%{fyh-$cK3Z3M6tH;8AHdRETBS;DSrzZ zR_xnFU)Qr!o2B!Wa*|Kum*1ZM$kQE#rc!w=$pbz&0g)RKXea>Yc$ zC{K)`tK_cX>}I`?Lr)pN0NU$cy~M(A>04TUTfgFWK&VMwA!XuiC5`nG4}ZyzL~P0F%!VJH24cv zg{z_CWsUtfLXESRhz*jfE~lI(Bh@)mfC&X%JhrBNI(`8S8o7r}f@ZpvXmWRX1Y)gy z%hK?2hP8z5f0lUjGZZX_t|b*pi-@!Bc(fFEWCUmzDPg3}X9RI%1BE>ft{T14`5|8; z;bBu5-TFxt(&*r@%ux2-r8G}F?%zQ#qPTrBd#`7|z=L{Qs+$=h6}i8*hVaCXBOu>* z2gC1kjjN~@9LUx>5*6iBQ7}1okxK|NRwpCT5Se69Zrx4`7ns`HOoDti$KF?15@l#z zfmBvQ=sG?UGfx!^oD$34kY934$3A*$$$*^9abpvA2#dA4 zu>pV(Rib9K&^D-3f)Z(@ZH0#C`f{cb(=S#>BR~GGXD-38VppKEI_EKPDxywj1w#v< zXp9OCIz)k6AMJeKM~3^J(tzu1{{+NK>*rnHb6G|>P#!6{<(gm)3myR-*VXRjKhW|X zmAewI)bx|^;?j2~LxV@oZkG0QBE*H}JvlAv_!(?4qoQrxm~d1xa4;roE1zJBJYEsZ zhm0#Kz1;ou5*xmMj}{&3NDO#3sgQp2UuIDN6{oD2k~LvZi(K|atr@rEuRRw z`46wPWJ~{)EV8Z<;3+q37FU}rVu6(qAVD`gcr64FxRYW;W}dZ_CK#9iD!Xdcs9b51 zNX9PiuA4f?T*|zp<-uXZ$E4C`9m^D5}NPj&k3$!xc^+__vUip>F9!q8aMf9iXC^P8A z^#!pz^rx6|Z_t5q+>`;D?gOSyS6iGWN@D5sDi+9vak-&~OMUXcHGFt=pZ;r70#q_o zUIkDFsw%?MTF5PN`9b zl!oNN*1y0avkPubmn6lBly`VHY4lGC6j9Ao=7A-V=Qm0saE%bl4d(9+SE!OTstTS_ zw)m>fZb{&NWHud!o0;yLXn*Q2w(i2A(RQJ zq9B~uHi(xcddEjfF8ykZ2#r7;wi_J*R=c_X+7)RfDk9wXm=e7GH6ILf{AF4!EENdU zGWXDy)Lwj{gRw8~$9bR1tamgX6nHCvAx^XYDwZ5AInc@s&wf;yI>3n}NRR5&l zPc|(J-g+(!82`~(3d!GueskceWmml;;w9h zFQ=&7i@^r4gGoyTAC`S999Ak7xb07;_yC3Xp5uPk(N#<%H&FYSC8ov1BQ+ZB2m2%Y z()J=Sz!C~Nf^B<}Lo#C+m27Ex3_^QjT%MD+^6)tVapB>`mF1yMtHEb)lDUW(AofC` z;UtVfOur1DVS)+HempPfaz?UkY+TRA+XYhNHQpYRx>y_8=I|pol@rFGxM|Ro zYJO0>w#^Kg^!?p&SavqxcjJiHc=o5x#Y5f?)AyrUd>_O*&6GK12ALjxQ@f2BN^yvgpaW}R+VO@O~dS4;&^|9)7_BjVeSN-d?b zyqjjsuPMO59r4tDRIRel|Y7Nh1^5dCVs3ST1vPLc4|nhjo*PC#0jA02vI+N0m;&!(At28_P9-2HGA{l3yLKXA*e+P$5QBXFG zH2!7$@l>uVsr8DuTv0pyLa7v5NZB@fBP?%l>6QP4h{`Tm_VoYA2sZM{^tou z9{xWr6axD%8)7}|S9$(Ix_XqQzXDt7@u14~21)oTufq~O;_$dqe-rKov++q^F+|^R znDkoU68`OxwS$qUO4URle`P>n_TFy(oKQm*dsrr%1Cy;Y6{81bV)}#LHv=t^zcxS9 zQkmyPoC+i84(5g`O$4x`Xg>72OXY{dV0K1rzWJYYLcgqw`|C*GGdMyV}@Mf&0|83ds@lZ2n$${FY-kKkrqw z`)s<~xj)Z%DfDBiW=?o7K3gtV#JrmPWl)uWv+TG0mW5~jozj5Z9-UbO$&pN@muN6X$Y za$QPjy&CkfxntyP%HxlYv zKT}h?{v{$?oE<&ntK3uG70jjnW`u%Ly_MLPFVnr&XSwfQh%yX9yWioA z$)w$VaPv+CNbU)^_kPIxw=Dspg$SiWyGgi=am|Y{&V3LuWQE zW1`YAT#Z1C#Enq}ZLVLJHh@0Y-?CSl zdX3ff4aT5GGcv2mL=k7VtM)ttbk`qoW9j4<8W&sq|9m9`RNW(zl`(8nQv9lNPd!b% zjvLEVaMIB4rpb?Ap&E!f82=7u{C1dgP2049 zUmG#T>ve<*ezL@|R_=8E;#JlFz`aee7%l8GrQXuHQO+9VXLqS{gA^Vy|V%BtK{4(Kh2Od7;W0@yumo#q)|;1l13$+1wpNMFq*jfnx`yaj8Ie)&AU+>EJ@CrLDCp&xm4VcsM34ggD zk_XWm@=WE!VK%HOxq=y!F}Gs$v+Dmi@z=S&njatL-*D>;(FJ7sgn(bRqk+YY{Ov?j zUsFpaGgmLs$Ib}H-Yek@Zwxr*5$lwc2Pd6GW?qnnQ#vJFqg2~Vr{b}@Q@+UQkzwle z=ZSN)+Z?=Q`$S_GiSpkaryW8r9!_&FTFEfBzVD3VlSi@_)Q92CTJ(W9s5N&x#u4fN+u#3@-;2k4H zG;<(j%Crf0)HC3^5>Ahl@-jAc@Xe{pDbQS<7bnnmpyO&K!Ku|kpx7M~mGJ#YQeRxj zqrO8W4W0GAnys1|KyzhA zCjfu=g#V1sRl%5?|8)1A@O zWIuIl!_CD9Z$q=7MSfGNNCefz_nVzgqDoIR(<_Zv_7MLJupZf9gHK$(Ro_R8X8}Tp zx`|5PfRqcJwZskJdqZ2yYQYrTiE8?x!W~W4z?QN-<)54-DiyqAnwt9@#VWd4^K-#QPh};0Xq3Px~fc zmDTf2N1NTRu`W;q|BCN{ z899?b)_cQ1`V9*2--OT~m}w1&CE11I(C={81guiZsvZ;;c 
z;98ou$LjhVUK@aaa7!6|etyM6WxtsK>Uzn3oPy=*kuiO*nv%=ev#Uq8-@WdZfB0}} z)x#iyC=o98A#xV4Jq~aA)Ds zFl5D$A~NBg&a^3rv|`Z3qV{qhU5^4)y(yGEsn>0o+xnmoA2z~aj7jzZ*5&O z{C2WQ__khO)xPcNOr;*(#s1kmBj;y0p`8&O!UCP)chXXNVsQLdsIXg8QC|(wV&J>e z*i&S?flF4GwUDOPq7a9P|BQ)+by6cxTUj3m?$OiDDhnWJvHW?evcZMc(T8Pn0bFJ= z0&ClkiL1kLpG;yV{7*A?(1F&)_9riuPpCY}o_SP#I34=d+#hA2#zrr~I?8p4O6vI4 z9$q%ZC`ve;uMTk`zrhOdi`~1aMQN0yCexb zSv2c_!dFD++*rypn6H}t-w=A%`#ecY*CVB&ov(&4C&dek1L@=44`{(#KDYLkyKxcS z%XnhoxJYstOfE|JURepT7?5{;#rbnr(Ox{EwF^gj>(J3jhtKtxWdCudXHEzHp@p4K zf;W`%1lYUixl{hxs@|EQcPiwyIk91keR(- zv3U2w>?Nqcq`KV^&$r&f(xMS!{5ogwpQt+gh`PGUv+Ln)AFTH( zbv)O6s44&-vWHwrd6d?O%^EFCiWx-#4zU!Ie5}si-rJgZ)4gN zSMA<+XXQ`E-b@_4OcEyd$JPwIrOl&XWSP61Z}+DiOg`)vQhg~ZG~e_neZ&M! zz+RyGyC1GKL~9mu*`4(h7_-T4AruD;j*;Vb%TUu80CSK^F$6@*q!9Ev7hj}=%+4@g z=1@b+3ih|h%Z_J?03wRZ!@F$*M~xZ?dkHBHdE4DUUDwL9IfYK%s+g=5W&W%UeQ zURNS1Ic%3p{_~Bufa9nmoKu#qjwo+q;R%pbT1`J(a7&x9Z8#R|{s?@e&|*QVR(Q~! z|A+&Rma%WNr7K2GbdbFWd}X11$b@_9)Hl$7Rkni@nS9!G2MT}%O*cjBhk4OkQiHer zhTTX)bT_XGz%8*lkUoB#H2A3yOZ8R*&rb07Nke5o^rtT~WCZ84G!^t_vChdB8d#3X z`&9ii44&x*8!a*w^2<;GCjY4|M?r?B^meI?H&NqOiDTITc`4f#WvPb?(rn#CE_O8_1evNmAfBC<1+Mx*>idm-m5>`QcYA^ z;-bI4IQ;t;!Nr%6RxPvnHNyVCJ05U?0FhaT>jii3`eZaJWKWc{Ln*Pvx95DUC{xK( zA&;q^4w0S!k`Ru2lPuj0cR3i9cF&|2`1pG4~Pm*4Z1+$CS|YtzGH3kpWjSje%kVtLddP zo{A0iW*Ip>5mr&OLy-_~rfU%uPHl}7J?dM|jlSLz;gX-Qt2457;vkRMp~thOdho>A z_QE)^^GYo-5 zk-}6^?`@5nlDbEZSjL#|nkMucMhCVRFTYnxCzb3-!FwB_7V6{u)%ZCOuRAPaU`S0?tSQPowVeJ`ga`#^N`QpvhPf)B44M1G9zy zZ@vzC81rWVe%4@oyV{jrwqw*t8u?Sar%w>`_tFKL3_a#1{R5B@!A85-4r=JU)_6tz z6A=rNkyO9droGvR8lb5q?lDo@@l`lPtNDrQX47C%t#i~hjgcJL2E7m5pWgdp+O&`) z_UGZhYeQyw&d+7f7}#oC@?_4`ebh@QAlzTF%2T#biDM{ZUonMfvdEgQ;&Zt;6#G5t zDf^+GbKOy#_J4th*V@CNQUYrqbjOGXrWO8n@XEeu>Va*66JKp+&7vxr`yIrX+FZ`_5^WzC$#`MKNNmRw3Kh8s?VCbtr zdkUw?J*0s4i>qvp4IWcu=3aWUSxPt*AzMu}lylomNs%R|f#zNNe)hxBXTE@>9flB>v7DAWTau2dG$0!<%Wb^Ph`A6941b*}OGgUU9jP%=`!Vr{}U_Q96<)X=0oUErfpT zF%d~`^HB>Isq(N1IH!NlaZU@mR8yTy!qB&r_fx$`JWA149|nMUU7snW6gD@ru2@~d zqNnrg&-&)gzpbwUNyjzc@EbRFh?OcHPD0n3eKAKQ$vAbaG!0s%#1RBdd~B7F2t;5&=MZli{t+rQvO0H^SY}To3@_x zC$3or+X}JU#=NdER!=_a9E~Vps`r5*+l4kuA2NWVQ)Znu$0sk>`;tV_KG`VUk$6`$RRfDyy_+CL|Q0RC9!ct)G=tZUKKBt2gfr}xb zh7QMQ$M`kSSazdUJN&;d8y5q7+Re+h)pBC0RA?(hOUSOEc<>=xTYef5roGo@szxO5u14$NAGwKUO5#1Jg$c--IQwl;m^tb`6;F3Z9B3Jk*H8G@a93A89O8}M zO#uBJxtqWy$wFj6%*^ziGt|+5K{$>ef2!*P3W|hM$x&wZds%P#HODXa8KB#dQ?K)a ziz1V#U9<#USdcTHK$}2L?Ingj>K9@B2MfJkukh z!%I+py$XTalbW`tQev4#&$eEZPrVpbHXo_eD`Oo+{;qaZrhkMkdsF=O=|#6p-tmi` z{N*eYt=rQdh!km=w@%G6$F*->qNGj4-8xmf__7k!Kk)#L!If0;x=rmQoTyg4aYh);1u~rYnC`7Vier#7m5CF%5&+MuIsi8KOBz6qcE z&DgW9@YM&FzU>{X@Qd2TeHOvQ5t2G}kIe#l9hN(PiFxIrQ;uPGJ=Eka+7K9-2$eQx z+v&M^X*)wzx#FA(GegB!NTbN9JRh1WKC3@Aqvv$^oHsx$BemalI;o#TAnKGAeo1?F z9y%<)=v4qnuig4xlh};T2tj_k!^h|i+ONpoM%RV9c&}RIB{~kKeamL9wJn*~y3QiN z;1yngM#_HseXI9lJ604T_jgSs@>fK;)#t!2B@*c~e3u4trxnFUum<$1{@kI#U@?-q z+HwMt?6QIiQP{%BQ{=>vjJ2M@?QoE{5pN9J~@kF7Q4z^7qU^E@BReU{2oVY+Tah(OSt7E%)+~|8 z&LCso*K9@heHUdNlr5s{`#yem$ou~0-O)nqYy9ifcu_D7u1raKI?= z_7T0GM%Lh-_?WiD9AR>wHsemFSLIuqvS0XQ?`^uW`n73^aTKvx(;aI5slwinbhk;B|a{w8d`hONaXz(RC%IX z$?3Ake9wAm&ua^cBLuLVZhK&o%!}{%#5ZlF3gxtvtoI z&emsv4Fr0}r{b4o^-C-1KsAKyx{F^cMYT*$E7 zG3xW$eC=MEZX<>B13=w}&4S!hc9bC8Vyj(PGFK~$A$ocyV*m_Xt`))bh~IF_A9|W+ zwEBt6`sfOM#~U4Nm2L!F#5)q%nN_qxWGG}4Cnb8lFgIEa(im-^i1}gzO&!S}=-Mb4 ziX*880T5g055;!86vfgME|iBSB|N67Gndy=k7uMyyA^1&5d+FEJUOZrhI)c9m;Kis zM&8oxuaEp>1kK7Q-p7JG!q1oX6E;i-n|LnYGJ*eZ9W5NfjNNKX=V5Yg_-}n522vZD z7`D7^riHPi2D{yI)a6Rbd}5}XfN>#^_V#$C{-?`>wm&*kCt8Uoehf#`}cI3~qCIM#(_Ry}y)h6%0_Wmlq zxn>7ZwPaaML-0GE*Zc;i*@wG&8P{bA&GMtWf~;487TlVzqB11Djtk;F`NKlYe-AeAh{jE7rVznaOh 
zrwyD^fpR+K{AiY&jS;F!_p|0eRfM!MlI;BNu#EKuIMeT7s1UlK-@5TT_jp~3Xrq_n zGQcwId#$30$@3GGJCV^zB>vmT9%Q$s*yEroLvsV@pE_`a}0+Yy(aj>_i9 z)V!5?8-E#u@fp0L-bp@Wh)oaOwfeT?Bxk9``;7lp?*D4|hK?ttBv=Pa-}nt6Xe;`N z8ex|ONo|LAdtieCX~lkIsu!0&6L8#3D4_sx`Hrf(+xM2?{oEb@Q4GGe)2q)W-q`7< zMBxeR>onWkX}v4w{oToYGe8T}7FToty$=cN3SzpO&%9Rsk>ZNc zZU&o(H=nckMGy_!6dU|R)Fbix_u_J@OnmZ~i#?esA&Qy`jj)UGk6W)^9jzz~8}?Z! zbI`t>9dWqAz1tO5i^NsDa$`7ZM1QCqVGc|D;_>lQncT7RDI+}7b&p@464s1T6Y48RSoz4=)%#jJodu!iqbDS zL>B5flvcr^`Efbb6jjx{axoVS>-K5zVFN5S6A+SSZb>Rbq_29hld+D#u{`3q)zcF* zX@ucc{xf&%B}M++EWFHg1L2anoWb6Hk<)0gQzB}3W5>M9jH@RhTd%%8?s&;lFqv_ld7V(+MWBQ`aH432AsQ%OX^lbd6$G_Y?NEH=Ev+8NJ3Y)mTgjx&rsG&)G^5@!6mh+(&YiE7j!~ z{qw@=Lf`NC`cEf?G)b(mW?Ro?^(*zF^SyZ^8~a_SqH7cKGX3q7ot zUMq-Jq|3f(#`Xo*@DG*n{@z5}YTbi3lGI2gS)rk1Tz3t&1`K7!m2MK3j@u*DN%h4> zT55(zsV9qsTMYhNtvgF(jhN|m6_wf64SQpX!BzHergva%3KnfdY!1uHMTfD-Y`V^x zl}7bN+|ibnyXw3}!aM~rA+s97ltiz~sqe)F@m|*CPjzMU9>ZZ%OHtz_WBx@9f2<6eW zKM`Q=>rBJ@k^gYawa;M8$p-O{kw+P*Zhp=EQ4(zJaW9juL$+ z!}iNwApxk6;h3^FNEzORL65e+jgXBzE!59`nlD8@JYt|&c?!6Ab05;XSkt;wB5CB% zZb5({G(3xGr-?a78T!cMCHuwRF$Y&<_$HGR!0?eKU?E?pQR*d?x2cJ(XuEh)SuP?i z+BO1LFtm*fPdf~o)K9a-ATy4dTul$tjH^b3eCXidHAVrLQw5#4=-Jn7IO*Jw7hITU zs_K$n%CXeunv38ZDsrF)#W{NggffQ_Y8T8AiP8dEy0`PC>es{3+`}Q9;{Tyjo^0VY z=+0azR^5K%onY0<+0!;p&hc7bcLDNzYnK3eIu7u(hFds~L@36y+>|nShJx-ZS}uT2 zfYZTCtg!08zY)L@4#v}~x?M*5KK2nGY)zV_`rUHmeNlINK7ro<&@c4-I6EH+RBgu4 z1ct^VI)!m@#43F7(SRgVJeL^LV#YAib2v>(I$jIOtHCo}pPWKQMLW_6O`bo*KND!h0Ydu;T`( z?8}bUQ%7Cm+f@uHs96>D=z-)rpx!{wp+3TQp#ZP1&c*)+u%XRv&K*A>#X-n_8@aW_ z#t~3#{Qq;7K|vDw2XF4Wpucu)u6R*X?HFBfd{I96hP$(szxd-n(DqZk^wSa0<2P?P zTX_r)50=g%g8~C8<@$+6hT-(=Eb~*#zW=Ve8YjeZ!G`|j1YoGfc)W}`v0wwzV9VF? zDQQhrse({Ga%a6b(71PoKEeCF`hCf}$X`>xFo?K?BKi5#xqm_^QicGP>t%iE^R?l{ zYxw=}3SD&YwnhE^-1UbT#C_$?pV01S6#RE=c;+ z^r7X3#&ZXuzlp=1{Cf_)V8I>WHpH_K$Pcql_W`%CoT7%|{&lc6&y%S?=O`vXNjxi1 zqV<-up9xS59X!$Scgrm7aLXBxdvV%!RsiOK!`8m+;C*xR;KZJxGTLSnEw zqtv9y*}{g^(I(A2)<^_V`)xw@vmMq2umc}sheU|?2Fs86rlh`m&ZuUovi7~3idr5-PEnBNF&mHi0CLfDFAVqQ z#Y3A}g&GqA4tI1$0ZQW?F}&eYEG>M}d36u2i-?|XssGOvF!B&4!$btX=EyKB{0BHq z~vg_h2HEb16i9`*ZTFjaXN@)sCx;MeuxD-J}>i+#w#&tB1bDR zI39O@eK~L!c0{73dXK^ser_OzdonHNZsAaulPJ`cyT6j7inJZ zKQC~0{tZL(cx*oL?;0fLvJX~kqeFgO&JAG^!gHJlfn|SW)QCb;@7~{j^-74lmBd^{= zH-;goE3mwrr2X7BDY$rSsC+*m{{5D+`F}`3>5@CwD?MmXi<;_@_h+C)of{_RRef`D zXUfXQOvgwO8PZsO?*EQK5LAT<-Yq4(e)J7?Q>lT$&TXgX=~;2bR56`h^5CR_D!%T~ zS3@oD!>H-Auj=>*CTDye8p1C@_yXUy=hH8tJ^D7LBFc{W4QB3D&oEn z<%CFQcKbg9OVAYy*BIS^pZX%Mf(R?i56wd07B)K3-n8R*rbS_hXL-)0rFrhxPyO?W z7ghrU{I$eMt<=Ph6vI!AqlXPw-=7_e=Rzluqx{A5?zaXECcXS=xU&>~t zmW_(PzJE>pm->|YMU2_GP1V@f!kqkWkbEY!iE4BCc;cSoP7uS^Nl(6Scj%@pJwZ0UQL$zgSY_1%PZUUv>UkGIxA2AJF{{)CF3KYASpfsRGYU=mHLl)sl$J< z^r#HcEne@|)hvdE34WKmihE|b1qPg*<=$V+#$gsZ-9$Av5wk(Oj6RPIxevc?8Z0v= zSUsn6tP3pcGc}+#7%@ADm25?3&g}K`@4f@v_6D8gwse~D(j?{(%Vk!Y}3w-pw zF11FaEMN1VjMlrOT`cq5Zdoz&ZrrhRF734Ddo48PZlHmx6xHSMUFqrEvtmA%we$;D zFt^XczTOFsOrM Qg@Dg}dDXigWS{u{54zvCga7~l literal 0 HcmV?d00001 diff --git a/static/img/header/icon-harvester.png b/static/img/header/icon-harvester.png new file mode 100644 index 0000000000000000000000000000000000000000..f68687062806655abb28da7929796d257267752f GIT binary patch literal 6994 zcmeHKc|25o+aE&6o|HCYED^I0W+r2&QAoCMN5*UhW5$>mTckwDQjt(ZBxFrQLY7oY zSrUapN>oHqqW4gDclYOgKJWY7pU?CB*IDMAb1mQBb$x%=^_z2!TU+fE7u_fd0)fQM zi6%C{H5@oSg@u4$?h7Y4aA^&)bL85P0>Eqzi^lM#g1LcgDwxV+&_EzwKhxS!q6sWk z`)XY)h5W&<%@E~Th-{QjL5@MF z3~Y4hyhPi2$|k4E^!kx2AP`?9!`Rr`+}QZfjDS2cLX!x@CTrQqJ+_rL+a=^=ecP*4KweQ2r~T-Y2y;%oXne`8VXIm;^gTFR_K7OLXS%Z z{qx+S6Tx%P-M~QkJn0x?ZOPg_%zmL)p~Gq#ceYRCrt{5LD<2xI8J^zsD*GOPM_k(3 
z+0DME9W}3&b+#5!JeyB7BkjA?53~gpaR!Hrk_(FhtqwI2T@SbpiM8N)vHj!Fuv`g?u#>J{j54LHfKAoaK8(9t+1FQz>B^U@*V-heSgV2F z(`PMkS?}e9OM?8o-i^%iwLL7d(P?N@MA%%M)XEGFsOoq-bbjpXA&_JVCycf52?>-Q zj{%gJqooC&%wobw6qY*`#$&R9+5>@f^?7U(*^9~ryHn{59|GikZ9N3cpb#MYv@8*p zY-6eigBZl2+6GzKk%PR*I0{5xPgIwO2LPBPta2u*0>mY|rHT9?Z zaFu?Appbw1vk!8-SJI)7;Z$!b6EO7yMn(Q@$erev);~R#DWEf$>=iG7?7vxZ8MMF1 z`de(vnw4~Z4FquiiTgL}KXYF(2COVC@g^+t!R7GGO$d#L#8U;<& zf>P01+E5G%frjERSQM0u!%-0!jFt`!L;eNI+{ceg@*z`~p#X3g1HeI`(K<9Tk_@F_ zF-Rx|MohT$O$%MqE0-y*K z8jnQaQD{4)HXeb(Yhj@X9X#R}dlrR33;h4AmzxKy`+drZ3_oD}z!lN=je=deU zI0NGSZ*D*4;=efr82o#Xf5h)^x_;C3j~Mtz#=ooUH(mdTfq!KDySn~kbcz1;IYspW z-hu*vk0no^_$uH-OUQlqP7~0#jQNSP{$?f+CcSg%FjCP0 zb38%7OCeADoO@_X^ogGIX4y=l8TztTg8u1w1Jlh~Ilg%b;~2H9__HRMy_<%RqR|bo z62hYKg*B6Puk;?cqzufYlZ37Zcbs1ct)HoT5Rf3?nFXGAv|8P0XRRv&mTv%qtb~t3 zZh(CFswE=A8n-2b=~O}C{7!Kb-+TyeZ9@=<9JxPO>!v_-T!wv1zwlIJKNd@P%12q# z7bz{g>-q7|Dr2zP5{vNc9wh^3$-~2|joeAcJ02aold5!J+$x;F(R7|t0O`weM9J0} zs}c|QakhYnTh=?5x~Sj3VG7UKgk=BmkKf=Zp+qeRBlg6FwX)fdfZGTHakUV zWq}2D3|g$7J|`=4++aRYl}|6}ooVs9wiJUYTgBtiDh(aVAmd17h}<2S@WVzD(y@v% z;TsgAqD|wJDN=n>0>J)%AHu9B5z{Kh?!)Yn>nbrUMV$Xl)ASH#%-mWKW~Zv>kYNmE zmwoY$9Pg~8jp~NS8bJT3+p2sKdbnpZxKfy*p|07 z^#U&}P7|6DaQ&u#;aOxxt%EJ=k3t`QXVset?!LlY9faf*>L3Ru6rLV zLE%dh>qR?N)gO@WT+ZQ+R6ppAnj9jsc<*wQRte|!e3L5CMQu%oXNbA*E7uOQ2Mh{& z%|DAZN@}R*-|Tlh^8{)$x3E|XYozS+F4#Vyov`Wrx6E`mB^l=i*N&!2C)yra5WW~vXY_IJLBh*;?J+&BdUsaj^(JsYhjjRnJF}=~QC@RT z#}?5?j+#G0s#yg0PgOX+xO7EK_R0wI6YM)w_zTI(XFCp~on++l@ zpHIKM6m09?7zrxc)t42zDOKalOFiKu2F{%~HN@vS+qaLo`mG&HN$$5_fXLX(UT6VU_p(a6k zwf5Ub8%@#-U3Q-uOQkQp7C95#Ehy&K(p#$60+~LE}z~Li_$xE zHuDu*{gFYTy(}*{R&pNiKb}%G$eAm9T^6TVIpGtT{F=3n2Tr7t zlI9-Il$<=T2#Nw9wbyzUveZm09Jmd-w8_hM+o9}F^0|xitUOp+aA8G8%MKW6MNPfBwVd)C>24aQuvOyyXOe^IzA*%bgkJy*S; zRb$D%!roxs!sodo@YrBdysVr_RCIV3~drtOB=82(GEEn9~)z>j=-Dc)B z%f850J09@cJ3*gt?$$|;5S;dOJZg;Q<7mWNE5iH}i~0ngY?%iu)#mm}x!nu&&DQM_ z%%>0s_BX`6tkV$gmKhyQQIK;mG6uDl%X)J59~$GGeDuW>{JKCuPmCk0zy`u@zByyr zKRlz3j+dHub*2^sV*1w6YxwMUD-w~Jv}ayGgI(?@&3(eTp!@tQIorEUA>F*YE$LfJ z{S|TkkxySu?Mv&)v&{_8JU3y--A5S`s1(^$)T6DfO~vN-b#<_+k1^88s%38{?)Kei ziV{d3PVCEbDi(N7YAw-vjErKJI#+Ix44uLGSG9+SPKYxruKB)O6LNjht&>oH$e`|U z?xVg9Z~?)_xNp9*N4Gfad|6c|&U-t1DYjd0RLyVsAU8CiZR zu4teC;=5<@)-Lp5{H=m={>q76m$*~jB3>PdSDmx`Js4?~aUkd5-uv?~C@mz*5HO>O z*L_raCejjgzuc#{Uxnb`*Igx=l7ZiK*0RDu2IMwZT;Y0k^wI46l`6i#O9=@`_tr{h znQuKX>4_NLwExcA?@z?d9Ol{4E$ii*Jq`P@J~HxGSx4};v6Q)|(1=Pf=)7 zd++_s=uf3C6FTabLx=W#b{wkn8Xr(i;3U1yO^%rKv zQtgTcABC`LzNkKUT%z%$T=*7WqM-_?#3Q3;_DS1FyRMLQVLHLldipp?@bcRu0=ADE zw}IN{ioMjS9}pu&;P}4B4~&sQ$@wqOt#NsmUW-E|Zaui;d{4uMCA#2TK($?mh$xrX z{qQ7z|7E;1|`%_ zZV+F|x6WC&2d}BVM3$0? 
z8IL!>HiP&h{*6P?5R@Qg+&iufGfdMLdbf}aZfDZlK6*5D@P)uAvaNT|c%MSj8O8zppoYz#MJMNx;fqEN zB%z49(vfXE-a;t_lBl|dd4(KT-;b6J$^O7=G05E1%B0xHE#iLwk3N=C&-K37^V3E_k7Ozp8I?5I1?isMmjD!5D3Jm zr;9QJuCIW<0vc-I|0(GzF5vPq(BguR8QLH0MI>Nx9vHAsfENagA>*(h5cwnC1j$6n zN7Jx#RTW8L7o7OiJ?-b6-euvzuW~srj&n-$-)(m2{l<9*f?O#q$YiA`HY?_P;`KMtJNd@TS-UmkwsIcJ-NMqQF6qy^9Vx#loUUV}NT9kF*OdTQ zSh-0>zg2rb>KnySu8q8sKBqYdL=lD4)HKo4)cn&Uz>n;Z6cyc8bXxgD?fVdGDj(sWKTV!jxPKYb8KmRT3&qm zrk$DV@u@TIDh(TrQvTxi7}o_om1b9IVyu_p)BS33%J8Hod7u?3gYIOL3x*ElNJEw#mD@RA!x>?UmM|cW0yR zv6^<+C&@Q%gX6FO`0058iL4(P74^I+;tipeYivaEq z#8Uj=Llqx)75)o`CSXkh5d)Tyl#zr=XpwP#()_A)U}d5+7GZ|c{sRJdQsH;?@$o`P zN%{NxOZvkl2}Bnun4+Sh6jWMDT3Q0okRSzk`k=`Yo+N<-h+h~e49SUz^YXzFJi!N; zXh(vtj|x9OFb@86eRwZJ!@uA?Nq?{a@F7J;dr840p;CCf)bAD~A1yxsRF(_i*pzC@2hcbuK1Fdi5@ph^N}h5c@8;jJu85rc)JVG`0#7_5Yhf-_V?5$+@>fkva{V6x7#XeeCqHz++%k`LO` z33C7ifJ@>49IP`ICL^ohBq8G{1DBAI#X3nSDmr2%6y&gQ1sOSMv?Ch+8-y_t2V^DM zxoe@G50|qunCh> zRDeO@vQQbQ0!&)|*P0Kl&tr%rAQKNTVNgl9%%S-~SP+0S0I}$UJOuy_<$x~;O(F*E zLm*lZ2p%f@2N2)`%fE&Vf$ijs_Ccf2J{SNLDh)@#pa^NW1xyYhEsKE4NI>Ng(BJe4 z&Nytq|E7JgdBDoQLavJ=0rLkOihk`VbBy<|r(cgAxWlak1|Mz;1lsAB3na83#`$oa z0M@S|Cs(wm3kIkjf28Z5a@_xr3K#_$m>e9VC?PG21yTV9mytlrq2Usa&eF~b((jFJ1qLfq$g@Z+88k(M9*4b_(MOoPzv;W~q0At`%sss2%lnP@w&T-#d-> z(*Oyrm##Gl1Y!b4nOZ5@2}RC;@R*ODq1LfEx|1hQQMD~g&VxYg5Iq#qg8XqYCqUg& zYrMUg^yF*T2^334TWPGVckP=8w$I;pkLW*Fk(buLxK{a~TK?TwJIBelBQTP_rXgFO zf#x$46ADunj5*TJIYz0Mm#$#4(Q9L7V{3KnQTNvCt?V0$By2PiX|&C-j?HGIa%rSk zYU5n}bh;n(K%M1degZ6=*h3&iLj*CVH5%DGL|JjYl!!_}Jw6_BM-D;wsZ9%}O%r>T zK+0CjFhWHMyC+X3vEf2NcsAi(dzFFn2F0_J;hLyE!L=+6Xw%Dk)!PdcwY?U?61o?3 znnRCJ4AZF_RGwux%5)F~I6JK-N4@{{1nd6UBW!+Gr@;&rFQe8$YG;-fH|{=I zG~n9)4i6T*APO6O_DnTZ=s5B->!SOJv>`3`DVXqAqRd3}fnGh}m#OzrKl z8EGq99(rH~zI0KKsU`NfbkF1ciN5hk5oG70hZy><0VyueNQnU!_Ctma%Bw}JOMG&rJ3};m+j92?V&akrkTVUWL--BG(lzx2j{w7g^uZf5);|!!d=x_n4~B; zZAI9r`qVYgc(H{&Pt-uNFggl>Vve?50du`L7bcjfFGS;1g6X>?9G-k7S>q0OmfWKV z_fHF;u!Lj>Q@aZWNIoPD{Mhl24O?618Hxey5PbguSJ^Fh zj>2vZx8gfp$=%uDTE@+RXL=2jV9^>Mg&`Ku)VXPBsH?F&k<%3hf(ZbR6 z+tG_A9o((2Cg9}xG6p(P+PwWia^M~R&w@1#=aP6%zR+h)6?p}T<+(a>R(n}f&GlZ= zd`sk3!{!%XIs|RrPHes2w~-}+?AVr#mx!vG@1oI8_-BPqDpu5nYk%vpM!&MYZ7bTN z*C$us$t^$XRbkV*k$0qs7j@JzcFuU6^lg!&oPNG?d149#imHs;T~7*1-f@w6Hh)U@ zY^-+v%TI&7C-HLzkwyACIX^SYrUONd$3sCIiMiAj8D?WCU^j-$u)AAbO)STbwv?yc zw-1@#0&mF~57#qqB>tqZzo>shFT4nv)rUX7WvLwR=fN|$T~H4=FK0X+z+%0VgtRf$yI~kEgJgi4*CniJ z4nIk|)K?XSOuAm&{hIe?)6}QR?%Nus`gzIVDrjXDsuWi7EM&XxB1cc6wyd!?#{RB< zf385>yb|->!fA?82FNQl8a)0J7nF5z*7$B;2FSB~^f}xwDpmNUWNdLGO*tFywYE1D zdF?$@Z%f?P6nFED?g^<(n;3kYZB!9ko=XG2j0~VZR%ggfzftVuJE^U#=EEhW-xc^( zUTmYEJh3U4qwDZ-ol-eun1VB3So>^jR6N^AVd9g|9111ZCGQDMW)wFl??wJS9o%XyRDy7d<@Phajcpa6?n9uyb-aUU&@F>lUKWRDea+>razv4qN<#2d~V4xLs z{w#Q&Qd^j4%!&$n zA99LY(WAa6_>@>#`SZ~?UL!k;46K6cH(}5HlWh5IHJemyUuT_ z%ksfASR!hCtom4A{@j=4P0ydY>2T9JnJEFuT(zUG>-BWmSyA3!uryDg*j%KYRj9N0 zih$E`wui|JEe7HT8`0a{E_SaiEQ&fLC+hm-amnvS!DlOw~i_am4cTPwg+rsr4Dfk`3RP zq?%GuNC70n?QLHaD|d76YEY9)_0lm~M~GvjNcH8>dfp#-Z|Zp=v19qszNjxK5xx43 zcmBH?3!%y2z|U=I3$Y#-$`j=nC|yUHSy4$WS#*8Gd%Vingv$yemW+TQXQdJp~d+!VyJ2JG|xuIf$(FT6t(kHm*yv5Jls`@N;D5tN~1J!l}N4sJ~!h} zB5(Kay&54!me`k&Kbz86d}Ci8WU|+f$SR0W(^yPM<*p^Sq<&C#dvP%~(3g60R${8{ zyF9zwm5cQXu!;g?_e*_)Q<~wQse!n#f2`QFp)vMPag_u|NSx{c@ z)y(fLb*L!Ragi*+a4J(pV1}C(M0VS&y-Bxf6C60X*Gw32luXK8{M_*Gb}y*JJ*psQ z)Ix7jtEYhLky|-mwa1^qDCoC&`iY9_qASdbbT! 
zUS-?KUV5oEvEXuqX`c4$l%Um%kBQjXp2Tz{BsO!HcN8=5!#v{hU?vC4mU2*L%cQ$V!z}W~#~z1^Hi@bCZuLIE#vy`WY-0MI!mKEzLErV% zy~rfPtg86m_{t_!9mEK4ZAZ_bVjVTHeI#jQAfEvy&i~_s?@5v!t zus@eCXDZkxtfOgJw2EXdR#o*OH({kaD7m(Toh?}a5*uB~@w9Vyyk*GL$${Rc+Jo;M zNQ~L46UvHuB$H-d+x-0lGuO`hsh1_!q6(!>U2(?iHmW+jneI*+V|)CMq>4T}I`6@F$f7zovL+CF7O6V|zFWgHZ&eW~X*EW1}Y_O@YvxEsDWfH1OY$ifZ zoVw&Azvi6(I4!s`c4*7Tq``Eo@=WmgkoQGT72NkL3mu+b%b0{!6hV+{+R4N=cvp9y z{oSAO=62EF!?Q|vrtt$?PccxEYyb4rExAXUbIX|>g!J~aEAK|5#nu$${e4{TyNMaDHkk-18-6PG&Xw0HKy6=Tm4j(q*R4_9-Eyt8|QJED_>J(WP18hKklzJ9%31?i23SbJG)@;k^T| zhRchIGJdy9<9`wiF-EzL^5IWTeGT&Esc;?XT%Jw_Z);fFWSZ>^Z+v<1tx3y!Ai3JH z?B&G=dR@(PQ823$m!WG*NN#K91R-9 z?vlOrI^mxgvjx-;o)4vC9(qjWS+E-UbN9rTLy8+AB1Q(Y_k$8lz9?swONz!R(&p(* z>$K%sVE2JzUV32OxzZAE%yXi}>@2h8fb4nT(s+8U_-)07om`V88}oh-*x+};XffRv zS_Mhy509@A<-PnQ%Nri;X4RL&Li?__i#{S{AfkPhIfR)vW_sUq7I$%7#5cQpGzNHiSrtQWsPUfP=3IK$-a4^Wek)z^W#hOJ_gdg0 z^3%AzXf(jsh1QMl z%c}KKzrL(y*-=ahlC#F{FP;Q7qY_2{GG7nQ{9 z0&TOPTBg`DKAtWLq?_`s0Irrk9d_V9GbDC*@>>mv(6y%B;zA2U~t zpw0+EG)(|F$aP0>*Q$qevdA}&H&w2L{tPNX*w^wlW#L7uK5Q+>;juAGYUQ>CB~zBp zbA^S#2^lB8!D#+tC&2Dyw0iDhA$uIBFMR^-TL~VU>Gg z&MK@|Dd_nH&r?%3tNAv1T{C!kJIhWiJaNBvV{g^Ycgf~Vpuf&In^KnO4DfQa@Gu~9 z=72$7^*f~v3Vha>g2Ee(MfG#*6*m-wP2@RB&xOxYsullG!x|Ez2#SJUS+^?LX-ta@Vo z-cGmIJ{IXGBY>6(=nYD0qC(bZXd6Up)n%oqPcMF78ESMhTz*}nv|0A)cEVo&ohJbl zVBlDpy8qO=jQQ&I-X+ZB9d(-S*X);X(nfhPo-XiH^dyYw6HRDWWXgmXVm9v6zYko^ zD8PJAC)D!YkBfM6dS^pBrj2xIrvF^o{L}Rze`WV^f)6|y%s92%+4Z?$*6Es$D=O(m zyZ*HJI&NWWWFjc#Ch!_)A&$ejFJVN@yC8VT}iBadyI8bkZ0!pV9XD^ATrzq+w& z1$^se``;R3IXi<{?|EOS_0^hNzW%&0=jkl!lv0v@X-VJQw4TevUmR#-EM2nR<+7Sthb{hw-{xU1NZh+6+8$8jMVj zB;CUg#l)MB&3(x}Yie$Bg8hd5%_ouf9}FFlEa;@Pth9&~M0|e#`UwTwld4BBGg=Lq z-L=;e>UA$}3yE8o`7>uMh(!dD@AE;9(dLCNMMqCbHLK@do>&pLGj>ON98m*0m0lV} z=QnV$*$I82{Ps^PAHn2Mh2idQdy6vnj;FwB-(`vEDA3;7r+`1US~$mtrZj)H7@q#B z&juV-C@@U4c}jyjewyvZv#mB?OHNqx+$&sN5bM7SMkGo{Q1$hN(M7q+cJfYcrN!J( zLYy9W)fExW|E{`hr&L;lNu)+w85!=%032HCjJZqKy#?p_VjRr|7QgD935CSkJ5EeS zo-|%Gi-%lh%zh!ZG@J}*Y{US`yz{JOh%(N(&}WgbLIR^@_~#q6}{ z8(o$u%}ZFixx0IxR6~?KlvOytc_nBAC8O@qKBrrT?c@zGk^v&bd9!)#Gzh#rGQMUV zn+U$g@|h;Pbx<(6Kb|#LL!G7S=-sMpaIR@N{WBA!H#nABge@#TYf_zs(M%jfMjU)* zfEYeIl`70c)n`IaHE`6RGf$Y6st=T)L+2KyV3-1qy$?f&n}_pRwWdDvj%q=&B%R=` zq+~U^r~Flz{o5IKmhesgMLMM`Ix6#)87;3tu90d;WyrX#TT_93GFqfPUa_6)Dm@Ye z`nlh>5TpO)C`H&8x3mfhzX(T7PQ?yOOTiyeT8-dVvqY;SS+sYe9Z{Sj9}L2ZlWfVh zEFjKo^5kg+hjZTi3DuY;vPtmZR(q(M>(g&WnoURN=!G34`|6&vC=2s>jHNZpEAUKL zM5@w1e|V)uo0IZ%@aU7slVhGHr>&K8fe%DsuwYuQ09{VkTa*b>JlS3=Hwa!cMy?$F z9TgQ}kMJO6k=#fM%MwrZ&O#=mZ56Zr^Te2uZK3Rhq7v!(ZUt3pUqRMwW7V*I3rpJi zS84gCXJ0wt*(wG+x}r|NPqnjy>_qw_d5*U$45KAw$IS+42aAZ>eUG97b^>lfpM!hI zygd_bAb-$f&;t=E8VNJHJCT&Q1OeIv9bmYaf>y|kU2$+9#2r#jHWz#G9r%O@($g|R Jm1|rI|1U&_*Qo#i literal 0 HcmV?d00001 diff --git a/static/img/header/icon-longhorn.png b/static/img/header/icon-longhorn.png new file mode 100644 index 0000000000000000000000000000000000000000..4bf52dd540586c26326f8d913d14a601cfb19891 GIT binary patch literal 26769 zcmeEu1yfy16D<%DI1nUga0wcMOK^90cXtSSu;2uDmxH?mcXtVH!6CS7Ao$zl-uu1( z@ak10Z5t5P4ZKBm`UpC@3f-32|XXC@2^TC@AQD_&30B_zn3rfFB@x zaScZ(D7p_XU(n!uJ{Kq`b0G;~0cEi6aR!{H(QLwVrn_UD^+^4eu37(da`sJSfOWO6 z%mRkEK@Y`3fXtfA!gKY0n;inlLI7E3tRyk2AhDC6*&WzA$|J^lB+S>evO`+pBj~Y3qItaWw{75IavV3LjtUtV)PMBhS-Bftk z^Q3?KW6wA)Pmvt4Ov;=N-UsUCDa+(%DlRGdPFSrT*2X5bZEhm;TVEn!R7X>T&A@qa zKyp~B?eIJIjcbX+^I#nxV~luFPWEli`3L`{`%DgpuAed4)1F335l<4|nc11m)F{cE z6(ubdCCMbbwLE4U(O~(ZUY-Dw=+qx7A48t=o@K~p6k8i}W{k?|To(>Xf)49hj;(kc zM~v#UQc-Zjq)F;h3*=I;G!WuK+9$6!9=*)2xWArF<;oGZ{^Z+mE<%%N)6Azh7kExW zBS9r)B2Y5N`ukbGRFxG;Q=(%nXZ`95JaZn?k5Ozm?e1Sia4kWKElN*E 
z9F_Dgzzgtk;>6m^_kVn)AT5Vg4R4Q+c|7J=m86>GlKen=!OQ(fi?3japhg5Fy{(|Gf{#m1TdUj+onra} zx(v!HcpeUA_2viP-2l&MD|b)(I}B7P=)aHeGMau!_XzKt=KcEh$XM%UtQFn!R}-O8 zdt0rlhuU;R^zrDH+nBT`&tvp05{7wU2o`qgfQ&88?WWxJdA&TUR!5g!KQR0LjNVkrq8sD z@K>EsFij-Up+ebIVLQy9`5q2XI(~;`U}s$O*Jq}y7@%FeF2B)zZjs}58PTiIc`GLM zQ_2tK-+{#P>aagOavI*h!ft&${q8+mXLsiwQ~oovoX}V%`R&O*$7MbJ@}Ig5o;GZw z1q!Vlp|+O|1fFO-n9>ohh!!qFw@q)o%k!O;G{|Uw95bEtuk*$`yM|TQ(G0zgTbh^! z`PYdy)dp-cpnuoACFvaD+NsV-q2WD`l*{m*HQ}2#XTi}y9x7r8ffelYzPm1eeq5J3 zeQ=fB5X9MJUYvVShmw2!kFuaq!O+mFEkY8!zmi(-L+#qGjnuq`v=uQ1F-UC+J|awQ zxOY74x!Ps5lG$Z(l1MVW#`^uQx%m8muiW(I#!LP7^}6u+EYb~E;+3{mdUg*C^DoBp zw{k3xNB(P#7S`U+xA67}3wr{N|9BcdaTqK+L%prpVea$AW0ana-7;9Q9tja0i~RG~ zb-ad1f;~R2Af3+|EIcJ|{*}@VZK9feVe;+|E8m|#T;l0(ZSoTE&?u&J(IfcWPSv~* z4{$kOL1H74p>O>DMhzc+19biIkeZtP;+v-}CA;#n?>HKEcI=l&&7t4Mnr|2C*8U`v zTD{LP)`9(k{%@!W7V;s$$ODa0*x1up1oZ#Dw$>p;M|OJbn$5i5{Nwv{S?O7$d)0{Y zZ^-EV&^f|c$E8Vh-$~DRdG{N(x8Rsj1AaWs?y2|2Tbhv^mHbx(KlLT)c45^?~hY&pLG}1r{#D$zLdA= z)eIP$d!O#Rdj3w*lc0#z434e))#v=L-DOF`lwpc9?@4fZMzHjr)G4aRF@Q$jjC{RM zd4802qEn0{Pp`}Qzpq=vl;MbTDOU#R(^g^OTA@*Rw{6r%MV%ZAsm|A-_ObpuEfmZE z{280!6e@OQqB>%t5<=W6C9>{SL)z2D*L1X2tn|h=|7wMQy$@Dhs4|gQ=1W)cUb=Gf zrypVwu9FsBmtDcE0=}Q9{x>6WG#&IDY8kn>l4ojN-_QwPF@xKKns=M&dU1tRv@Nd= z`nvo7V$de&@05%dVWsor(yerI$3mZ1Z{H>4Nm@NK{`kk|{(h$<@0ee=l2#b^ zsPe!5pQr=ymG#6CUyiT#j2}ZP(gweDl{^6PO#a{}*n3_zV8Ns35DkxyTAa zzet=p?zkD67l<_;hAo2opN7CNWsKrht-dT`qhUj>y+#h}?S)U1d~7Rz8SL+c|F`M_ zaVM^S-5ZO@`3yjf!dT&Q4?9n#aly1hK?L%Y|JhY10`wHFESXYIBlLe^VUPlP4mXdZKIWSJ-Xw6S#@q9cWQ1hysHu?r=l`aHf@gr9!WW$) zRLhW0o~X}gZZB-1%Ml6{K>Sa1C|C!6V(OUdMjsaOB!Y1{%d>m)poyWeXJo=>CnqP_P%G zKAX~}JiNNYDdhnJWOgo{{{J4lP_T-yv5NMl^`Z5MVv^K5u&@7<=I^LFaWI>?(zwj6 z)x{hF_xu0*H&ysb3V~*|w{ZXYAM^sM4?|1@^#tnweglsl{-1K7{{Nr<-&4!PZ~^)2 z#b{$DYKey!E7J1sVL8HQFoFv!q}U^`9PE6bgJO?@*H8@dC9j6fL0U^P%rt_0ho31%I-)t+ZVU?)Ahk=;hI}(QDeHQ==knYu zO@L&mLmHDOAZ_?s(8MMeUCkFrF@R*g5;cUrZ@;ZPc5lFRd8Y{9(P@UrrFNc(4yYSa zq&_){)GN}rNGV&Ykm3f%2kZ*wljvm`I_O(ZD?k6CNQHCxL`%0XK6nVj{&&3r!D36BH_RWM{A{O`zsLkH@c7*{GK38Wy@|cZ<%d(mmRjLte=fe_9>-Gk` zQT!;J7c_HrtEO{3Jr?G=rq!hZYm^86%QMhE58ZF|kO?NpBJfkK@*m}nhSj8 ze=`0`HkhkT1*Rmdz%lnt^3UxTZQt)ROH3&dL(Af$`PA>WPER@xfE@fDa$eCj&2l87 z2F6DSM3TD3GW#3=LgF9{?z?AQ*C80XQsoX4|$p2%%j zzU(rmGE}&TQihL9P-v*Fq?(OM+M>;To{zF^I;;KW$of9YdrXkbviU}=R&=`u0Zak!cS;qB&qwW2=OlBLmUQU1*jQ6{WKQ|5Dzjf9g_m;|L z>lv}6{!Pyo$U9Fp-FKV!R_yiidSAOTh?=)Azkji{YUzn8m;9EQ^Bb5Zc$dGjlI|GWTNL z+NFckn4jF?j~hrX6z9Slpy`E6aOt}3=218ys zTGeK8w(15z3htDU7X|-GVZ{Myxb_L|nB9|3q)uhsv`1USERgTtjGIWFrNYxoC+37r zY1lRX*#Ro;>tUnd^6!|~ZI`TkRU9+}8>|{ce9Nz4h=J|N9}D8{7San*C59&si5nquH=!OpW!=F0*4zx%TYf5sKJmhphBvqW&w z7P?cNW-gm+;#IW^ogD}gV{FK1`%S`GeMfzZTvyj%{h6RcM^X3S+r07d4|{UJaUDQ; zhEjVO5~pK&Vf?vt`DOG+W5tzZ$i1N|vR|OkFYq6c{@cpscOSHt9B5(XoSN zaSah&>WejNJ?YkST$BIj(3X|;SxdMYd%*1A5-|{QTzq6{kO|>Nfv1uFIXnAhHS!xG zdCH<@2a8OhGq(+0RX{vyN}DQ0Bl1gLs4MNW7IRhccA85mIR9bxX)=8$x9M0RCdsiE zlcJWV%uQ_|F2_qp>bPEFcc;LM6h(5exr|(uc!&p{dvI@#8xU#dZdaXA6tX{-k2g5u>Zn=`TF$?9y zY&1pevO^AqKN%vRBY;U=lLmle8hz5d_PAUwztX5ur1rxdP-8^%{2gO*4T0V<1`a-c_6rKv0Dnb%e2i`sb&aSJ; z#`UjltuaTu(JxQm_tx>vZnz!IHj-aBTbls-`Z0JQmT6uoh zy_LaJad)*yEcw#;Wo5_`I1-&TDI~eucyx#M2jYm;v913CRFs54DoQzkpbRqknnZ<= z@SjR||7Sdb_3ih% z>f#+LCyA$HA1q+X!Dy&Q(_GiQXsy5$ws9Nofpcw2!*@!_6HmVb2mQ!#)9KMB&6f*H zCQ*jb>>*{yrwxv59$lZAiLB=HsAL}#`?)~dI zqQ({YM`cTk3k*lJHDvnKwI_afTXc2y{XLQeebz#*Z1J~U^NQrbwz(V?>ARfo!lK)L zIX9S7SzpV6Ge~hG?u|dO!Ii17eNJ=~u48x+xRMa&K{n5I6Sm{SyrVbCa?gtyyy;EG z@T=3E5KY*^yJ5l9u9^R-pWFAII$McZgonl;;=Mi2i_%+tpgPcRSHuCCnY|s&ENrdv z>Yxp1>GK8x%Jb)n@{0x3T8a{<51hYf*=(DWzQe8y+rxetm?!g(u-UG8mA4pR+f5GH 
z8B*(U@bTy=q&OmI7bypKTvdsqfOU`hN#*BhW>JAq2jC&q{61@eSF+LBRP_( z)8X6=A-H<64xa}7oUm}S3)tgQCPQ}qC`zjvC@!e;=riDa7&MCwgbw)lV)0Q>nQ^8} zqPHvR;0QgfFgMxUrt*+oc7?FY?tlgbk)3X~n1;7W^Qz|tU!6>41!vAfzP{wkovA=b zVsHlQl{aeW{T(DYQ{FA;AlUIFI;? z$VyUno(j5C%1*a1l6^!>omNjahn*w;)t`KpwPj;8{oxk^`blov&?Jfm=>ZM=BXC@c ze}n2kyv;}+;|X-39knRWDGWhn>koj_@(TbR-&D}o`XPqIH}LXf{Os2>OwOTUAhg0$-09W~p-p03*Z%Fb{NFu7;d9 z7*lmDNtl!As^kvH==I+wx`p!K#Z)A}?ij3AGVl%6>t#ELVer z3@Y=DrtqCw2gs5zo#0L6_&fMp7)+;ig}Nfq+IY}|XLD*jIQgXYf}Z|d|>4kV_=5;Hq6L^BSJLU5OfF#+qgoOPRSavmS?lGRVuABe-D zz`$D_@zdS~Z64g6=%jW7o1#d@>ehCTeRsbfp*$OGoAAUtjQd)FxhG zMgY*sPgLlya2GAnt09PZCF1>`r?uxky06o0VlNvBHpe1grs1Sn#-FP%*-sCbSm%bQ z3x?9tRehtXlfmV3W(&u}e2F*7P#v3pbOuhjzuug4MXv^SziC$_QejDA4e7vbTapv)8wZbrr zt&69M9z*$?Tlfl+MWc}pJf7v;xXq|rhrgHlq$}pJ64N}k({@Goz_+gy__IZ+AqDw# z8u$R+j;VVhU?A%q(Vcf%@icEw4yVpz$o9JTpJ|{!)7Q~dpJ{3S)*yWq;Vg|D-q7M9 zR{tYhwjLZr!8XapjEH2(N*y0BF5wl70vooc=XiwzivTofP$RJ=9wmtGh zj?ygMKOU6A5fC7KE7h`$nrivJ)YNnyJeH*={rij$u`qMLK7OZCF?D3ni1Z0X8pxsW zGV5dZTaDt7uI5vm;DoYuIxb$X$Q|#XAY$4K5`EgVnT80cEe-m%PY*=BJx<;uTEj;Q zD!WypAAdPleN5#2k4kx ze?Btz7+XC$qNz^!fuRv*w$Ia`+{mM=H{ynz+az7n-hy~Y2gqY3rTJ|}E-LSr^fgAJ z8{4Yp9n63#EKXQ8;oWjy9o-oHdu^T`EsFbrg_!OLe z)&kJ@aKDW&E-9^7EA0(VWyy=aImsGEyG<8R8Zs$}p9 zmY5uhyl=qs>lR8|CMhVQ!#n;4*Ja5XSw*a8W%{39oG!hSQ4an%aU&n+BH&Z%@F#)A zvBOQ~c1y0b8&+$p6{Tt2`B90xIF5$=lBjZvjn6)qQLwM`ykTs%T2$m7vgmxg2|dO9~6 zk!l*cZxp+6sru1?n9JmAbWVH(+DML`;iDjLTX<81WFGd zSdn4d%SAOSdy0FF^yC^kMzK$a><`&UO!0A11E&! z=5m=IRwv#(O^g>QFNbp0p__Bqz6P*Neh6$L4+&`TT}>-Chq!1BqwcguM@RF* zf|1=9hfm^aZslIdzXb23`x2wMJ(PZy=QNe3?=p~+sToO83nrf#zrT1*^NP8~1wQV` zQ(!-^w_;&7If2ViigDJHSi2j7c{3R!J0G)gEOdWAmSd7t@5Xt~!HxH;5mV{o0774x z`QCWEivUog45C1z5q*(VPs~b2mDLtq+ZU12m@F&K8ypy6lmR-yF18lj>*jXsg1V@g z#l2`V2es@xEO2wbn!@S(l1PIu5N~%r3OO0GL}Z`fhkxkf^V3y-K)15pW|sC| z9hJu-RY316tyK({EDduglef2lbJJu~IvO7P*_{*Q^|7>LOc9chKp+qbhX6$!O#(wL z2T$9T$4x?`EBz#q#k!Y&M=RqobU7*0Rg8f+HDO}saIK&%-q9s3)%@9{2)c;E28P8o zh9Sj&fXv2$sonP#Q*M{rW$1u#KP_g5MKN|d8G+{QvMdDCndLn-^7!lW6UgKk0O%a| z+`Esb5@uLZ*h#r1%YG}+5$K-*9UlD-idF649*2sqvJmqlViX08pyjF9lbBnI&5CRQ=HcI&MO&m ztF5jwO%|3EZ`L4&_j3V?RTuI{Vx@3~jct5Qwx48sjhGc$SKV952|YAK)b+J_p+Pw! 
z8w>nREoR0vX5R@Gy{|rp;8(+i`(vzYmxtuxW`=#|_|KJ5SRfoZ6_?t;m*RMBTd6~%*ruA3y$usVrt0_WNY$ThBL?mNE2l=Y&5Q8A% zp9TUdDFdom=o4q-$JNWonc2*?#Wy4C%`6*Oh>A?)g3EiQ3XdG5;Q?79{EXVNIi$oK z)tCVkf;#jaMeAs&-?w4YLeE*_Ws>~&>K2dMZUP;>HE7pxVWw2~^aD6CW+v!;YEJ>g~iGtDQ~@XP&T-Dw*2yAgrq|)D}Ig+<2P!;E&E~g zsja=zqEg%{^{`iG0b70AZXQ&l8mug?3-Q$|$pSQD$?|3v7mPo`^JH;?7>1VSKk=ay zVBdxD(3JGu76lqAePn-$5l8sEKVAFFJPPRc(n^yvt`@Mdv7AOAn0I+!%g?%kZY}sA zn(MHMa^+GsliS=RN1*IiZ0K8{l#lfZ?!)+)xe-fCknMb&VN8dj##SdyJEkDD`W0;O z?&L!RY8FRnOnDi*{dE`^|$>n2sICV5DH9HEdb9o9j@N3U)= zCw3D;p6qa=G9?qF>?Nfj%MC(pV0uBS6fJ&VeGn%RVI)Mar+f}2X;2Rnv<|a1_30dj ztv=A$An)XJsH=rlK5zs~#Hqw0$2Gte+Ih+#$Sk@cpBl5a5o6jxv2?~DDBXofpdP1E z6jI7gN0rb|LfjgW$95P}pd;gwx=!ay(R?`aDQvf}S zgXXuaf}%n^!-ifB&@J#AvuB**Kv5va!Sx#b1f6Ohc~kZf<30IF1p7zpp;?*L84I>a zB>@LZCD-~n2bFGqKOdt}VN%$&y3eL$01rK5&R`T2T!Dke?|`%3&ia};W$6T!rGBX+ zHrJl|M4qux8Ms5vw40>Gazb7MwXZw^VF)QuqbiA~ZI8^Dl3ZuX3f%$TZ4L*HocWWY z{VuW+5A*@-iYGGiPAD?6zarHAK6+Mx zQCHF6_jY>%IA;oeLfa&tW2E0PDK<1G00f8fy5Mp4s@RV~VTxf~Pfv}40n*EEyDf|L zA)aAK+%P|A{PnNKgsKYP>79ZHSu`5w_7DrMN6EqjqE)2cklI9^k7s4Z{Xz6s0iQDS zq{8FsQzcPCqR*#3mX=o3n9%q62SV0jeLx62D zn8is?p+Wfp_^&9mZ#p`0R~H0R_Uv=xMUy|C06kHucemDvuk8tBqdeZldug!isb;s} z5gEBNLg9K~mxpE~=k@jstYV9iDsrWio8Vi{i(vC4Nt$vaq$RD)Y^P&5g0w+&MKxCA za%$HEdC424`A`~u;?J<=71h(}Cb>W6o7}RbAvshA-Am+D#0rP5a`Qt0e5ued)G#b0 zcl#($_E<|ZbK8A;eqCH_D!ZgHg1&Vvd*jD7ajTlTxED=@G#1>`D8-oLk{?GA9{mN4 zR23npt0X5r-Ls}VVL^I&|YY1z_v(+(Hn$)lubU<%y4xo(Ky-&Uk#De zH}Tt+V8g8Ge#}e8Zz0%j-`*c zCaYvHzB}CQWw$d^KTD1(Iw^FZYB1Oz^&Q9(H}#g`6QOrQcfP41DP}$ps^wWy=_o(u z2t{4A%c2v}S%wb9A7nN6(le%_{`91lqSnsNTj zcd9Wo6;hB+w3SR5@GJ`ybbZw2)uTU$UMZf`qNuUWUa_8-dnXzSB5M&O*m8v7&uT33%&mL8PMVe;?8$Zd?4 zkEsL4!YJObotS$hCK@)IQ}^qLLZS_|K$Mk2(9nHo>l`2J8_gAeonwWfQ+(pZ&J8oY z>ubd87Oq-zbMg%A429Bv*9l3uzsOe)$?KsPHAgIJF9{f zI?Du68RW>_O%>J4@-lZpRU@I;bV)68a`X8~YiFzDn2W^{5)UI8y+^i1f|{mthI|L2 z+~bzoz&AjF0ie&hSFa4Hn~ifmpkTm>e(4UFrF?e4Tb=i)|5X-&?B9tCUj`ret~s91 zmEJ>jN0O?pJ&v0O+4W|kJ{vgL><=CXtq0ptLp04TW(MbR!lW4`#jzNYFl7`>rYr3dB{^8N%Oxyjq>ReEp!)+jlrFzIRHQ41ly}w?E z#pz_lZy_q?EkLXTLq^3u7|*wI<>qrL=cd}vU6jg z2fhsL>}`{lMdV_iV**+jX&@*Qfs==GVFqXsBM$AMQb^+8?p+3fS_f+!=7VU2U${>| z4En4i?Wml)TBJtyZktxOJUL~>Nbd2;*Y)B%uZSjL#y@nvXaZ;){9u$DZTaA}uj`{*``uLE7Ceod7MrC#f$s6Q(!?S=Fxe%z#$FC;npPXnP zi>19act!Cc{}3VV1!1_c%YR5L3J#5>BPo4UO=gfLAPAqi{Nt+zkgAHk2j6J_k27gS z&hI#h$wH~q?-&;wmJ=xPabdAL_Wb1Z>3~ym)<)@qpMMR2_5if>Ld<8E7!qbKJ4M4- zym-EqKr0;#n<4(ODe~=U^Ce!xgW6LN8RYF>T27WSj1y^N>Q2`9gm$g}E?GWyA0j#- z6gMhi+cpH{>3H6|O>zGf%;P^Kp+Q<-O7G|OC39qVKH(uIS9o;4&i&f`Je3t2y*~>L z7SDe;nh0XSxc}%v5zvJr?>G-PTsh%!mNHi9V-SM|Ntaj|T;b#F^UV;^>iPlUV&0Xn z?-Kyt0qQeOf*NJj0*_@k75eQGV}PY~nclEqfQ}CcR-ZO+zHWbZ>b9hO1Nu8V@U}nH zDB}{iBAcTyZkI5A7u0EVku#Yk6A~A5%y*YW|CoJW^@y!s`LZv-!%qR4Sau*8Wo($# zu1VO%kH<#yVc_TrUiW2U-Lmc2;L&S-?7}-WM2^ZUiktnf=3Vb|8-q zrc`#qt?XOjs&?Dbf3BGH_i6!(KCXf0H?H&nxo;w5g@(9hx2P0`>JY1~umdE5q`OJU z+c9!~RDmSSW~6|gmU(S54AAeKyt664$91jeQPNdoRkm!9`I(+TS1y}2IFYM>^pBk& z!}lf{S^`Lt%h+b=b)*a{^64p;(UttOEx-Z)fD56|Utvqk-^&r^6IBvR=MOI%r$VJT z+;o|0QO~n4k{!;Cl@=cY17!VQu}yhYhR({!YHV{?YBOp`v10i^*~Dt|TTEh5*nH)k zAvxkTf5J->2cU_DQqDwAFvwg*xI|NKv0MZ@y>MCNP~>XwR;D9>9#WZV+-G5-=_1R% z4}!Aqf@7Q{IMf{TLZOuI|h5unPTk*%=dR7`CH@!cU+_Y4BZAMCiAV zqta$}KY@*hc(sW=(tIWwnARCHU`Xyj>$ zJJ_K~SV^^H%8F8_pM=@XFu6%?iMUZIZX3F&wW2S|?6djMX>tPl5MR&JCCsb+o~|BO z2N1RZKkFM7`79fiu$nb>V}uxW4Q`X|#=j94mYAPZm$;u4kCG)|cn9Oq$=`FK3)`sBMQhsRd`P?+h>~)MHew0+v zE)@lBPx2pZv|Ox9__TGkM4%fYzv8*6R=%w@ZEZva5Wxc2!IA9WF_gycKUaPcWLi4i zgxeAnBu4`vq87D&U4V;tETyqqAJ;Z$GL{l{$=SqmG>HwdJ5OJ_E-0vg?G^1QyrleB zOf1?@y3&0+uMvs}KGkWhVE&%9|r^-vo1=`ut(jCcc9o%6v?0s`n3q|?x; 
zs}dE`0O0(ZgXq;B0lx%IMJdyExW6JR+dxo6uRd|Ad*&&&)-W62ddnlkLj_3q;?iZ9 z5`}4yRCNd5%wNtKOFpq3W_2zGmD73*^-T43)yKQ$XV$Z{P>&;Qum0UT-{p5%+$6Y* zp@&(m#O3(d-GtFGJ+E3;czU>~LNpj{HdGwji{dE*gy9VZ9n8|Sc|qM|NwwP-Wl85{ z6Si6Kv^Q_D)ifzTO60LX=FX)b>k}V=vL^bMhG4BQ?0BfQb+C&Mzb|?gIrghQFARm| z>w-z4S6tt2enVJgxQzU!w?JRUS*A|jf8Zv0ZM)^nM$~Pbu`b+t90*C+_Af9W|JrTL zc%PAmhXT-ac;#0#=XE5sl8>F}T*n7Er z_d2^Ju&75{KJ#fTEjUI0QVaT7BPb<|*knx#0eZN^L@>2mhQ@O61HR#|NH#>v(_VBT zF<6{GD?(icX96Duq=wO}n|YG_72lzd61^JEB1ffQVi3$`^+loE%HYS>Z+Y<3SiQHS ztNX$J`7=?`(iAwWB)@(@w)kd$JQvehr9$xV!y?eh1K4KFMoMGFzUM9|`Id!zpCbrx zR`(AGmvK@MmsHy-AN&FP7L*ua=f`{vD=P@q4cqg<2v${3Fn*Oo#Vn?dOkLB$8Cxx0 zMqb`VZ)86$)85vCQ(3`^2*B6KY&sF}*1P3h05Wpl1@IHlz6-F09_?Kn!m3Z{fRc86XT`L`HnzRzz z(qjvXamnMIexbQkgoff?CaEtpXJuss^cAqawea%X!4|xzbU@pYeW2xA0}|kv)pnka z6Q8w!qP&zsPMq4?OsVTkd;x-`rQOO4^Ky~|wJ?e6&(?I)cPQ(U-EH?((e!PTLl(yp zy&|uh={(TgR5Yzx_NS%b@MTTT&WjLWF+_ZjOG+U}M5kHf6ywpQD?m5!5Io;m0kE*U zs)+H0`Xz90#a?lfgyP~>?|%viv6l7?YE6bh#Eewicw2u&QU6oY6sXYR`s&f5jFfsd z3UtyS6-&!bMYY3h7IZWhJvDxeurTB3#B#xmOnB9!GPflI&EM*gj*mG;R+y`4_Yi*y zU4ialOdfK5mT`LNieTY697_s^PSFmBdYZ{ zX8%>_K72rkp>%n4WCgxHxYaE1l^S>#iuShA3h0WdP0pDtwtL1W`Xg7H<(HLsijp`@ zjKYvHaBY{9$qiCy1}zLPITi*j6Kv}E7oMXbor0xa2gq~mfHXJ;E1+6JDf~Mx)C-<# z8rWM@GTZmO#JFEoj22H@DDd<1!<16eIbvx=sM2T*hnmU6V?;9jl$}tVXfMb(9*z2P{2sHV|Mla#*nHcmUCB6*~~uf z;%{bANW8zt$=8Rwm1q0aOvK_Sb1MUM@L8ej)6%it<8~5Q!lB8Yd(hz%CjH3S&^i@y z76#1wW63I(yQYiwUFADU-YkDV_(Q5HWrA&a&_o|9#d~Is4-5{K&-mBD=D)$@yKgxO z$+TeDTlwn?1bQ4~e7wsK-ntKQ@y^B`nr*rblULR@=B4cs9d_31Y)3+0Jp=;eUItSB zb#@qT4*9te+*Lmi193DW4D*LLnXnjhMxr$zfK0(ZKmY{;bQCXP!7|>A)$-Bbfdt z+x%e8`Y)BwBnG2QS&G<4O2o=l-y~>2IGjVAPirM(n z&!Z9#(R29~=2Dbwj%XuwV{|`$qlhO`-=#i3VL^I>;`Pe+H!AnVDsp1@wbO-D2-mKj=iFSAttTG?jK5r4|@ug`pCOjr??SeMN-ZU6-$$tI0S$shb zxVGL>mWdmA^cH*Ebt@am2+Dlg72=BblK1{3C5lc7^B!5b01>eOlisxIn%5;|hj&(D zfTW6)P3EWuho-6yZ`Gtyl7+<&@sLA&-g&1-4024oQ=HZ7h$9M)s!Zv@9%8q~C@N)|JW8M@e9?FmlTVQe^Z=A=%3HTQdiJ0ZD7TKmz>I%^*H zg=xDKUUjwH)BL0&rP)G$EGi*a-yL3L4S4RLB4<0_Z?v5CdQ65@z_(Ntf_&7GAaM~Zb) ztPw+Wb6tN-_eyE~FKXCZ!d!jG{6&-3RE)D*Oi#B=GkjPYgdUQ9^j4Rd?3xrKDM?nX zZtl9`wg9Gu0XT^PHbwo}Io$sysHA{tCF}ZjODlpMn89NmIjnF(NuN2GUj~nk@_H_4 zd#XUX;lmM1{J6}0=@(Jif}tGVckuA{-I!hMdqQ79_Nco*ekyMY8n4E#B^saZ*Dnq$ zD20&s3Y1qr5aApT^Gvi*rO zYxH#j@=5(XW83b-!Rwj}m5#cCWt)#|_a@tn+0Nv{oHUILIGCJndhPwp5SllDx*3mx zb&aEZLdWfgbbsHLFDP4sl$za=v<;=fVzMDC2OAiFZpq`?3diBPMSu(7qx4B!6drP- zqr~#>0lz@#ku^7gz(?2Uh)}NB4oz=5Ui?hGc3&W4lIo3%&A3R~8|{~u<;?u{?GJ^i zC3g4Zl+$I)R(3I%41OyRC~b5rH?v<0`S=M^kLA_rFSbMmaV{RBjBL+)a1+Mi+c=B; zAjcw!g4VyBVA~jrrq>^cOm}{(x|{@YnL68#jQVJJyDxr9l5qn%x*v}iw4r2%wX zW<2am0<-R%0GKb>BZnnH!X~4SIe%iYKsmz3=(%=y05`T;z^75i8m6_EktkTbZBq#; z=Ne#wrPh4rbdPnDl^IWQp`D9ODZ%=}LlM&?A9KHFPH89dQjxJULVI#CF!8VF5uztx zq#z|uHBkU8#4rwd1v4T8gXr;YLcELQABkMYxwqD0Y?(4rej=^$xxNfT6s9ve>o2Na z_A&qJ0-UX6^%!vgeJf(YOYzCFP$|f$7QKKEVipA&Xi_hw>EE{(FmT=aA?s%K5T^@A z|1Jk8J3Ed17@=SeNjqC8Ju55>cqrcXscNy74c!|T$EIuHqLht@zjlZ#H8aK&)J5!7 zH#c2{=da&I;vv+gg8gIeK7J-_MTW|TDVa8HW7bsT&?|2o?%-F`ikVsk@XmDHJia0m z9-ZJUKu(D_@tx44fcvuVd>9r`d6&yzVJss;T@ObDi_V1~g0&&nd=!50EOsM<*`!wI zPVI>}XjQNoRLkH&ZABKEkDI_ha2QQLR}mp(e@P~iC;_9Ggod#q<{#=%#vrP+K}y0p zA}4!`yjyx60@lu<9TBS40rTqF3|>2O_%jf!oApGdTgQn$2Fe0{J3$+Ky&KH0n0n-D zGd9_Caq-@x=7U}Fx`kT3vW(gFR{kc3``%S_X+l7VDBUjFcF_{}Ex_UhDk2->Mu8C6 zy?BG6!wZ()%`XNW%yk)~a2M|$hQH8?a@KwimBqwzvz9?_W|MGRi=|~UxRSwu>25a` zVMM(nL4mQ9l>u@vMJVfhZq4{x)t{Chmo?kyE-^E6L?f*jv9u9R2G3GF`K((78PQF8&okY?f9U<_hLYo5(R%nA;WgX{82{x$G=H-GTc6G#kbUPb zL9nh!$Xx1Y?G$Mm228D-?K-Dqy+|I4@SY}(Bju|exVy0|a+(L!`2C+^HCrN3dKhjC zugYT>p(9bU^6?Dommi&X{091#7&BicefA3LIEt 
z#?OBW%-M7FHfdkGYH$$jm{lel#y60=+Nc$y<(v+_WUiEPZrtCfT<3b@`1N}L_hSo3 z#iR=K3FW5HomcKrbdvbl^KevaGOun88BuxjQ<$4Xg-tmsUO7Q7lrz$tQZ3NVuo)|4 z_Jt>&U;(V1LJ@iwK%P~Q!BG>ZJGN?pSr=zA%hs@J4!2uunq_^2(HKzQUg>oQVi9*}WUXXW9Xt$m*&_R&2iBD~$?1$7erlEX9!vX?I^H3PQ{K9;S`HLLxf_OALX z%BWitBZ3GbrG!YQBi%LX&>aGT(g@NW(gGsgEgee35TY~^Lr8-RAR!@0N~hd2*7x0C z?q6`%njct;dEYs)&pvza^E}V#@#mR;!v2ze1Zx9AEU>^4MKX|8PcTCq)6Bi1R{%C4 z(^S^)B&isj4t?RFX}y6F@7B4yn|cEWklY%bpHmM`7;*?xWx1>!_OIGJw2JxhP?IXx zAO$SGSdW6YYKB{%ucdIM!Kpu5YAK`^J8b|8$z~&*ti0+-ybuz+zTHXY)DLWEZUihT^IkN zV~RKVI>7{WkZez_@Fj8i9JMC*(~9N|Y_-Ix&sT5=iwMaIrYHO>+$@w9rtjglg?mm+ zUS+XAX=>)ZZ&)kNOqqUHtv-J>E15LOli$JFRHMwX?N#4(Zw+#=X`7*_8b4uR_BYNu zsR)9@1w1Mt)q8Dla~?(nf8h>ZXRYpOa%H<%5@s9{5gIM2O1XoxPR)hJ##gvv9d(wZ zlYzPlDW*iCik2R?7qb3FVda1nZGxJL(SlB-nVrqLDP8#G6;}#o8 z@!vX0FbAgWA%H<+;0~EE;SpRaHh68o%HA+uUfMxMFcwfP$`W0w@sh2k89uPWOO}#r zWu-R%M9L(OS&WMP+Yhu}%73-M3~LWtsFrl(`O9f&4t@3$;j&R9pGQ3yy?9;hRF)@F zOI_!wfXN^A@>|UW`yE8W&^v3uF7L*9ILao5xx;7yFu_6MeNh8_mG6W+t70$`11yy~ z5+pj`3kKH>}5K7mByHu$;NjQ9RE8DL=a(#wEL=mH7EmdbhEem77TEaJksw z+!dvluSyfuXYqBB8pc`S<)yC=(Z{;9;-J5)KibAwON#?`>niH|wsnQhXw&5C z=qzKsAkSl+b=5EDfYh5fUu357We;h(tM1(0_d^yPqpX76Q!eU884j4E1Zil{Q*1Ke zG=ksC%5T)NSt%{SmwA(z^Cto9JjZdtC&KO825mXzG!s1)g?BF{FsqVmsSbLQxe5Q{ z_Z9pr>BXcqna*|k6(y4b8OZGvVLV-&;;G?k15Mj$&d)K~>9660=&pLByUP3V7A@_M z6k$erd30GC-eJUE(}!Mq6iP}ey+YN2aapBp0XdFgT?>{274=Md!iMmrmmmjWM~Cj0 zh;#(~^A?>xjh3Ifrp(HEC>n!$o;Gumqs4GyQR=Ppjs!EB6gS^5pYRor0ngT(kRN)8 z-UaehFlJxxrOXq8Whk3p@a8cY{X&JtqX2%VY-^b=<)sDDqP(q5t$lLXe$0Xp>qkM;(- zmPB(o*bq}(-pc8Y?+SdO?+tisaXCAHEkP;%k!0f)=k>9+745rV(_MVQlRerRGTg1X-=SMN~rGOt!4o|{8O_1qwgn@YFa`X>e2fXdX z!S~O6Baos_)EfjcJPuC@S*V?CErnP&EutB9$EV=Eo|cfNP;0Y2=oDCX2&J3*_)0LT z!qouYs(4j}X?#p5u@$=@Y1!TJM34te87aPnFovBqfJPa zSj5-F5&Oxm=AhFP>J94Y0aV=>y*ZtlrWxh7WNtEqQ@#MUuM}$j6aG}KiyY9hSU^Sd zg##uQ6>SSmT!o|sfx9{I69bR67W4fcYCB?++zq3Yx0AGGM?eJp^v2(vurxlTn|=L6 ziozc)TQClCG1K#Pdlp<(R)6`>4r6w3X8BN)S~~1mWkaoF{J{H1c1A{Rs~ZaJy#=53 zf~U6@uy37{%!sTbW9stqvl{hY-d%@1kYymzy|h zF`9gu*H*$`Mlz+$=Q9D{B;aNQP5ENV*nuN zX0j1eki}Z(&c4MVGEf1Ct^{2Q+|=!YO7STLs8+x8p) zQlmN|FeDr`+bUNJ&o|~uYfBt9wi9`;c%=KmJaUUHp?5mI`MGV+w?S12Dzq6ykiaU& zJ1KH-h-8;|@ScBT$L95^e=D@deBJ;en|2msLbms{yM{-3Pnv=^BSL%3z#%)H;%PBD zq5@ITdu9+7S$%cSh=vHEzI|=Wu>_}QkC24wZ&-J&24_};5JVZAV=L9WdnzoUdy7}C zi*K_vk@D_{JC|3`kZjexv9*0ZF8y8?&MyvDlQ77bV>PDTlaUZ1e(^5KSCd__@@wns z@rMSWSjB)>IYwI11xJRRq!U9)-Ll8O*Wf#sYir)sIA2H`uZqysqye-=YY zvBSo6UO73-6^{0~m8Z&RzX#!^0o4QQhxfp4o@G!;e-frF546?THpGVMDk*WWae+8M zpxm-f4l~i*$Fu6H_|I6Z@%Ie-dG!)x87m@luR7%>2?JA#Wf*i1=fP!M;c9NFE|;fh zl;Z3jA2W4oCpBXAgW;fKVIt-tbpcHM?(C0edjOA*V-}~XyoDM!brNZ~hHkb7y4h01 zKvwNivwDO7Brng=r<8Pe^J^uUwukVt#+v zcP>ehXXA32lygLB7*okAWKYx3VKC@)lUj;NNGc3Atcs&5GFl4i%H#)JlF1$^7^u1+ z>o4tp--H|2R|!|5jTW&J1hrX)%cpLb!4r^)PHQVhZIY?}l_#5^?~5728LUE>5z-v{ zVm@3Axl{bnS{aAtu$jn?O|#?@G~a0F)_fsk4HzW zZcA9Ga6`lOm*T*DNUScP?i0^pe<8oWTU#b`aF!l@3;rgrEcoDYEjUd7x-D@PhX^SK zVxK-F4R1SdiNaCv zP~7E6EGU<#?9^N3wnUWmRE|H__5RwGXfa@A$d7~;K|q}GwU}oYgZ9vB)vIXI_NE^? 
zio5N0o6Wkp0q@{MaXC~xAatRBkeelEOf=W7hwbUvPvu^As>uC3Q2(mLXQ$iti-l*$44Sa3rVx*+^r$|(e_#?^lTG?uddrm zm7-%-2i(h>p5JeSV^aZaIU^EUDqABx`r5pTjaTEnAZQ6Wvk4?8=fbR!hN|)0{Wj9< zyV8TTiIb`fLB6zO&=Z*`K-(CK!Ou?TU?S%!kJlPTgQ5DgS45vcd3^=|apx<)91Qs`q~BSBW8)eU~Q_K)vR+^(3}ZJhOrRVjdG z6>xkgH02;bt&^D6tG6ezK4;C^6=w9iVM=JUpw`2jHMRz<S3r*F4s(kzmsmjAfuzsI@_>5LbFT|i%e%D%sFe}ZV!}meOkvF69V5KC4%U)q zL)`C)2PvJ0n>;hz*FH8koEj`nO6R`|v);8u4Czg1p~v+-+Oz7-0I1(!C?RyUog`J# zXCaYuc=e+>W=I2Wv{Jv(RCGkOH3xcdr_qB8wF7Ep>Jay}XQ1wNvF{@2^?7Ya$WMv} zf+SZ)bh{x15+-QwTuaL$>j8ctM=UM3i-#^NH=X9=<7r5jmD%WaU{+4nH|_fx9QE!uUl7?m>=;e{Sb48cQgOXmQ9!j4iq@mpQNW;y zJKO0Rxi^vv-eX3ond)URGoeHMvjT)^b|nd<>LQy*zYEFaxhM#p3IIEls zaqeFzZ`^P8*;{CBY&3t~C2qi4UQ_>BBWZY(POCrdV!`v`eAMQeK1o!Ui%`7EGX{)v zC_0M@vf!_N;q|Urhu_kC9lf?)y3vFwD##5IAQ6G4>x;iU9{PnIdlB;jjiY0lPZlS%N=IA zM0htgmN%ymRy?*I0)PzY6tDCEZBjUT7(3ZHM50-Ya~C+54YJ0IA!>0D++ z1(vASD5rgnTEFRgzmlkm5jGF01Rl1Ks;$&Tvk7f%iY5Q3-LFC z9Nig<^ejD#^sJ)zwkG#Xr%-ygU7SY`EeK9tDVHyhMC|G(k|*(121&)$yE$vjY6NrM zoSt$bv^o4DCd}2u)@dBrz|bIyhc~=q%g%=Z;Q=Lg#syh5I$UHLMi}WqPK98Q3HJ&K zubu*PI-tR`x*rhXC1$95{wk7kAQF%En7!K{>+&hQQiBy z1Q^ZZ9mi5m$lDb`-uI?ncj+l%nUY*8w{_c(be2BzXAeyqa$3LYDP>>wNgBBTIPX`{ zc5cK{O$F(uw6BNKj)c^+72xV=G(;dZ)2)pS0rlF zuM>dx-ZOjvQ74hSO)jfv#m^y()F}Z;RQ_T&s}kM~T^wC#p;@-sq1c^aT%X_-M~hI=Da-@oMpZeHLE<|SG*3mL<|fe1#P(_q;3;5XnMSf(<_ z03i$ik>(Tt0jjCV6+Q?Wclzto1~K{b7eG+{|G)o#SpF|_fIZPQOUJ7kt8mgjD92)+*7nSUDHD{4tZtM8#qv%;GwKKf>Q!3O_&UUvjNH4^mClZCoav zjHm8!A1N#vQdVRBSrhTU-f{hg74hU<>nqvfg- zhc37_n*hXrzdu(PS||f+aNR_i6mFlkUlF`Hej!kT$ow$>eC0+3$JaWEH#Bju9^H>y|iu+9nWYS zkH8uvw^5rf%B!{i)WXP|&P}N`?muIq0)s*e_>l6#%k+gC))G2edcu(MkgvhG7=Ky> zQ^u`WrrqF{r!>7#2$Tco6*^Oo(xYhm8Chyu9{hU|Yg1r#mr33|X#;x2quc#QU17er zR60#)0pj*=&r!^FERVm*EhjRQYm9>}c_?h=IQ4eXv-8U! z$N%drd8GVmE8_&{C<7jv9AkBx@b#$loRyYQ+=M5}4I{?E$dCJO)!iE*}JSZ?{NN1yc= ztAtKRA-dq`j#ZcoK1oyM!t_Q|`bB^9MU!s6GW!j-=Y;|y8D=$54?L_y?DTgb(~@7jxDjuYn5(J(=Oe5#3l6vxHJ&XM5_v<-K;O$t!~ z&~~fVP^*9ArNbNTx*hxSh}9~jh6sFlJ~Y+Ub=~vJo3fvQ@}*Z8>brFZzf%641t1T8 zC69t!R@5t&uYIN_R;P+<{>ni84kGPhQ@s2%GEW^!$FZ(^eL>{!6XgE!(yE2Tc@#}b zlk?wJ_w>{HJ18TVD~U2LdPbw`m$tnJ9#g2$(+J%1sU!cp(qVo{_O<%1%NynAD+vas z8_~*{BRk3V9XMfZJCWtS+t@BZ5`IV>hW}rSw}jZx21YrLH+p~GYZyl*M%ta~*`=-LAL_D&##e_zPBI$~i<=!? 
z2-2xk3YH2VsptXV>7VuJr--**YiMphQnhv7{bokbX6gH>7>8&{F7QFCwhB2bz$m?SHT2((BHXof3K$oZu9)VP}afxL1V!rh>!$X57Wze{} zUs@^P^p6{cH9^!pJvC zk#4ftW`7rg1eT?)y_|f=5z|)F*QxUSk3OH6r2O~C_J-?+drTaQg4I91_~Cx`p(yb~ zuA9&Kayk6zgE2fvN5&PZpQ3--O?nkmA{x~G`J6gf0Nr!9{KL9*k4Ef}RBSkOsUdk~ zE+ga1JMZ~UyQ^EPjnS^_s>QsMn=iO&DSu1EVqnn|%~_X`s>E7PfB7*%b#a>NYd6!l z`rdkv!>TQiifZRl?`Gpqw@ml-#O4M&Zt5Ci1qT7eS0aCUPg@TWYI$hKjdR!iboOH9 z;pe7RE2&yz-`b4ihen9W6;RA-{F-rbyzde);A{}j*c!ByZkcI5bV6o)cIH!xg^JG9u@p(J{5E*|jg-B}U=7K$dcHnwslWLCq0>@f9S2;WQ=?2B2 zCu^{~OcTFH5G?vfq+US zu{J}0Q7O4Zz!lt8;_~mV5=($CNQjgds1WnNVJ5!ZF8FtEEIZJ)8;ZwtR<9eYO@Z-}Txo>nfw3obuc(u_{P(>{->xq9tb% z?UX6~{k|S>w99ZumXR&xj+0h6-^O?MoxNVpx*GH?DnQOzID}4^%mLiWvOpy=BseM1 z(7d&m(REG#`CfOxjboMu!iV(FjFa!EyQ9pHFa9WKf>F~~q)oW%W9ISlLh&4_UHlb< z4wVN^wUQ014I792o7PhrBU-+iwtJ>*ijl$X8S&uW8od*6Z_PZox8_MxW#Q9{yw8B% zP*qG4yO|7=3nBYEkg7EPqo&H-^u;e2vlA0TIpoF1zdI(CmRZ)>DvUIZ^S%?)t5xq4C|sRK|yDELc%iJ0A252k)k zHID*qisr*n?Q4%pI`q?W9jg)9&nxnJ1oZ!0yQ9#f;GN=a=xTL(G0vT&hg-sy3GNue Nkdsn|m)$pf@jn8By+Qy0 literal 0 HcmV?d00001 diff --git a/static/img/header/icon-opni.png b/static/img/header/icon-opni.png new file mode 100644 index 0000000000000000000000000000000000000000..adc634580e0ad1fca43393605f79ee0356111ce2 GIT binary patch literal 39635 zcmeFZ^LJ!j^FAC-FcWobbvUtY8xu{eiEZ1N*mfqiZB0C}ZQI|T=Xvh?^AEf~zH6O6 zSu1_ksl97g)voKR+8w4KCyoG*0}loUh9D^+q67v8K@J86J_!2>^dA8O0S(X}I9mw~ z2QV-?(!U>Y*Ft`0FtD^BNfE(st~zI#(0ZB+b3?{8WT~l~hz>IeZ3Wd#%zdra77LYY zO-#(?tBGy)O-zQr{DL{{0L*o?O9rHP-0k{?Yaa@#IPF4^lZT@I7H6mn>O zA#j`K8SJy}2Izt@r4N*pr5`hxwv9z}@wM5jxphMyBg z!s$fw+9~)n)=MPfTuPMC&)^%hE|d%PY0xkS8+px__wWEKGL0fu@%rXA4d(*dnfghR zCL^q|V8nuPbhY8}{dNY^Q!&ykBCLol9}$pH0$_hHg8+ey&<_g0z;xY|=%i3=FRK7V zp_-__XkxjgJl(9N1YvSJb)iI^N@W~UIGLSeoGp?dJZjrq?iAaXiNh+~jI|cowj7hD zbAj}pECP05lEi>g)1%%ajo50u?>|BWx@CQJ=QRqNeP%xJ=t4{>C-xF7j5@FrN6Ko8 z3uvThrM0q^;+iFkV!j6j%QE!#A2|-Dw$hS~TeQhpY0o+w3zX^m50l6_A=JRy#lh-x zLRf@O(GlIk{AUsZk5H?siOn27@x3h?*}s?)e*(fpkZqpz zG073jgx6~tE`IILt(i2HA90xtQ|A?8hYd@@Bh4dk>7E`E|3~oxyC`}V;eo0|j|2R2 z#5j~3L?<`BZmA#RscDT0I6uBhEe3|IS*kUZpK!M9za*7CTiiL2(Ur`c*S64-a{O1Z zpu3Q;FD(^Qjn*z2H%-YNoW<8DI@er1xj~a!*HkJN-bdrF#F*qjiS6_S=WNvnuNT4> zXCyqX)AF4_Cm>#3G@IamR!BfE5KL3ib-@3PeU>^tDH%WW#b9goceG?qiZC2iL2rSV z!=!&^Hrx827Hw`_00la?*1bX$x|QF5EC^&ot^xwoNnshnh-!m@yTmN#0DP- z&^`}S5u=dr4s%r=-wb|Os4siRiU1Sacmr#UP-Eak$cVro{$9vmAlm!;xb-vb4XOkm+HycHogdEPmPqHb89x~9H8)a=xW*P?r?bZXT0KEUn1%rr{ z^sS(1*K9njK4mY;(scF^3N9tXB83ZQI9i-`c??!_=C`t}lWX9%>N#XThWf8KaAa6W z@Kg~xr30$QiPpU9D$e*B?=Oy+5Ogx;7>zouPZeoHT1C0vxHB{%d3Apw{m&74}1GznM!odxy4{&)Det4P{g(y;a_q|HU|80W^T#SlG|)CNJ@ zMk^!YVQ(k6M&KL-{MP{lERpqoc7OL{=|e5YZ^v#J>J{10r_p}7uJj2zHAu7=K9a@d z!2I7#1&9V=n;Rg^KFuQ*5Qp>PcZ!38k?jaur}kTQ!wO>*Gt|yslm4GJYhbH?=$#4R zY%9(H2wHqyfimWuHl4K&7586&ySNtcmmdE<$_rQRtcM`@eS0!123tAhmi?&so7apZ za|MQ*65#~+|GiTeSr4J-yFbf7f6dgXTFTVC-qAhMiM7+0=Iac)$HYiz)UJQE@JI_j zk6cbBp}g9oFMdB{+SO&LvEik3QkqrgCeMWUk8p70%S12>xOub!TDg146!p8tT~NGu z7sjKwYCxid`Hz$T$JO~!^<-;yb_O+RAm?|z^p}bU3iREEo*?gvAT!W(Fkt>!G0?^& z?aLa}InC3iFFN$31BWW5nbG`mLLPojEi|a-V*y#LL~Q>ND$!V1{tF zUVS=epwv}=25@M~2h0PsB>(kxKj0>bxcWR+0V~~|L<&|zQslB|MRNSF9`OGgp&x|+ zPC)%2p@c)hN9*;5y38{BZAOkwjf~*_X7?nd`M)s$IZWNKdY)+9ZreKuE+oQAYss#X zLk^Ff1u7nnj^H27_;sV|{p|T3z&cbVJonLSkY~XnbFw_+1H9gX@&+i7{xjk357_E? zy>Y>1!_h1)Z(^BaAImm%>R68iBN*F%XDZN52vN={;qlrFcWH%&&SFRRk1XFwC=MDp3#1vMI&V@LcNYt! 
zxG*CnL~s9pV+R| zrKp;XIQA)nlwBNqaEyN>fx!kwg8h99s$|5jZN9bsOoyt?P{*=yUZf^=JsGmf+?(j-qHPQ zAke!OgIG2XIo)yI@g#>9@|~=i@86xWSJ0Og)&gY)n5tx0RXDt8G9qrCVJic@{O@{V zk?8k+4`h)nSuCyF3$@&aQkSvS|8J56x(5dMY6irB-XBNB=A0@`X4fT=iW2eO7O7BuX4e+TrbK?EcI&#EAZ{`<`zlaanCJn-MJt^eKjz#tf9d>ddb05#*g z5;*<`7GHGXvAF&>qrhx13zT`F0dr73Cb6CFqX`u6KS3ddoB?PcOG54kz?{WnaQ$Zl z;DX3{vi@fr{weT|5dTCQUpRr3O|E|<3i20lSkxXFXfB9;=fq{=q|tjMI-~o zjD86Fk6yt7;VSb8zvYnr>r%+%&7b~NGL$e_uLy&(U(~;90}03Z|Aiy} zf39{VB9K-?1tEm{3<=%^Kk%PkvV725f>%HA*xlEs4kh`n!y{f==qib(~#ujn|c!jvfiFqt)?Oh4pl8zuvMWvTTRt!>{iq1{nL@^*_;F&P6T6(uB$ zPGg)v)?0NckqP3KaU*#~iwf*Nyd`6t1qJWd^093nx3g~tv7O31j*6c^ee#VHuhm=60t>Wo=i=;VbsX2m|AqHNG7%^ z9&EaF2CYIVs6R(atuc(uKs-ZzH(?l9sX*+=sbvS`>K6ujYsn@5`C~ESqMm84oZa9H z9U*El#%f0EhLK6T^24!U>C3%L*?N}M+=|kX$?Bo{ntu>26ThZTa0PR9bD%l-G$Jfi z6j-3-rg|PZGAsc&0FSPbDAE(iR3kxy9s#t6`(QC=+QtRge`{mWxMF=(PAs*IFDc{# zkiOYBl*<>z9T(b%Wm_=%5fa>GzU<;XEgqSSOMFfDSxf8NM_ZAN5=o!Vk|H6Svvvfg zKhwHBBhI=CC$~g;%Q4)he3m@@mC2SI=pBAotXXg+{u=`WO9O&G>=z0YgbY|Ouvp1Gs#w`Aip2PxBI{O-QOt&K^c|K6j6ueh7>=rvqLtLjW}`EMhKu`T zoYx*NBFX)hMf-*0UPwzMclm`3C9t^w3VpV+z5+3Dk>5oYnG-9BnHpAQFk-YdP?a%^Wn?sD zq*++6O2XNK#Hr2u~s;4StR8v*(92K8ZnMEW> zUiV=2kvwii!S;PWru6CDLe%_Z8J4;7jBz`Mi1*9t&*ZVq)FkOu-tugmztxz3_IF2y zUeL0LEC%Mb^#V5=!#P~jDl z*$UbG#ARWB7A>geSx<_cEswZ!+oU{LG}TR$V>Erk3;Wu9Y(WnCC=S-!&+ko%pARLr z6Vz-Nq5xL?_^pu4(DOR3{<=S#iK2iem%pmWmB?1x%=J`X#oo@Z%SeiJ)$TOU!b9cgg}N2OqrIbN5+a0IR;3g0qno4;LK11 zk}-adKO@mjmrK!2ojQHCsE@?oc_ae|WSxF%i{!^M3^FK5n2DPNr?qICI4Z;D=`XJ% zzp*BS*E)!*|4Tk}GeaPgXW8{?)JW+8tFeawTTOcKI`i?&yk~W{qq}6B@yZyb%-_~( z!awc26Z*wsea)XwaEatCDvvr_e(Gp*3URVL=T7zS+H3B+CYi#e!$ro52MZiY3AGiX z0oV?)*2No(&|xS}r1V6N?uwF$?{=gpl`rMUMPK|BCC92efRdY({r*MG-O~O=>qqEX&MqBfBzq6Jn zZ2z9AnpDqglqn!gvuaDM+zz(Hg#Q+@a_$avmWWIv5UuMX)j5Ix>A4JW~d)1>R;a=(1&Pba=KGlD|^Oq8`nN zX|`Tvg#P%Qo_STGMOG(v62nM`{KyIp*u;Cha7!O~zi7PUr8s{t+WVl?oy2pAA^=dT zZ}+gGhW(wJ-Mm5br1j@8uh6)|Fj1h0_b;Bx%bN0$k_aOyDyFBu13M9>5VR)#M7a1n zYO=NQVY~|TD)(?2kA&IL+zTpGT!W3FXEK1|P6$KTucPp-iMXJCP6OhR^w`$w>&M;@ zs&B6(qnD$MGk?PwGPqm&A2I$;r{&BI#lx4Uwl4F|8bi$wMZ^eUpdU2!fg%&$UNI6BTct z@l;Ll8gZh>kxT-UL5YLwQ~sS73f2{IEG1c9dunRC(G4tITu>gj?-x*r!iBi)c+$u@ zeQ$mGFbz8}Ee?W$r=eWe>u(_jg4RoD(R36NqTR^QB@AePO`Ix<5r-;H18j0rz$gaI z<1`-ZGun4V?bVis3zxP|)7Dg|(O5ZLuLUF9xB+8Q8jwl%rbVZm|BL{f*g4N-l$4bH zB?xOTH%}kTwDI_Wj9w~wmwrHoYp?Gz5XO=>qZ7;^dZDG$qu4s@rHw)mFF~MIOuG{m zWY3xSQlCP7xfyUoL#62c{E-1Mblw2q#WWqH^L8HyJ|jaxkV5Q1F8_GtgiCNJL9er-YwgsV|KtW#}}GBUjY+K9#n*irfUPhH zap*7-$#a>n(pm%A-=2ZHzg#Gc2G}Lg0Eh|)d|J-Qs~{+fYso4z)qquezkrChs?2Mf zQnQfJ4Trjfgoxr4;L$PYtf;aY)3;*)+gMh)`}~HPO{Hb)+YMv);~#epg?qfESGo-Dn`7ofv9m8IRl>1I@yd- z$ulL#GGxN+zQ&TRqtst26@pwqaQV^dPPuLA%IeOA)~ATbiXFFA=BN*R8LG3Ku~w^G zN*OVMF!953fDu~0WHy1fb6bu|>Q_&Yi~2&!1z<^v>!T zOx~2AuQrWw8{-s`u55~R6}*MbC6!cio~aV0@xCo|fhgdwwZJveiC(qXJ0Rjag&zrz z7E++3o?9Lp2}5gEIwzysO#<_%^*(Okp?MC7H48O@xHV9ee)|wUX+1}3^iqEia7nbr zu%X465l+3}wL^hH>)v( zB0lL_WB^nxWv({c#8UFl4`y2IE;Q=baf{X{&70n+rt!CVw8zXxaeO}V_bcHeU6jqU zuGjrqvs0D-NW>GChF2Oix$$4d*dCi z@j80Kk77G<%yvAf;~*F#ueH|BmyGPJ^?N@R8+o)iXsQG}h@~V3OgKnRp+D$a5T{0i z6BSFP6K+0Po=e0&>GoVAlD4t=&dA8{;Lnvdg%_`8uWJV$U>l#s^85npBic(GuaC%( z0>39jA~N|n??2Sh;eUt&ejr-f-uWC8%eFg6Z!=vkU9vC09qC;IF5u2RghLVKo>SSG-_j+ElU8x_$RCzvmM*T}g*B$K zZI|pm-?|WNz8=%}VA1g7w6Hb+?GFApjJ`)3#CeiETYT{k-6mdKwR@f z`C$@-U&8goql~)%Dy4ZogQ7c?gkd_ks&T+5g2UzYkC)En9;#u^-Is3yDUTC$s&e zgoH~8!VVR+$M{EV`Mg0ps*As!InVfF+;MyTgUU%1_Y;Q3BvQU0jLb1rkO(eIBhk)= zQ#3Ws%1e9Jwo6yW-o;%AC6g$w=gY)74Hi>s9zGvO+Wui&@N!pt@~^C#TjBfF)?{tB zLP%NP#-ZLx(e5Y`buV9xL2(X%ID9k;s&sQYpdI^Yxr-o1^onMM%PxX$v-j-92nbqC 
z4$N4Ii(qk@ovdi?yzHhCu|~f%*2AtYTBMyYN?*?^x?oE_T8TuZTEzAqD&#nn2H`zyN_T02HJtx=gmG*m`;|*Z+sei$pmUOI*>QQ4_e zF_2wk6f`L}-H}Q$=e6d0T!)mqmOBoED+7Nfe^E@BIq%1ttNe3az)Cfc_7e@QJiMG5 z|B5PHM@xE(Orfd%%<`Jo&7gWMrcag82eGkUu@m=Z`e8Ctw$mCs;`CTP`<_O6W4iys zMz_g3XGx)FLliFy1*;)NAjNg?>;43eY^%5p51}k~*Y?H8Q`%rk0Ff{F%5OwyGKKUN zHD24CN>P681{OSWu3bx(sy-<$Qb4q-3O^;em=olZWhuW&!XM_*?17-Z57)jQB;tJuDKTaJ z{a&dFW#Gr0QjiY`4U{&z466Dli2j7FB0aSbkPCeVZN6(dZk9dFyQes<9X`E%C^G~# z%G7&XlxJnw961&t_o4Jvnw;^kBG>Anm+hBaUU-E%Nc*+Y5-@l8RCHGqa80qJ_?eIGxz~NRg4b^E+`Y*>?X>ZBCu7@bmZ(ht+10 zR8-}K$@}i)C!{Ib<%h_jqQXQ zZSkl&2yCIyHmk4xj2C-2o2gZNJe-@fl|3J{Hlcgeol4xKsirln4)@7YNv`I}#684U zsZn1ay0R-HldoTeFW0#piIGI$rn%Z5<;Tm@cWA~Ev5TJR_3<#=__B^n>8CBWTDYs@JAb zla}3BtNr0{UVd9uOHoKS9n&Sn?N^30-2$k zlQ3eh=~-rx6ZT+8^!ug%(ceV8Z%vabJ6W( z`-5n*$Oc0nFG;OPSX*g*h>PVpZ)mwtiE|o+}_gHkr%y6*xbXOV0UZeVG>QWI4HkG zTj8>GsNB@&j%A$b6s=CqglMJ!hwp$SPT;3f-DP>ryh&Bok0V%7$Zz0;GNE4s8Q>Ug ziCGV#hy~Y4pB$Jv3nMdpfborb2H>z^@l{^LldlDSY&(Dc;x!+qzEafN%=0zo^W#NF zgtE{oqkwPLfAY51S!wNt&W@G})BCfvby+`Xi8UM2MMjg|>X`YYN0n^~*Z3d>k0pPr zNYjfDP676V@2bw+TfGb~uJBSK2cA0-Ys)z}?5n2)r>(0Ik9WT0J}8^M1u&p2M+W}l z$4d{~kRi`fwcjY1l{zOb7Vrp7i+cXyZsAy$=5x#w?N5VWVKpP~9D}t*hSM0?ZN_R} zl${-se>1SMZ$|g^#e2z7SPDf$X?^jIcVOC;qJhOOUY#U3pgCJvnqKT~_rqa1 zDjL_i^0|IXwRU$D0&vqm>HVafk%#@zcQxnX53)k$h)2Nbrc^A8hlvi)`6OA$zBlp} zB%m8i-Ph6Tgu7N>^dJbWOYhTOnqAj<4-<*+2%G@sYA4m{Jt?s?e3<+49&;E7$(XgU z6>{kJRnp^bjHkuj^TJ{D4E%xZz7$r4?jTzFRnvx>)MT^rD0b(7R>I*y*yc=PnEiP# z7#_Qw(j2xI7*6g`o*PNNF_w`~`(s@to-Q~27tytJZF8suWV!u)_|%I1jAy&|^CfDFW%J}jS~6dB)cfB&f) zwJG8t8{Di%duzC#L;8KV_{RnBtf(bWs5{vqD|L&9eo+$jt?pwC<=jj%cMUV`ne5Dj zrQ(z)KxlVi$~vCi{zrjVrfOO})qFFQLHp~rh@G6=;oC3T z!ImL{lt#N-9kZH%TS-Q$O; z^>ACln~`%)k=`9#)MqyTIPtL{WIM5x^eE8k#gvzbNAK@gA(w5}Z&Hb-rM0NZ zwR3n#EmITUD-L@x-D}jY%sXX^d6TLsm_A}B#H#i8Oz7dGT$cy6*tEBW(38%!^K zYlv8xu-2M(V+-otPDc=MjjT>0dk{agg4L0Yr(>v_vTRz4bz56=#i`x4;XeBo64u+DR? z;3&mWun8P4y**iL%Nn4nsRk9CL;SW=y*82r~%PH261e5VoD~l^#%8 z+yiT61T$QN>9!~Q8N!;WaNGJkQFqCLuP88hnKJ>Gv8V(dtvi>#0uOX&f_Ks@^e@NH z*>u%7`0(xpdL@I9)*QxPAn=BNCcRVggi9+Ut(tWkXmL z!C<0v=*?*&+|M3kG-wMwaByIHbkhLj{TRafHCE)aIVlsq(K*X@*{(XF%F}p%dIx67 z+Ac)B3y<`iU!W!4sgL>o+@^?e3MBOx3*6~4G3?jqC1PlOo3pkXoMm-^exarHaJ*4V z975-dWuuw)UZ}e<70V4 zVI=6r>fyfJ1fZ&QhjD{f=nw!^Twxsz+Rrd9+H+Y}RBqmRj@s*M3uxDqq8n~B=bbYp zVl`|Ib6;6+_oe1h9;5r0S&0&^T3wu{Lp*!8kCUA;#31(SBg3d7j&Qt~T*RJ}0jYsJ zgSNf`r+;aw+m?NxXT5U4VNAxPi7wx}l&qMiS^*MO%3CpQm-x)``$iI9^$`c=({{HI zKxX;Z%2x#zpfa`(Z6c+2R{*X*vITrVl%jWHiWv#YMup$hgA`5I5j|+g0Yk1@? z2zytVL~UgAOJnqt5g{Nv=<OFk|n;Er<(! zOp7C527v{J(f{ey3XZ>|qAcLikgOzdaqKzKKaV0xCp=}rQ1My2ejt0WYX4ncSGp-o z0&1oz<{`S|Cq#=a*bkYH_`YzL(fQt>T@}e)xf#}P$-156+KpT+g7LQ|nnsUu(XMDR z@PKE#7kG1XVDyHpzlP-QLe(T@u$DK^@8xr|oF3VcWb_yVh0loti|&U)h<1`A+p!ZL zY=N8XqSd|={|qyVvCA)?yMmB={An57PY=YP!LehNjRd8&22`ec{tz>U2Lzn; zB>i5iO`6j-p>KCsqkX#zK#l1m_r)YA1)%wD**@5j)IyQlxOzR1B8+j-mFh0v3-q*? 
[base85-encoded binary image data omitted]

literal 0
HcmV?d00001

diff --git a/static/img/header/icon-rancher-desktop.png b/static/img/header/icon-rancher-desktop.png
new file mode 100644
index 0000000000000000000000000000000000000000..2a204e899c94615e003c06290d8e708e6af040ea
GIT binary patch
literal 8307
[base85-encoded binary image data omitted]

literal 0
HcmV?d00001

diff --git a/static/img/header/icon-suse.png b/static/img/header/icon-suse.png
new file mode 100644
index 0000000000000000000000000000000000000000..3d06203a7512fbd917de407a1370f4c2783514be
GIT binary patch
literal 14008
[base85-encoded binary image data omitted]

literal 0
HcmV?d00001

From f2dcce0b14ebc68a8c7c3789c35afb9d122c8664 Mon Sep 17 00:00:00 2001
From: Billy Tat
Date: Thu, 2 Nov 2023 09:55:00 -0700
Subject: [PATCH 13/65] Add K8s Distro landing page content

---
 .../kubernetes-distributions.md | 30 +++++++++++++++++--
 1 file changed, 27 insertions(+), 3 deletions(-)

diff --git a/versioned_docs/version-2.8/integrations-in-rancher/kubernetes-distributions/kubernetes-distributions.md b/versioned_docs/version-2.8/integrations-in-rancher/kubernetes-distributions/kubernetes-distributions.md
index 7691015e2c90..b8e6a48e3252 100644
--- a/versioned_docs/version-2.8/integrations-in-rancher/kubernetes-distributions/kubernetes-distributions.md
+++ b/versioned_docs/version-2.8/integrations-in-rancher/kubernetes-distributions/kubernetes-distributions.md
@@ -2,6 +2,30 @@
 title: Kubernetes Distributions
 ---
 
-## Kubernetes Distributions with Rancher
-
-## Kubernetes Distributions with Rancher Prime
+## K3s
+
+K3s is a lightweight, fully compliant Kubernetes distribution designed for a range of use cases, including edge computing, IoT, CI/CD, development, and embedding Kubernetes into applications. It simplifies Kubernetes management by packaging the system as a single binary, using sqlite3 as the default storage backend, and offering a user-friendly launcher. K3s includes essential built-in features such as a local storage provider, a service load balancer, a Helm chart controller, and the Traefik ingress controller. It minimizes external dependencies and provides a streamlined Kubernetes experience. K3s was donated to the CNCF as a Sandbox Project in June 2020.
+
+### K3s with Rancher
+
+- Rancher allows easy provisioning of K3s across a range of platforms, including Amazon EC2, DigitalOcean, Azure, vSphere, and existing servers.
+- Standard Rancher management of Kubernetes clusters, including all outlined [cluster management capabilities](../../pages-for-subheaders/kubernetes-clusters-in-rancher-setup.md#cluster-management-capabilities-by-cluster-type).
+
+
+## RKE2
+
+RKE2 is a fully conformant Kubernetes distribution developed by Rancher, designed specifically for security and compliance within the U.S. Federal Government sector.
+
+Primary characteristics of RKE2 include:
+
+1. **Security and Compliance Focus**: RKE2 places a strong emphasis on security and compliance, operating under a "secure by default" framework that makes it suitable for government services and highly regulated industries like finance and healthcare.
+1. **CIS Kubernetes Benchmark Conformance**: RKE2 comes pre-configured to meet the CIS Kubernetes Hardening Benchmark (currently supporting v1.23 and v1.7), with minimal manual intervention required.
+1. **FIPS 140-2 Compliance**: RKE2 complies with the FIPS 140-2 standard, using FIPS-validated crypto modules for its components.
+1. **Embedded etcd**: RKE2 defaults to using an embedded etcd as its data store. This aligns it more closely with standard Kubernetes practices, allowing better integration with other Kubernetes tools and reducing the risk of misconfiguration.
+1. **Alignment with Upstream Kubernetes**: RKE2 aims to stay closely aligned with upstream Kubernetes, reducing the risk of non-conformance that may occur when using distributions that deviate from standard Kubernetes practices.
+1. **Multiple CNI Support**: RKE2 offers support for multiple Container Network Interface (CNI) plugins, including Cilium, Calico, and Multus. This is essential for use cases such as telco distribution centers and factories with various production facilities.
+
+### RKE2 with Rancher
+
+- Rancher allows easy provisioning of RKE2 across a range of platforms, including Amazon EC2, DigitalOcean, Azure, vSphere, and existing servers.
+- Standard Rancher management of Kubernetes clusters, including all outlined [cluster management capabilities](../../pages-for-subheaders/kubernetes-clusters-in-rancher-setup.md#cluster-management-capabilities-by-cluster-type).
From faac2fee68237509399102b33c11e87857d0bcec Mon Sep 17 00:00:00 2001
From: martyav
Date: Thu, 2 Nov 2023 17:52:16 -0400
Subject: [PATCH 14/65] added css + elemental icon

---
 docusaurus.config.js                 |  6 +++
 src/css/custom.css                   | 72 ++++++++++++++++++++++++++-
 static/img/header/icon-elemental.png | Bin 0 -> 72382 bytes
 static/img/header/icon-rancher.png   | Bin 1329 -> 12024 bytes
 static/img/header/icon-rancher.png~  | Bin 0 -> 1366 bytes
 5 files changed, 77 insertions(+), 1 deletion(-)

diff --git a/docusaurus.config.js b/docusaurus.config.js
index a00bfc10b629..6db813113de7 100644
--- a/docusaurus.config.js
+++ b/docusaurus.config.js
@@ -124,6 +124,7 @@ module.exports = {
         {
           href: 'https://www.rancher.com',
           label: 'Rancher',
+          className: 'navbar__icon navbar__rancher'
         },
         {
           type: 'html',
         },
         {
           href: 'https://elemental.docs.rancher.com/',
           label: 'Elemental',
+          className: 'navbar__icon navbar__elemental'
         },
         {
           href: 'https://epinio.io/',
           label: 'Epinio',
+          className: 'navbar__icon navbar__epinio'
         },
         {
           href: 'https://fleet.rancher.io/',
           label: 'Fleet',
+          className: 'navbar__icon navbar__fleet'
         },
         {
           href: 'https://harvesterhci.io',
           label: 'Harvester',
+          className: 'navbar__icon navbar__harvester'
         },
         {
           type: 'html',
         },
         {
           href: 'https://opensource.suse.com',
           label: 'More Projects...',
+          className: 'navbar__icon navbar__suse'
         },
       ]
     }
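Each `className` added above is rendered by Docusaurus onto the corresponding navbar link, which is what lets the CSS in the next diff target individual items. A minimal sketch of the item shape, with illustrative values that are not part of the patch:

// Sketch only: the href/label values here are hypothetical, not from this patch.
// Docusaurus renders `className` onto the navbar link element, so the two
// classes below pair the shared `.navbar__icon` base styles with a
// per-project `.navbar__<name>` rule that supplies the icon and color.
const exampleNavbarItem = {
  href: 'https://example.com',
  label: 'Example',
  className: 'navbar__icon navbar__example',
};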
diff --git a/src/css/custom.css b/src/css/custom.css
index cf2e5deae3a1..304d013fe7fc 100644
--- a/src/css/custom.css
+++ b/src/css/custom.css
@@ -4,7 +4,8 @@
  * bundles Infima by default. Infima is a CSS framework designed to
  * work well for content-centric websites.
  */
-/* Import fonts. */
+
+ /* Import fonts. */
 
 /* poppins */
 @font-face {
@@ -210,6 +211,75 @@ a.btn.navbar__github::before {
   padding: 0 var(--ifm-pre-padding);
 }
 
+/* Navbar "More from SUSE" items. Thanks to Nunix.
+https://github.com/rancher/elemental-docs/pull/235 */
+
+.navbar__icon {
+  font-family: poppins,sans-serif;
+  font-size: 16px;
+}
+
+.navbar__icon:before {
+  content: "";
+  display: inline-flex;
+  height: 20px;
+  width: 20px;
+  margin-right: 4px;
+  background-color: var(--ifm-navbar-link-color);
+}
+
+.navbar__rancher:before {
+  mask: url(/static/img/header/icon-rancher.png) no-repeat 100% 100%;
+  mask-size: cover;
+  width: 35px;
+  padding-bottom: 7px;
+  background-color: #2e68e9;
+}
+
+.navbar__elemental:before {
+  mask: url(/static/img/header/icon-elemental.png) no-repeat 100% 100%;
+  mask-size: cover;
+  width: 35px;
+  height: 22px;
+  padding-bottom: 7px;
+  background-color: #7100d4;
+}
+
+.navbar__epinio:before {
+  mask: url(/static/img/header/icon-epinio.png) no-repeat 100% 100%;
+  mask-size: cover;
+  width: 35px;
+  height: 22px;
+  padding-bottom: 7px;
+  background-color: #004d93;
+}
+
+.navbar__fleet:before {
+  mask: url(/static/img/header/icon-fleet.png) no-repeat 100% 100%;
+  mask-size: cover;
+  width: 35px;
+  height: 22px;
+  padding-bottom: 7px;
+  background-color: #00b056;
+}
+
+.navbar__harvester:before {
+  mask: url(/static/img/header/icon-harvester.png) no-repeat 100% 100%;
+  mask-size: cover;
+  width: 35px;
+  height: 22px;
+  padding-bottom: 7px;
+  background-color: #00a580;
+}
+
+.navbar__suse:before {
+  mask: url(/static/img/header/icon-suse.png) no-repeat 100% 100%;
+  mask-size: cover;
+  width: 35px;
+  height: 13px;
+  padding-bottom: 7px;
+  background-color: #30ba78;
+}
+
 /* These styles are authored by bravemaster619
 https://dev.to/bravemaster619/simplest-way-to-embed-a-youtube-video-in-your-react-app-3bk2 */
 
 .video-responsive {
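The rules above rely on CSS masking rather than per-color image assets: the icon PNG serves as an alpha mask on a `:before` pseudo-element, and `background-color` shows through wherever the mask is opaque, so a single monochrome PNG can be tinted to each project's brand color. A self-contained sketch of the same technique, with a hypothetical selector and image path:

/* Hypothetical example of the mask-tinting pattern used above;
   .navbar__example and example-icon.png are not part of the patch. */
.navbar__example:before {
  content: "";              /* the pseudo-element needs content to render */
  display: inline-flex;     /* gives it a box that sits inline with the label */
  width: 20px;
  height: 20px;
  /* the PNG's alpha channel defines the visible shape... */
  mask: url(/img/header/example-icon.png) no-repeat center;
  mask-size: cover;
  /* ...and this color fills the shape, tinting the icon */
  background-color: #30ba78;
}

Because the color comes from `background-color` instead of being baked into the image, the base `.navbar__icon:before` rule can default to `var(--ifm-navbar-link-color)` while each `.navbar__<name>` override supplies its own brand color without shipping extra image files.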
diff --git a/static/img/header/icon-elemental.png b/static/img/header/icon-elemental.png
new file mode 100644
index 0000000000000000000000000000000000000000..666d0f4c41c0cbd4a04bae739c0a4be5e283c3d2
GIT binary patch
literal 72382
[base85-encoded binary image data omitted]
zE}7W)pz2mx|MXq+jF*-}`o4+BR;$PZStf%0LLHV?=f^H51?JsuAeG_S7q`@!pT-GY zM1!J@$@I+*YU)W$={DZx3Nffoh2MGUy_{Xrp8x8iNZ{(+`?;Ps+GX>%yip{n8pNU7 z#3uR%{IloFHYh2S+&C2+~NqKmKhG{t`1{h|# zX>PNG^d;%}r()Y#7{KSA09;AYjP613-?Y5Bcoh3Av@Q7LwKy4#1xuCYc2RB1OYoJh zSGZpEzRoas8PZ4Y>$h11B$7vv(8@Lekqr(#&`0(;ZLgILpd6OdmkPk z4CDOh9bUKGk(kHv-7yPE!tJgZa{J;mEjS{^AVcP((`y#8L$!S*&-lkXdq7&`1ethH< z+MLaLt;o8-vg21colLyBZAot4S_62NfDna&q754x%Q;n zOEheB7m+wKoyfS?OiP zSau+M=ZKuZ!xLNni6Dr%8~lxDx1mWP5e|v~c)n zMynyfU|F!%U$SGq#pic@GFq*o2nfNox|!Qb96d*iIMt27!~!{m$pf>nSs_*FeS?_i zOJw6m4%S%VUbZfgsXx9#N7UQ29Nb%~|}=uE(TkIPLJzk8C8%9Py-?xq_z1zA5z(@NZ* zJ$KXw4sFY$fqY72Dr_h)7~jP}9*Kq~@(Qbx%`-&#L7_Nun8IUdJPM!7eyov&{YuwKR+HSz6Cv{dfRYqPDNpG!a9 zx=gX^{@d@TWe}?r|5xETeS!QPPMfIjm>)dFQvj3jr`ji@tavra*_Y2jYGOA)Jx;yS66NpV|c6;Hb=51tr{FNP$`TPct74`~X4yTp*J1bWxd;&hLKmq~t4V zbATUev%+{c^~)URaQ~PLI=qj%z0rOxNg@qW^}c2_kL_K=HrqI0ojc$GgD7rl6@FZZ zJYdZMFpAF*SvPk}nYWI5$cJZfT4%HxJGr>NDidf>Uo-Vyc>J0Zue#hpMPQ7$~`}ujJC%-`aLDo00?&+P463j52?)W-)U_=_PalXKMeuKBB7TD>90Gmp6V{{X`6Qw zdZ2LLaq8UNm#PbGrs#+aU+dzz`YhkGjCWgi5bYFrxlN|JwE>j8-2f6&inG%}6a%x5 zi}s5q@banfbpptD(BcaGDQNpM}zyIp@`;e?2tOnlC< zn|r$_6NwI`noBUeN91ll%usX3K8%-7Cf6P5-8vT+j=j9?5SpT|b9OCMS!T6ZV^vcD z5$C597cx^v#ylvG2ZCGHE6EA9L)f-qg3&*$FjtWlWDoP*Wj^Tf=VrL#Fwpnu;uEY# zk<45$(a)t!UR<~I=;!rx*j!)7dj<%1w)n&ujh}YU>rdytms~CrHE23^8IcZeiQg*H zWPS9?(k$w=7$QY|X=XEX2zat0zQD3!hc0Z%1t8ge;LDHF<0=@m^8`Omn1rS|Qos)LQiGeJBExIK5(9i&l+-=T77bQT05E6bi z-Mfz#E@n}3m*3Rvks!Wr0tJaivmQ%@ipT)V9p+2VrUoi8$e+IRRt_b5ZVkQdsB;U( z#0UN7lh|5)T_N6^Tbf3x`?>bp<|+@r4Gi;(p`-c9bWz|t5Jx`$?F|z&tW5${6RP@Y z8gfUUb6c0(qQu18bAZ>X%o+f2#C*6bTu0|lNoNTB=U2M}O)dy-z^ZR_VrH#vj`90n zP42`}fzQQ`1|XXN3I{N5wif*{{e4@%|A0~mw#rHZO;d!LM!Q>8T>vUIO?)=nJqRcw zhaUD?#=(B7#W6(3gX-BHfz5DOxE>D;c$AA9_{HSWm@AxJwC~hwzmFl%pX3za2V9qs z8prW{y;SGJh+z@<2N=mq9nv5)2*hxkPw-vRiGHI>8K<+encYGW=qJnqr0)#X)dcW( zxeo3R< zU$&eu)NdfIzK&*(V1H2rndz+v-0lsR>LeFP7RZIN#-LY^+k+>S24<3*q@tq(FJzh| zQKWB6F*!j_*2le|f=qEpd3ej!>Wz|sdB1`htOqNu+k;D8Cj4H(7;ptRDc81BojO|s z{W@E&fxwvrq0(8tZyCMlBqQ@#RirQ;ur$w~`s+W|%LHEsAxF=Ju`>87Q!N7^0ysiX ze;-mR16-&wfC*T5V4tQhZ*Ie+)uw_>WZ+sp_Xgc5axyX~cc)Seqlos94SjIhhZqbW zj-OFbvF9K9s1&|~gCUZDS1;6H;^6A@3b?%s{}myE1Ud1-HFym_*s)B4!{wA zN8A_kE(&wVbefzKIy zw@jb=$?7)lLYUI@uRAl{_+!spJdEsNzLd4>?;W8By)eJjpVg5zy>fFv&YS)>s-JZI zAA|av`PZ9^dIAA9jbEoM%b6RVv+g#SV(^dwDFb*j@z@$CMnVuDx1{-S6>Da_Luj$Y`x`& zDqK0@UQQiG3^Gfbu-W9sTOFc?f6IadISS1yBsErv*4rdBn2b9Q>Q(~(cgNjTm#o{O zPTw=cIFNyygVMC25o3zn1OnFHUgtsCN&qB?yN&yhsH{b~UH-3;FAVJ~r_6ZC27UYa z@u2Lm_^Ao}4$5&B&pP*<&~(f$@V{1^Xqy|No&9>s zjSux4%%hfiVIUo8hly&UGwMI$6d}Sv?5-xYOO?6=7S$ta(@+eB$;4_Orb?}UUHEbW zgvTcXq z&+n5*!%Xefz??I$`u7`Tv0DJUZhUaA)7+tQ6>y)GOj&>C;D3EOitage2}GXpOC%>j zK?+e7wBdQA#J`$IkO7Lib?S4$sjGdRCgHunah!=eE@<-q|M>GTba|*hfplg|+(Bp5 za`HyJZZ}TmeBgh}3ACMEN81z#x~M5y99n{ZJ*pYc8~WdG5;W%`D{1d6?&QF(juE36 zW9Iz%H`QNe*ym!!(Azs0yAHEW*knWAi-w4;|7O}24i}lu*hF8 zw<$CJH(5-Xz3?u9Xqe|dS~3*i{oz85uY*GV_i2pB3;VJaqj{Gs6PcZ%ZHGW5>Ho+k zihVBqQQ<@YoE(6bYO>JLmmvIaJFsIsXzlw~#O8-}E8FYs*97+j|HrstZ36ff4nFfn zmB!1{DEeJ8Wtr}vH*T6|Q_OlG9}!XI$4PKH-VfZBGJA<#O)+bC&4nhR#=S8C-LZ(8 zS`|5=x49m7JB+=(a_pxKj1w>JCwXeVtSOT~e4{A#QrX>@HkOls|6QY}@3Jzf2+Zi| z%;k9?yi!xS^#09nf?@ddNq@^WAN2+{BQoUGn>rVobH${q2JjJ?J`<&AHy9+0;T4z< zNRU}p&itSt){RA^!!}=kG7|ij02A`wtoKwg;!`zwaZTjyZqt(@a2npf5@0=z-;>*Z z+1oA0p33{WYb4r-ZK3`fP60XOSJw(<$czlnS%2iG>YJa%mZZ6Wwyz?7v*GA5I?nz@ zR;g--k0kj_gQ`le==)^xkRX9$s;sL8CU9`e^ykQe8C?+4YQW>xw(-SDNJPVD_Po|frnLLr#o z{l>a@nse2=OH;`tD7Z^rNQ3Rnz&XkliHUc%W!DPq31w%vP_3@CMwQB6fj1d6cJ-xJ#S^^eNHML#Gi1roTXLSu z4_be$Klt4ikM5QBK4fFj-0pCgRbaaV7yAKt+c9Fdkleglm4bUi)D5)!vq)tQ%ky-- 
zP|lR3k^5u~*W&u({R@!q>1eq?-1A=8l{Q~5LewL@&|6{^_>t{#)pq?v4(J9+RX^mY z(I_2a0xukF!IP7YmJ01R_bop?R4767-ROPBoP&=wyC3+r0|!o^2C;MwIS|NWM^%MY zs$QZP;-7%428ixaUCevn<;vhLe_3K<@%O9m^+Us+^tiP#hXn{k!a-G`y`-)M^ylji z;w$l#Gou2xpzOm-JW^;7ROo*0KsSOC^2;~Ls@hc!^__Llw@B*31WIO(FcfLj>_1WS_P? z%-{rVN@HBhTTT=uB2PdaBw1i@ux}c-tV}}R+KI_Pd^5>UJI*5?*(Xi{ZdpJl?!k-) zxp04Tmi2T6H`0F07`v>r+g2e#9uMyH5V0Xvs~8B7V(hDJU}H8L2A6E-c0X`^aWZDc zfxLezd8%3)26TDhx2gR{aeFfRF9ozyjJ5=L2aGcG3n15qksR) ziTa2c7P~3r$SHn$@`j}RvFYqncTrIGhVCKc7b0g7c+w=453xjU9ybA4TRd9kqbICK=@aWA zzjlNKnRB0lzIcoBm&RrgMIaZ<4DZRV`%g6o?0JZLLNwok%aYewRUtqo2>dF)be3NK z4(-;<>_?U848t5(ikE}g84fFv5+=lVuTBY2+jsdgph4ys?w;4hre#*e9bo7j+JKUa z4A}sj1%CDcsk<#e>>##H|IV6|af$zJ`Pw3J4|+aW)oeCm#|Ac*Ch;O(!kw7#6WlY+ zQlF!U(|RbOLVQaHEx|$p&(5=Ii2H+cx|c``+twsYa)DhbVE7r>MepB@m&kr&CCU1o z^#E>+gb#``#5Ms)c?i&*Y5#4GXq|_OMyD(hDYWP>38&DffCf6SvJ3?TDSfH=qtjzq zxU@cAyB?z%JoH-<2*d%E5kvqPgW(7R*V+xGUJBG9%c;v!0vQWrRp#d1NdiJ@i1nAg zLhR{=ys83gXRZjlHan}+#4t(8j3|XpO!VCyDQXC${w<5(DA@BaPw6Zo$fi_TL4`VYRW0buSyh`HPH zBr%(1jvR*8rJzEdQ6eb%%h;?U<0mLnq6N@EGJsGbi(~zO&~`JeB)fz6N#A0_PSwnX zTGVAe zchFh#N-a_!i)_+?r=6=}l!9W~5Z~&9K5(5Su^;wwOUBT?84R6d9w`16K{*CgMVms^D9nC%a&QR&YX&{}<{AJkWE?8+4at0xMSmu_ z9Hw-|&Qv8;fml33_KYIH*rx^vyVt1_p%%9wcHsq5 zQPK%3awjE1i`nz_|H^!KO9)AgwK^KY_NR$yEC8F}1eixym{Rbin*df!V_KkRP=bZv zZUW_WXL4)CDDXDDO=_wM>K1CX%p@%>2e29-V1~AJ1eHL%G|p7HE{3g6sxT(LqbwJw z3qKLFLOnNMsue2NV!`;qes^_!4^0=U723UC$H?6p@*lyFBR#dOa@@1}&|_p5l~mym z%fH%_Wt!2N&htqxZ)U6mpw749Ezry)I#B*^ns;;7by^3%G8$OD!cfFqeJtbqLfJX) zqJ3CdtLsj#eTZrA&sfq2pGld{lQV{UQwP2b4c&QH@}q^DK6^H@^7>jD)Kh6#l__$! z_oBO@@;FDDSFO9ZyOBzE%x}o`P&9cW{D}A~l%7#r)yS1_MqlZ2S={a!wwO{~ZF~FZ z_u5(w@@JxAirRsHRWpKj-bm}zpIaaI2e+z`5GCcAdd`(;=%e6n^0d&&IW)Va&Ep&O z)r?KY3(2+VLWDb?fn9d0e3`m$>dgOVB62xrKDi}bFDdTOGm3s{7Eu4i9y68Qb(WaU zdrWx1%x^VnINl?^EpcP-;rTEn_g0uXDX+j#v`qoi0$~qGBx8P3(N^1anbyFFd(WD}u zMgOakr+tPngP$GW$e za(Z-58(dxLf6Gw&z=?8$E{$RP0|{p(KD7*=gJhTepAH;&jwq>`5J$P`RZjTjt1$^6 zSoXyFi6DwkTX6enxI9TL`r!|IB7%;&+C4~%=lkg2++N5tJbY$I>7H8aW#_kHXAS|q zeebcUkPn|*9GkT!Ez)bQADiJS;$8aJz(+}^?RS^5SPA13e2Rl2rtkH9JL4itI-u&1h-_(g#;&#&D zpYHxB&!r5cnhq`?U+YDntMi6_-jIE^Ouod1$yibfJiOG4PU4G8CX%tF)Yd*-8q4m7 zI>%2hz$~v_Eh8Md6&Fh*rn{N8;nGSPbNsP`+|V(y^>!?DnYB}N$iUq{V%~^M=bOSe z=NeNyrth7(NY9fEd{uEOdClfC%V+DuP8N6<$Bl7nA9@YZ(7)Pd~a8e&q9h? zUrwgZe8&2@bZ~Y>f;qg&cfI!hxLw6n?g6BVsuk=zE?C%ygHa0QDYt86H_bBF>KHcH zC>WXEv33<|2-~ZS{DmUotUrd=4Vziab3JBelJI$Z%TOj`uQaAK7HJpqk59{o zT`q@HnGuHq6(z7)D3XRvzWf8m7Lr-naKM+~e`T1wK>8Kl)0X z)C(KAZcXSNig0Ers}R~StYq~n7RFm|*vx*T^rOj}^Z9u7cvkXSJ5Go|TM#gKbt8T} zf)Kn=rT74+vUXT+1W`sUE-c1Z!yjAaX2Xzi)8$0SGa#NLhrBA*Vn*x51Yj@2D{eyN@{JguTDvKnd)>~GG(d{mL$&r)nC)P$?qHFQa?>GNe$y2a%x=r2ooA`LG!!oy%Z9qo- z*$sV?gJH}m!#8>57U00N9@kIEaHp-#uhO+cM7-fG$7j@1Q)oKg z{lhY*O^%;(2T@u^fQ zX|`fNh}5K}_LuKrCJciv5TUKgY%m}}=!0M;)hS5%_tPep z{><<0Ouv|WeeJyn%$fZ@jsJQ}i?*n-x+gwlIHvDB#-wKD&5)IcRozuCz06=~AoFyz zG%)zh@PV?0psPDx^^KHkNW9yh?LozZ;VzG-*Ysz^f7bH0Gr*QpMoU?EtzQZE^x)8;$U%Z=pKG{LHwZ>5rDR996x+=!v84`Y_5ta*o<^%SPn-)P{$lLPAcu!02L!@i> zAHyf4(wz@uQD>ifq%MZ;rE>^C+YG|1+tUWGQX_JHto5x*@iyVe+3JwJCTu8AU#%As(@ zuxi#HzrFTnN|}ZK4By6_vqxZ_ZaU{j;NLp^4vh9QltxP}J`GisF#Pu}G~-8DXFav2 zp%=1%?q;&t9I@rz>@PdTI3N(8fJ8u!Fs{p;xAnrF`{}sL`MU3_61hNjq>1y!9q;}qi|?7 zw7{10*X;++hTka>{d;|eJW-)~yd14VF0!}*z&s!TS4Eeuu7Sbg>iX{RF&EpXl^>&#luU^Q^Z1! 
z2_I`GOkLJ1Qo7pxK4O`JPi(TE0E3$v@L1w=%g;ofJPh-9y6FOWN22V%tZ**J+{@p8 z6od!s7Rw~HxSh0oLd+2g-vTHogwRjX1A!=^d4xipiF7-?yj<E_@2gg{@jZeAH{mV+NmG{Rp_U56x%#4fL zc(aW|)`|Vt;t&X8DCpHsVd&be!1&$NFH03eEYMhN>Z38BeO_1OH)cv2eYZj)gAOD_}LDePeg zf)yPL3z_^BF6_ClyZm+={`*?YAv^U`m!_OlrK``$TQEwR(;u=y7!Uv|XaQTi69&4M zch?u_m2dYG;%2d-=HN6)sNU7yHm4^}F3B%}|0Tig@wm zb4TK>8Hr8KWz68va2Wli8xYmLqIJi_!WCfUGPw$Ji@3i%f9vH`={1QTK!E;>;W^+M zlB*H}_)A8lN3VU@Iq58pJBvM4vQ8cTifM*ujD3@QJ|W#1&=?NXFdw4y%b`gRl|qS5 zjQuUz9_1hQZoVbg-yjUC-51~Q*M|zD##ubF4)BXm7E}=@3lr$c>FeAp%4qbzSq;>t z=9`xw0gAP`k%aF)hn^O1`%OHV3Rt?n%meCj-_;=&&k!S07QUl?ZT-SsKIkql3- zqYZ|}!UP0evIn$;Z(hhXi4gie2Ei4N%Y^Xsc%un-C3vXT$DkSbCHEtyzljSxx@x3>l8`;4@_%PS1mFZ1H&hK5X$)g2`EMH$OsCH?M<@mCnNF zMNderV_^>_nKei|JvN+3(0RSKjTOWu*5(|)N_0ZB0}vZ0Hgz<~HNoaDEzELOTvXl6 zVboGs_gnro`^I)*%bwzh3nm{yUGyqrHjkGCU4{G=^Z#Q3u9v!l)I7pJq&A6_eGktX z=lGlmNSubts2M%B-?7kr+_B1HS+snL`^(@TTffdFJLki`E?U; z%J>Lq(>lTY*)K9}WtxBL6S8&i8f?(H>rv1LFeL&SROmilp$TP4B2TN?32rq@`wh86 zN9^ENw}jXJj2YRhmZg==PThhka&~O(zQdryTq9($jp2eoxMtb?GiSxQ>iW1K?^2aN zXxGT@@2Jq<{SGM^w{3?$v=0WRmAvkm7&YJH2NyEn3V(*b$e^F4qOi}Sy7b4sW~A5c zE=+*y_Ojow{{Rob7A%2P@$E1o`7Wug7TUwNKDMAR)*^$$1fi2eTr3|*O$Ztj z1?mL8e;SH7UmwbEJ-&~!vFBHpZovWe4w6ey2-31BbPkCU06)bAkfjW5l*2(6T z3tR%2>U5%r@fTgQlcfmd8O_H-ivEn1rg^vj3*b0MqNu5X|#4l>l9Y z;iXEzS#Bh;BI*roVaq#BQ_(SC*H(x|VK_B)&GX;H(O=*Atki6eqlFBHQpS@6m}~l$ zjE{a+E{J}6s-X8i%==?xCQ^1aYf08qi2PRs2mV@Er*4|`4w6pM6qS?F9&7zOeC6X( zR!mg+*hD6sbXrd1X}ht4#MDyX(V*vna@6?BPH%L&T0cH!NwuYRX4P=m&-w7*U*cKV z57J)A%vE0%Q_V9tb+^RZMwD&XY59&|FtatM@8kQuZ;!JNk`2~u^fFiD&4t*~Aoly= z+6iRyN33+pMBnPDOWo3ZIkYgAvTq@`zd(3BEWV3!E~&v>_td`J>vz*7f4QwTI6Yki zNjt{7{NE-&vKUd^I)~BP=;)2zIsGiTOK-g4Ps#in^}hXSt|`ekGN8+F@vLE{pXdg# zb3I=0{x@Qtu#KgA* z-EPHa{Q>HQlAc0{(|+uf(4_~90)4HRm3bvWZO&W`18sP+_xK^&QP zI&Ha}p+3GJV?@*YY8IE@ZSgjUSe=j^ctko6GLRtyRoUKio{X7O1~*@O-2%w$I3<^^J@bRe+>9A|t73CdV!B%twO(Bd2n&z@RETMs;3 zd_CCY8HnabRHq%w$-o@>2Et7V#O=m=Kv3__4^(m7c8WjlZY!`8^KRi9YD<%Q4YJAy zN3%kqJJQT^)t7|MotLJ&y1z?F%&2sP%5Xy|FPzp{g)JTnJr#^%{Qu6@K6GFf*?tsn zAj>l;E8ILd{#G3g6N%U8o3{|EEmudY)awDK&B$jz$)7)XU{77p;Kf+8mWYbF*-j#@;UYgBE^wxT`#wmQv+ZK~Uc) zw3X3RU!X6=a zx#)Cf$8@VIOgM0#_#kMy+Sg8+>g2SqlOOuE6(^}*OHO{4*d8gMP_`jSQ=8JCU`~N@-bX-o=o{mxp@?5Bnxr zLMgeK16ZslGmS@?+Ry0lU6NvPW#8U{x*0NTy1<4^inguZU+E*h0;ahnP*H?YHrb+S z#pq%x>#RSWO_8>zwT!${oqDt}dN+AC19jScGo72;ZS7rpPI>JYe98ae8Pjx2X|#Lo zJ3a!uT4APQ^O#nvSfpQg;G7M-y$VDBh&CuyoTkPvjWu2loB@Y7BpxZ1dSlJ{OBDmg z@O;9vYpm+xMk^t#O{MlhEW6rU|2MYyK#z*xY4Z>M zzYt*&AZRIk!QyJ%)eqsHnwxv?8Hgj1ZTgeu+~U2~Yeq}bcB7l}IvAmHQ>RuO3Qy=e zQle1gF12EFIINA)VYbNFF@4CI@1<0MqN|?^> z7-iDdLz-ZQS0Gh6w+=4s10la9U9I&i%<=kIdwSP@oz7_Wo9Z5~=P&N}-J0w^Rna*`!*3djm-)e*c zOGc7?l6CxunVJ{xr{y{3!XLrr#c!Jnc~h8-NF1Sja8rBfB%y%+wEUTZ{x0E7VTYMz z#ck()NW|$~fqx+u2h?2k0i%gr16Pj(31pWBXqa4EY>c)|%fF_+Eh}C-_uG~XyzqBr zZy*a$EQsEbI9zn%VebhjN-SfS$5uGZ!u6Fv%p@6EY)kxaz#Xe%@G>dvQpAqkSl9yV zDHY=*mij`JhQsc15iMf%kh1iQ4=k!yN?TqZ++vXiUV;;28#52X?ahADseSC;y^}O# zg8o_k5aW;lcwY1wZh3)|Bn>v6`b#{sIZ>Uyf=G4f6>$;A!rnkIaQ1+uA#Ti>mEzZL z(*fZ~t8WTLOnzpJvZOspTemvVbX+;_pl`Y?Y^6kIK|dW#AXnME>j1=wONw{5d8+om z&z}`n#=qO!2OPb8akx7)85fjHkE~7GoL=^>LFnd6=)TUUHve*Sh?0n6-;y9dGfx~{ zSDB(kHk(v28%Vgr4;UG58)=bLcf=dCTu$Lit?)>j(+J@51;qFfAx&;L!8M#X>NnP~ zl-MQPNxkfP-^(>Id?o9qfT400N9SP`e(_OcP8#gl8|NVfenZD+U}1rLuY_}-^dCD> zvffB0SW7r5%*F&YO!qbOB(yqy^X5XXoISQf2vTPtoB+*u&Z}_;t=|1QrMu<>deNBa zUw3VCN%Xu|7Wzzh+jG!dAB++DbH95G9a$2H)nY1YhVtz`rr!PaJc`&>?|2jO_ZX7~ z{-oLPh4%2A1*XQ(@k-UDr)@B{?~h7e1kh>l<1@?_uiTB!k6v#S4@J`yMzO@M#!TDG z1@Mu*DL@{{I~H!~E;qeUL=_eVharylI&VRX?#d=pDlq#ud>N%-Qh6T(pVwrzCK<4N zZbz!8_BEZ*ZN8`T;ND{}|GJw*d1W*H;!zvh6W;BUXi6e;xD!2m{XSmuo~R(|6wEv~ 
ziYtTq3)B}qvGkR6kUCTcErKnf%$-{LTqINr-GCWbfHpFe3zB$x1@jZ{O@5Mx>O;*q zHaW}}z6`w0cW^jfS=@tVa6c4R{RW~dfBvqAZp=Rz1%&%%elqy5%75QkIGpR83t39$ zYly;nN6z_I;o~nH-M|s1u;NsM<^4+6l{DeZzP?9;G(h1S%B}XD(Kaz58}{@T1G4t& zktuKL4+d)s{c+&#HbIknX{!`1jzdmVr?U;{qiQnu2w$Y|K?V@TzEjPG0!~X|7XBtc zzd{ciN*=zSlseb-@8;QF1qMrXtg26lC;C-5mf-AyOm7c7osnJXaaBjaJt{r}@m226 z!}M~&`kG|gtYqRxYTsBj<&g#A7~B+CiN&7r2rT692?hQNERi7XajMgTz=yRUOTCmu z_(>LJb>{GMaYbg8)5wT+tUikLHKkyoua|3ut&V?v$DW?5JLeJnai-?_;9RK4604Az0=#i;z^)aLjobGcmpilH4`tE}ScU)y7FDpOF4 zhMa!f8<|WZZ2?=O;44GZX#?(0w(nI<|3}nUMpfB$(H>elB}70#x}>{Hy1N9VyE{}G zrMsm;x*L>~?v(ECuDf}^d&fP0&vfl7|+?D2%wA*<&>T_Ul0jq;nB7I?i6q~C3_pyGiIsM_Tms8lwjSvuD0Gf2K1 z;>yS!^Cyv}8PpBo!y^4^bXoj19Hgb|U4#I0q9M^nu)en`+Uuj& zWM)$pn7`bAGDQwIu728CDAxSp__=cWZZp^WzUC%<_*3_(=Iu1az1(uU78xx%C-<}R z?+DLR*+f0c-hHkw;m8*xcq~5_iJ@Hn8&I6Bw+d;2C91+H3wkB0z(gJR~E(oogfCyBXwBV@9ffzQC)> zgF2WlJD$<)A8z(o;yj<=N%VUBn?;6Tp6Vz2(rPOlmx8XP#P@ND?RfbgiB1U+{D>!RpND2zE2H`7DODGcaIA5F*L*Q zi5-Yi!dCM>h6xd#t+)3Yp$Tr~0#WgoCKTUDjl+rg(tH=wX7PKOoQirXwNnnafLV|x zNVniMmtUiz3Su$$F2~*4H_2|ESi`42r0;)!kWOiY6Y_bHvLEsYSyg>Qr*C}QOnXIj zS?T6)5o+^>EFw7L%4O_m(I0iG!ecKuJVSs}DgX5Q14&V(Nn34($Vxzy0oFT-7R<#l zPU~qMy!V<*BrYGU2FzZggvm-7-h&8HHTLk8iEB8<*4e-OnVEK-l%}Uv7NwT@-;e9e z1+uEI*~}h&q@ceH>}t)JseY0Ij`b2MXw{GmAfLsqPHltJ#E{QQ`4Y!@gHtcslM4D|Xb+%<_6hV%9#?;%+PS<+Wy2SvX@@@ShGcDq7_=4RR5LARQlyiN z46DqTea7Hlg2Wq{B4dR z>3lBnY5nXfxdbcpWUCbwMN{poGEKPEqs52XL|@bkw;{P#s#4Rux_QE=pQGkmv~_5p zF{E>CTVyd%aF$_HyPisW3S8}4ZhnbfD2Xa@?it4=D_osPRc{lppe#EFafRSycY9=u z?H^L}sblN8A2ZXPC(is9FU?KUEk+CHnCo(%d>NmO^x$ui+dqti#yu7q4rH3DBg-EQ zzm}>q4|hv>YrtFPCphhRWJ8J)BU|dL7B45B=Sw#|qf#AtgmqE=^WKD{=53{yNv*rh zCk|F!`Nbvyau^wEm@I*pH}ie^_BBcID37O|UeB#tK+>^Rs<-oQv~%aoanK6K^8x@8 zXwsX{yHD1S;0~4bS!H8zl@UH+o9(+sx2M_lm&~+b8{aOXSwuxoRi|C&OJ~d(mC>5_ z-yCI%UlH&>)x(|p*>ZoLua<7)tv}RLA=l@&WBM{cv&YX8l_%!^=9rfnD$5$^2d`Qq zZL#CI9tml8gGNpo)Oi#`^-Bg>@vqvbrxUQty% z_Ft~Lz<_+slv&Mb1%x#{jb+{jl2+N0BaAsIF+O^Z9DIk}? zw!(?1$m{%k(Zbhp;2tve9QN}~d$qUc>Yscb-oOr$G-e*f@9zBCe={9*g86WNPTcPu z0Ap{^zrX9wO=e`5o9*4;sf?Qf;s-Mytz8t2l5kpr5Nu7B&5DqNU6Vr>mv{WSPa@l2 ze2?*YcU2ZEoyZ{9S;CB2pV~+>$f}yJS!kgesd;A`%s0=)aiIO|w`zsgdKA{wD$_oV zKCbckbVPNEh<##%ZqM}61t>SI0I3j0_(WJx+=i+=l5DjcN@^edrANE@ zW;gZ>k@3lFXsUaN0GN~unS#b6WA>ZtglNfEpVd_^R3^u*@th=x;u3mum43w19znO@=bhJ&?y1yVw@eToQVc)FWqcuQerBEhcB4i*VMAhUQaGoGD@B8qTkVFnK zGmjGu)HR%?RhJ=K4k(kEVP61LNHRx@0plAh9qQunO|z-+NIR|?$HCyr3p?)6IMruP zAYn&}81jDeWr2t`=r4VHpai#1e9(;**c{IUPnI?KT%@W~oa#Qku*s+bJo~_kM%3jQ z&G>FPrfNr0d_C;e?EuMwe*-*-P7(CRr)6|@rD8z{h`Rb)K z@zjR-e*4KUJh{YQ3UZQEDV)_1Yhl%lpL%^wK&!A<_O*1%;r}6>cGGJy^ZV(Z1||y) z1f+5Lsrp1=yWglQA@&K)pY4iUYakXl3}D$TG9ivm4wHj`QOfZh$KSgx@%yCJap9}a zs*=|P1k_bClbY!RUj(RuHKT~1Ov67)4aZ0q-2P_BF&i7qLa@`JAygXD-wQMdr5S=T41ZVu7k1~(DJfb1X zv}+j3KhayV$@Bj5npNgYY;Xqp;g{Ix1Z+fCDxPSPCN#7xB!8s;+k(%iL*Y-Xol&=d zXWDchdt-*c(F$Kq_;4>y=enI%7xiM&030s<3ij?Tq7EJU6*1tM4=pEQ(v+fz_|>xi z$f#`tAGM<7lS@jAx*hMuoGTm*sy^la)V;84BzmZT6fW(d?! zh8ThPfGb#u-EMNA?gAgIX|aPNFx6{v`>@91Qz=*6)de)(@6qWX>YG)P_F0;?O9v^jx1F?T5HmCO13VxI-%xntJU#Mo67Tz;|9qX({0YWNc>;H#_L zs*Oo3>SFG1))iaE71{-Q6CBca7hA+6Ns3X-v%2Xa*B0)4j5kidp9901+vKWV-BqWt z6nRvzbyBRpH3)-x_WJ1lt>NxfWB@Jt;iK=y;%#nR`ASOUh$Dw{V zuA<$kq(HG^Y5@+JY#z!y#@LVypBBjL5VNP?hdWahanCb0exD*l3-yp+)*@bV@3xCV zdy3$d?d74N0x#6~-ztQR0`uOs-oy6xA3s6M@x)hs4TP{@H|JU|eXCd6^TC%}BkK&? 
z%a|e*{>#=kS%Vu~Cn@K=n2W9Tig5q0iOGq#ct;$vmPQ{5CXd%>FxL`pPcO_Y{HB$v zjp>zJp z54#`lsv9EJim%m}Kt!g)5C|sGqCdw=$Krc_XQ-#S82%!_<<$SMDl-dRUsP^MP`74b zT3eW$m>HZ@!wsNzvKT5iyh3}ehE4l|A+<_~=8HY({USucPDNcx!DP)IN^n9yU#6dJ zhWyW>dm<3|S9mH-fHZ*Jy`GTq)HE8oAT{3=dbEATv3#mV_63=()ZOk|pS}ufsPtPG z>g;Ata2CVB8NgRyNSs)O{5h>-o`Ty7S!Yuj=u_(;ob=k4?#OV>r zo4VC+RZ6O>g`%r^NpsNDuLXyp0v-P0NFfXbGCVg~w?D?8S0926ts%KPX!{=b_`B15j594fdl^U!xx#$lYBVI&mG;j2hVB%y*>Ao6TK0^bjz#s&xM zgC~hi5W_gz{G;G>NM0mv2tb0xdS(tcjh|M?M1mN7eYt1etbWX56zMF-xXIhgjMk?` z7NnmdFDg2!FLjzk1Iud=*2$=9{zcK>bp~awpwg-byq6A~@i*~)NlzGRkhsv7RcNh0 zUr={4l=djZJs?Xzwo>La>7M{8Y~Le5@-b) z9W6F~9^QBf;`s;iWo#DUuB#9*KQxkR9L+Qma&q-nNtb@@yGe2+1%$_C+Yc-V(tZA; z3xQJnbQv#D4RCvZ`dK~u`l`3CD^a(3R4ICPCQv+TeasfVXrolTAjV8e=IgL)`E=Wb zXyT*AJyRDjEo2ES|MrIch4;uKUgDJzBeGS41Rfu(_P1ZF!laP;gpm2_F{b^z)V>J@ z2TZyfJQ0R9^VJmD%`?+=PlS2*gyLH=-A2=JbCPBqeDMtvZ5ykdS1mS!{EY@4I*%hvTyL*21 z7eYBvt_u!)5ySpcZ5;|}Eqkx#@b2bqMuYKJtfv;&oG=D_e_Fk3uix1awD{8p~bv zu+zBY6>;U4%>H4a?P>_iO3%Y71q)fXsoi0$ib^v(%lXk31vi$X0HdZ*6JW2lyrwBx zJK>6_H7Cff%8CetDa3yMj!b6o#{%1qG@-goMpT1eHyKDt@Hrip+WS^%u!;(Kc28?H zh<`>(_u04PqAV>-d^6_mJ~_{(<&m6^242-_I~teG;s0R)IMsZLzEZ~l2&5u$n-f%^ zMez(Eabe<^ab?vwB_xov#`<B-uBW2&ya5O78E#m1KT+W#TqZ3B0k`={YFC3 zc`7ki_!gD0$CqBUzjKzuO^!Za-F7fK$7B&s!ZWBoBHg-rbU=|$yl1-)@Z36yR#9&L z!7q-ugcs~Myy|vEtv-*rkE={B9$*uRgNOi*L?^!v4tI4PJBZFBRF|7lqUrA#VrI!6NWfAL+VSf^*Sa4@5ca3@ppzaj?6nPlJ-js9HHI4(O|e<~_qa1? z^-XNKQkVn4NNVhD1S1SHm!b#MN08Vy#$GXB6n$L!vi@qn_PA>YC^U6UErOYrLhNUy zm;7JXP_!9^#au9XF`=XZ&W-weLLw&jTQt`ekn0k9JS9_rCXraOJ|p62bN!!Z)ADSV zY;Wr5#FFRTFG(a^j>(#nA9)`*eX-1Vjh!*7wd}vwNEs@x82Q|XJzArps`b@RyB+&l z_$SCOZmRLW8q} z)U9ABWSByDIDUYev!oOI3*qeBy89eYMuKVc?2Tda1sBgfMOk?{*M8GG3wT-6Pc=_T zb4=$?882%U6SKwCQ!ZQes?J4%E`ZW2xd0pi-}5yet>(7U{CA2+=)P9z&O58_*3)a3 z8hGplC-gwRH{%L@0p2{ZW+P1I0J(T)BSU*``+{|g3BNa+C0x${fYSOp~!54 zs`C557gGbpk3A&|uAtv@ChYwH9`;$)bc?zOd)CL>_M?xJdgg4_MYPso>stSNri4ro z-mJ5|sZ)iM*w2eJpIg6RW1FrEaXC)lAyJF~1}EnBBY%KI1DD=3u>W}pw|xZ!A}xg+ z_I_!Lfmn`{C>HAT*nb1=J%1T6a6duE6I)LnpzA-|KN@C0fx>R`A>hIrcI(yJ0sPC-0z4yi6kE0UuNh*D)xQ~_)S^74-KnYrx{R`);G5My zCQT@3oC*W`w#Xppg|8UrZKW5ZUf$}C4xU=o7i}h<$Qs4@?bjOj!$IpW^T5YNOJJ-8Eu~bm1q=0L~+}hCG?eN}QI9Q5rWQMP` zA(&!*O^*4UOvE)`cHZBN+(u45uQ$V{7f_oof(BIY5H%~+Z}Mx%pn~9(A1A%DkO=Qy z*%$W1b>_-^hrSOmZkogRuRnqoOt|Hww41lG`B4`MJpR)~18HZq%BL#wLXsI2n|>;Y zOY>`+2Ou|AE^#sGyZv3qRE4fm$(Sf?gl`1=7S9i7{*Eu=K5sRdK^$yZ%;JMqiw*Fp zu=V*{>v1qByG<0Ff@hDLhBvpdf`Zh|DxHZOof_~lvJ?Y(_kG?5egSKCa7jR5?rUwpsq^vc7b79uey5Zg+&brxIo;k47$gaB&{S84;ditC z&M`o(-Wg>;DaHWj-&)ZcaRNAVUva4=K4$-Zn0>NMSR&t9{~A$HYa#}gmmlO;VLkEx z`Z8#>hKXAk%~->5zW_7L5eRj6vL7uXy}ny_b$THzkE)iuaZ*AV zr@?ID<-&cpn<3}8?U_;MWpJRjI*m~4Ln|LC!C>$-ufd)JWF zsY*OO&61rqRO^H9A~g4M(gsF~kAHRVUqznKf4(4f8;=m}Gzc~AB`uVg34a(GiMy=n zx;XubZ{OUb?d89o6;CMw#htpx!@B2R^)m1}HbN;XDY<8X$(|aTix2?w;KG%U;1?YQxV`Y!LG95_tCo0UI-^r8S)47FzHtz*z&x_4)VWjD(l4-51v>#b4>k4EMf;hXWxn z{V%5Pza{pmZL5>2^RGsTI`7Z{ytj+ot7)FB*$cZy=ohf{Vw}N4Uw~!K`D1=}4y*e3 zKE?c{bFfvbRT?v}JZRd@e+SUNGk>_*u?LvKu1?-2?FSKlMC~k(&W!!s=F@fuYxEzk zW8()0CfNXK?o-`fZrB0!`o({DR8QHcow^t^0&LZq7xNC$Du&@}Y5JF(;RcRA`k@P? 
zFba&|efH_PLSIvv(~O{t{s&5ULPe;T-oaaA^c-1B&h9#QDa;&%B|$>8`M@jE#c;lk zsO=9PU0@%iObi({&VOWHQATCIqOLVCzPe_=u!2(k3T%TfL<7ZaGxZeI>X{>bIpioF z??t|+5Q&aLr=?v2nnKe+fuu5GgRJq7JF~ijmjKkamyZtr_tA%*sNTNL#WG)RX%rBP z!iHrK`lb5-0)YRj#O@k8DA+LSE(V_35c)Wm-_z(rPv{EV5O zgVj;A3qXIjyD`KGNlOAaFgD|gRvIp|$_8`eoOV`3FsOyF;8tS+cvT_YII*!tjeHQ z?$@x**vc7;GWtYWB43w0@dJ!`K})g?vy|{$>1}++?qb7^OEH>`tB;0{Kn_ab4tqF= z)>b_3)9yy)|CAX<{cdQbR71ifh<@5p$5^+4{P0Q}g15&@`SIG^7045^@>MSOj6F1+ zK@f06VCnbw1pXpgrRbUWzyY*a23kN4n_@jN#{;?-vxg*}PEB`e7MwZ4Pbv6wlqqpV9#M)-8g%nvTF8MaCOeRR?lQY+VrgD6IVpa$yH=;jtYU{W8x>)&9YJl?Sc zL3WH9$l(AFaPZjzDv~O6X|uyvxMQvU0&pW66Br7a{O{{b!O|!m^o(E814AaJ$hr(3 z{+aHh^WAGiE5E0Q{Zmo!Ns3c1RKkX+`eKb%azS8)1vF81T&x0`&;zpN_1`&FIwUuC z+0p(>_@13}e3ynWfNoE!pqyK~p%6x1F$Lj+q)fnT-1YE)ba#Z>>0T70>WsUVmzerP z)wY@{K?~{mKG`*zCSbvJ<2w>KG>Cv#Z0)KW5o|+m)sv4@_R(1Zc@b>HfC`f;L7nPr zj>WY4(g-Mc)gW;0l7TunGpL-MrD7x1e4SnkOb)IcP1s@N&#`&$Zcvk>9b>6g_PLl4HWglP?a8 zq(`w6fztSuiCXA)#{Y$79xtQ}98q|l>QN?R5{z8@GIsDWK!x+}5MO5=c*bd= z`j#t_)1B)XKYr4}3vq_Coy_Pjus|KU4faYQ8Qe4CevZHh`#D93cNxfFp@mF~uPIde->MZ{ z9w&tM%qNUlwh#9mEj1I^JZz%#Dv%y6&dyCAlKFkb6`Cu_G40&}L9Ya9^QZ!0jhy^_ z?hB{ILyG%11fZE?K^mwmeJgz?53R!){JBLhJn|p0FMxqq;K1s>zN9dRQq2ZP*d{8+ z<^`V1&sx!@w%QjwKY=xA*_3aCTvPcIDr;_r0W9Rq{ESRCQLv$9TTdUc{-E`5MBvWs z`Zmp2cY?zNMq&b;bNK>paJIM(RzgAjaKq)5GE=k@Gdz@HK+xX@y$96C!X!5Ksk&Up zpp#r`cV%M{_Qsq)M*6_#2zux=Fzi5)$Metd^AVQdGV9l%}tAS4w zB)!|3Ue8dxJ6Gjl%PkWE`43fal$fVBUO=Uy<{jER_e9W+H?h!5P~(o^4zBUMQ7d5b z+5#?~sz{)(jzn%k#pXW{Z`KrnEd=zDET~^*MV^E^YXIg;@tx&8V8^A)1oO2zgDsUn zZQxbCo^7+`u%IEGT(36~D>jd625)07(7{k>vmC*RfG_cE)SOHJaW}#pbJhhx(U2F- zW(hT&A;JKw2J!|k#{7dIa46ZXz*3?Y1}^n_EGQQRSPocY z7lb6WTKqoYAsy}kp^x9r5}F0q;c(eN-|>*WXua#|0O?lz&?@a&XZ?_6UECUY3R?r1 zIipb~6G&#iF2+m7a7o}bzvZS&_Ato2H-|b&6r?)R%_wZ%%cG|Kspv@(N!j~TX*JPM zU$9`X%N%-MXcpQ$B+RUSHruO`JbD@coT+RL(u78#v!;}(f_90Q#-?!Jnq}c58J|qk z_^QkNZR;rl!kab8UjoSPH#y^T3~<0Xf<&Nq7y|fCN7~VBAHha(i32rZFp}iJ^nW+# zohM29+I_8cl8Usk?tciYA4>ZN3FhIPs2J>j1P4|AHS#g59G4Dcy@ANbx=bG6FP0)> z6$Xgld-KJ$VUO*ol_z|N@(+6Hv*u##YoKLwK3L*G`_T|yjrUSAknuuT|GFw&lS%$% z^D5`I(MY4dv~jb#k3Sr&WKsz}{?{cc!03fi3r;qCl$*f;?)`bHs6@8f=JQlBodW^^ zQ(CAF6T6^Gp8aPNN*%@rN&N08=GMq;uev0SR%KZ*Y%4(n5CxW#$;Zj%MPsII%9T?( z_?T;HyxlX`go+4c-Y9VifFI=%KDwgBN6~!y+L_S$%gS`ay37~@>Ia~FuvxAnm}GzQ zYE4Pm+{)M8N$l+_o;cCSwl>ql69Y@Gm+s~ja+Gl#YeSqdDPr65z`ys}bC3C#K8?H? 
zEY#sk3kRtMNNViO6ckf_*g3@lqQv1CFvp1G=1?oRQrq{*1&J3xgEg(oO#_UH*WB#6 zdyc#cEBvv#u=-AOlP&!9K{AZRQWUkV_k z4y%HH-~myBMPy|>nv*_*XBBU+sE`PObA%yN$+aT-H%!gLyf`A{BVaJU2f_Ct#XFas z-GyxP2+j$kN$Ui3s3-~=djg!9#7yQRT#VI!;msd~3#^|2L?|>GrshWlT!#|!RApL7 z8v_!@!<0FVy%Zo2ex_XyYIH~@_9Tk8Cx3p`>P)TJe`{-?&Q9(oLj;5HLVfek0UBAz zu_-Q%FE^%-PFxW%A9sJBkwU6)$A;(*!RH*bs{%-=K*4r2RpQBk2iLxXgx+_jf_hdd z^XG;Y;6`*vE=lMw#ffPPuz{|@SQP_6%y>wY_(v9oE!N5YV1fum+Wv3U+G;l+jiD?k zp!?aEFdeHsIA-UWJu-xRQN?)qC@|ViB*& z|4Z!eKxo)V3!^zdc@YXfBl4$P3agcZmYkZtZJ8l*hZKJe}t?|b?MN-nf64|~j}C?xI;bZ*{rHFlu+HIEmqk@~=R zFH+JJ^a?fTnCgYPO~FK!;zlVG`uX@E^{Y5Ie={JpiZuI5*{HA{!o5m}>jufsM4$Eq z006HB`)O7Ic?%x4m4LXmw4K3rIG=v%ex1AT^5I0my0lP2zggavn zGDk^(W0sc&lLP1lA5ylz9q;-l0Y@8vv;-rhA&N~m)jkt|ULa38FXcaIR`CK=;jP_w_XK!zO<<#a1d6GN4|r z;Df%NVrtUNf#W(%SKfKcMOcPF_?Fqg0tX<8z-Y24!{PrCrAQL!GQAUNHz?aS)luel z>pi1|N9P zQ+v%eG_a^_wg{rwK961|)2{(aCP&u41w4w*WNP-O;l{a6by2#Bn0ck%WnA$}(Drx3+32}rPSpLmq@^pD)u$^WN8a-9cwyp>et@!U1wF(U z3-Ng4t}t5wKoF7HSY@Q^$qb$9!}MXM0g$v;z+0RaqqnRU9(cV?SL_!atj~E2vL~Qq z$bzf{@9r6tQGJPhHa%^Re{h)>^Zkqa&59URF#5g|`M<;wc!)diM=D-u5y;o&zfAlg zX*a|HSP(9t$?T^JGHR$kCjfkgZ-1W)Uu(U5uI|otFu^i8yt`+qc_`unS>^IL1?;f!x-Nz(ir^%)K;qWh_dhk{kSH~#xORir=iTtlfEV-#W{An$Rto`n z9ZXJ3pf`F22hY=!1O#yYz(JE;Rv`gnP>27z*G-xLAL0iBY8@7_0(yR+&9GbN$M0ib zCZNf7ke8yru3d~nU&A;GDliEy8c|F-fix7|zyKrQtA@<|1$aq8HG8O>mOlwK<;2{; z&;K8tQ7HA`p-y|SnOx(}Sr75cKnq~a5!9gJfXGu>z&YEEr}f#HN>5~8q#a+Jwn0%6 zfvLZ#Gtoh&)a!Z1cgjquTh!p`)R$7xg_Ht8kTvA%-K^@(D@rgxU>>(n37B=fIF7}Q zt&;FtG^=6*PH(!~gN_RPfZ3zRgme=`TcamL<#A#ClZMe>1$^H@k-+Dz`M{^}#ES#$ z19`sl`FHRo-(b6Uw`n(Z@4vtLmmB9aq_0E1)MnJKY?=g>^66wu4Hf_k#0Uwdhh6NfDtsxd3=uoA1Y7xCd{lrR%yXUB2D{*LE#7Q`OtDv5(h6#)9Pzf7ycn1Bj!e4ub=8q3;cI)L-YqDhoh zmc3~Rmwq(9L>_m^ce^XNDZTd-wop&p+>EJs6u#%9naX_xn$9?`px8@G0Gr@1Go?`% z!U=yYQ%YkaAA4%Ch33N&Gsp1KR9^5``TB;DzK%=jUzL-h7DqpVU+MZsX#8(jP|+FV zKSetefKVwN1a87l5H1sNTXoAoTo9+BugTEtGy1WEPw!}R_u?LHN;BQAw^!t=w6ZQ9 zyN3$d7QwrBUyw&DX+^XoCMluP(^z%T+$Lw3{!K-lPQ`Viz=G@%VDt1go^+|;7#-W@ z23ikjKpGGo7el9(^_7Aw6@BNw#+V+`C6@^bbX_k=7gnI1)1=>Q)X5<*ksIA&_;Ucw zY19@N+PF>R4-W%O_9>ku@;_MSOJ!sX*_^D zpB=#bEBlI7BPW#GHt1l!6H?y#0yeNYClG-Y033x-^ZcE7;LV2X9LK?6>&EFTFs?B` zJRBERk_iPa0&zfKI*ZeGvd~&q3FVaojAyI{$WeE?Cn21er)ytH!WR5}MK?N8gefef z1`a3zt}o!fH9)AXLeT~1b6xou)i~M1?S(eu$Km`zseVstgog!>1`VDvKU61IXNYI?NH0RzH{BYOf=f_Tcb1;s3 zxCR($uD@}|{C^S1T#a9^J4-E?db#giHSX_&JV&XO2caTJ=p80txXvK}xYMl>*+*yp zf)BI-07>{>R6rS|C6b^!{fF?6$kKsNm7N%PP%!}`Qp-TL&o}{jh35-keygss1yT~G zF)7&IIa(MzkVgWoM;8cTq2L#e)XfHmhYd3Si{^&iD6HpET_ZC!fW+^!A2O4Jke7S= z%T^bJqR0Xnui&o7N8s$CjF}AwERf|KpXisiBg(shu6X$6r7>*)>~*Jl^iPAv11Q5aSH05Jl;0qZzm;2CFbg-OyL^BlyRdL$RH)f)oocqZ8Oe;} z6;0HFsYcZES23fX1EIEw*9jZ7Bd7*JV4m}+fa9&j1t^!Y3QO^7zJ+77!RLnD(OxCq zZPZPTzsn})7|8M?BR$TT^sFt@R~^N!-k2%i-Nz#G9=dD}v5muNlY$^X@_~f&9RO|J z&)-{Bc)t?z0)dxzgeC}tFMti%2+l&s|Aq1>B3<6hBP+Zzn*s_mW{(!hF7H-cu*U^dvK z23SFWPiNs&CfCPRgYl(#x<9xp&kN1yH--E6k?YF?OR)j(FIx+jP2F>T9`As zJKdgM56-vMV*B#_&gT{t24bW|V^7B&0XaR5UUNKm_KA6(e2h!c3yk|f2Ep8^VuReI*&)?RTY|dTOFJzB%D}1|n!eVx4ilo%um8&TkH_vBX#9+;tzo?y6@CWi1 z+!y}=y4g8;;vWlERk}QfxrYz;MyO^>Z^g}K=g#IlYz5PI3d2c$Oe{XlDOPh{Zzq5p zI+YEwttk2tvdEuPF!`2#mdZsQadIez_=XT{8CW(}T=Hd3PSxYeT9g?FQ$r$zf$G3e z!|vLT$>_PhhGZtEplH_4YQFc0NeeQib>_wvZc=?d!4D`e2PTI7u_netFuib|KFj?U zU(k0{CyX7E=Rv2iOed;RyCVq(!UT6zloTCXIofRxR$6x%BYQNAkl+_tufCxgOEH1Q zy)QQ8zTW?vfXCqOcZRQ~9=U*5$>avjF_p7_`I+ST$Ie|Rd;$uvI3YC)$fdu--=0=& zwHcMN6#OWXIC*8FRa*Xhqr?!nQ;=l%#DS`VW^Kz>=9e4y2ipF{6tk7|WvP8PYTbWZ zH?~S;ergP@vR(3#V*>f=vRlQ(XyMFKfd@*X+<@aF=ehXw)2{7#`0KSI!}}7I9~pv= z{&kN&`*p|PJ-_fu-`(XQNz~2?$%Y;N!7V*Mxgae@>KJ=@67sIpw(mUPM zWQSIh#f1|`xS7w?ii3*mRzAf(@jPn`rt)*=(#U>y)&5yTAQRK$ 
zT2|fVA*}~rt};RDrbAHpA$?Tmw`I!Bf~0|XfPb&1i_Dj( z3ABK3`EOx^?(w%v@Y94h505v`(f6F{IXpJViVKD#YUCN-xEN9s`NZ&OvsW6M{Lu82 zTdIUe)PY-Mx!kT(Iu&0-2uF%?z~u5so>zisM9%SMF6Qnj&HE14UuBo=@C4y^=EF}h z)|ft+6AUK}bX;sR@x8aF)onhd%8w@zWc+IN?OyFfrt;iWUB)xVqdb90$G=$6f6K>E z!iFm01%ySbh_ce#IDASPaUB@)zWCugcjv0My0?r&sf@VL@UZ(O4^M#Y(-wO;gi3kCAg;!6HP}kG*vmTpw!2m^VUD(cx>2od(VmpvT;79E+ zK77(BjY784KOYfuI3i+pCE9cx>vRa}=|_uoaDWS}ubyGz&TYTV`Ft7oi=59jK;~mj zIjv}7e;UUjxF9>u2!*rDmNtSVWzkf_8gDInVI}Xweiy)p3MX**D1D6x`z@k%9UP;u zCp}*jfZC0MyyGES6knvwED>3-wjGWS<`x2pVNwCLI*;W)GEKzg(R_qIP5e$OSj7eipv)&LNn}>@Mo4hZ^?Fq&{)S9(Dp>Y zT4?F7FIVp`5}z+aw>or*DSHv>hBq4+VTX`Retp`d7P1*bUPbPO5SpK$>D_sUpe~iK zT#&SPTtOAx3-JdQ%x(OviQ;32S>?Q#f0ZP7)h88&AdDa>j=zTI^*rC{N9TO?3*%~j z5xmP`IT{sglrK2*P=n89*pX;`Y_C(bTTDqOI_ya<(VHbSza=8$qO82LeIYyd6kXgEXXBt0alj_~QjG!%=D0vmhlIf}U9^qR$W}Q3xW_q&6Zk|_i zu@TeQUn;!1Nt`;u%{Th0X#R9SzgfB5Kzon;btoTinD5gB_GN{8`y_9>H1j=Nr9)^= zNn5F6i<@ZMj4a(At*g5~i`mpCs}slTEyFE`c9eDhif=3U%UH!L@p1KKgY18!ohvd6 zNuTl(!ngDV_*{0&9ShjhnFn9_TY;YZ$sOD6s>-&j%-4KV0~t?^VO|q7DC+e_pI?zg z{4?%fn9lXILJd+tQ?k_&oP5)wnX;+@oN^4kWC+2i0b+@pdEy~?Cxtt_dZ4qm!e(0&(m4CN&fM2Y5pgO z?n@4+i>y|0>~kNrSQLHfL54>jix~sVvWKSyLsO#ddbC;x!(HK+Ctd{F4UB zZPdco*5OOwy#NPd>y(w8T9yf}_ofP~;QF1p!@KlI6dmX%v)MBsaQE&crD^aXi7)AW zT2I_47I@}Oz`N#yrWcffo&8|3Ozq^(7rw=#yi`ZIUz2@dHbAg>rn1*p^s~PuGB#3< z+cVo-wr4OAtNY}9w(UU-e{PLEP9uK{%p~gH75DpTx0w{Wk+Q-tyujs=|p(y0^mjqoeX* zBIHj`X`DRk#oJh2qt)M9+l0XV^Wv7})2XWzu6@YhJOr8w3PKo93$~KH!u5bny+JdV z8B#p-3S(0c1N|l{q%{u6f*{8HKq;?U{>-V&$&k>jk0dU?kD`NssHVyNrbcXu?Kfk# zFWA-`qQux*Ac1zJknZSLa&*0nDu~;`=V|ZUGsQ%K^tieDr&q>R6fNTHHIG5`2!F!)bVkiyBhwL&*+I!Hd!V}f*SS}^u($2anGI!Bi|?sn*1a)rszV|Tuf z!8j*GS3lA){CgNoAv)w@=<3?)b8Kda<@PcjT|WP7yH~F+ZyRao_Wr4TLp5}0%eT`n z%k`H(*@|x@^OenV)sP6|_Rb;+-?_?UrkxYKJXpvM1f!TvtQ{Evqo8pz_F4Nf^dp<6 zeCTjV*S^L@=mWV9-}kith7m4*CFq>JP)}c^%-OCe`~}kRZ9QoD_R<*vl6Aa65|m(C zm0==)L6^_>q2n$0P5{;U-qiC)wt(*YWDdUJBcx%)rh4%QZo~4W5B$2#CWjajjkCeb z^ur_WcWKn5I-F7o97TPTj$TL8JyHm57P4_U$h1znOy3bhDf!$JE)McQ`JPiUOcz<4U zsoE#N7;tK9mp!bHfqVNsHK$9fUMw|O0WI0(=-9~F(7|_p+)5McPcjF;3S+A4ZsvMd z9`ZV(UfRf;9YEsU>;iZYfw(-uEoC;};MQSM7GS@#jd?0mw1o>+s??-fkN*CVw}B$BFnI zQ6bhV4FWsPyhX$I<_7>og<-8q|mTl?kMzN9=$6cOlHZhq-TuOcqgijY=#W)1ru z`=S}(cvBI3APvpL1(?iLa1eie+WAF#;?RD!z@EFX{X|r(_548A?pO9?1EEcIHXiJO z_M(nM3Ec&Upq#-!y-0E$pe{qWq)=ZI8U5XCJlh+y$w6IfU+{#f$cf=+*qgr(zf307 zTJKEvxlLpVgLGhD;MdhC5`q#pM}2i;l#Ndv7veD8U0TuKU7@TMZr6%fCd?w^l@ZaE zGKS3BAbfuul3y~;K{(XYXDAc&5}8s>lQ0?cnWR6U<6~wi5@a1${Xm@u-H8829|ENp zR=(B{cj7*6gZeQ&vR0rz@xb#JsznSMMi0KFx11>}a63ODIOMc|g}s}aNJj$?stxXf zrHu$=@P%rk41OK@17u&k!Ma`?OMu+?=c49g^hXY(UHEG?pbcMi!T{VPb= zE^;lqsR^hRn=g6X{J$OCRj9vwbMPOdNXC*0(U+AQi-yu5 z1A?}YQ&zG;aK;uXUVZu`ddQ$%+*Ee-`ZAX1@#?x*96%zVASXO#%eZ>H`#*B~%Ib;m z>cC37b~ev!=}8wsASbfMOvu>AjA+<7Oo#&?*8^qpG?NPvaAO#ODee9Q4^Z0=ouEqX$!{{a^Kpp!`V!n)v?ogETmi{ zP=2Ug0^s*d&*r1*r&u*^I-;3Dk|7KI5zEzX2$*4k~D9Vud z$B~BCabC=dUxyN0Wky>~fj++79_mf8>+}!Sl^=a1{>n3%5;K9?JD}c)1J=bCqc1#; z#XE*(l9|M9Bks2oFs#%h|NHke;xlDW#?dj2HD>?C$$mUtXVJU={e#Iu7oGffguiFo z?1(PyQewOy4%B)9zm*DdEveNJ{;lmlxR@C$Zkz1o6iQP6`|I+v1X*>=^Ceg2^QFI& zAE8Z>y&UXoxc~mtmoX)0A>SP`_VGN$zsyIEDd@Ms164r4ujR*}*k=x-#^S53Q-M}* z{q^)+nge-IfaK-B;b_^e*@fO~4r@$@9B5@bpr^}*{`Z}KeUw(npqTuYm$&@Z@Bg#M zb&|`@)Kv>K0#g|fe5f#Do)~%JWZnGpC+iN+{Jdz&Ono)r)Dsx=t8V6>+4$vYb?vWD zzLVeghNkTS&ar@{Z2HgGq!gobDd11*O?n#EHbpDTw%+b-{4JI`a^ ziuUilMso^-MQkDF#LPQmkTNat^a9sSH&0AkSGH!w#e2II=f?pP7!c^aJocTv`|`T} zD>onSd<(QC_w1}q6_4tIU)~b~mUJMnqveNTW70orUHA9* zL!m0*MZG{^arJ4vtcKp7%M5e(*7w~#EVgpzw&2Q#YnYc^@^RYE++OI}*z-EFHo z0wNC=TRnPisgh%vb$e4*?onUXFGgTB1s$*N%c}UzmtI=@8oPa=X!bVv^^6NHpGP9{rGiv_KhMR 
zpo6~xM@+uVlzwF?1@>S;FsQr_-#;fq&Ophb@cZ{i-x9sUfu+=yt)IKUvYVL~O#!?7 z!K1eM@>=VnY%a_J4i7!@e1CfJp=s-YEw~l0<@K)q|52#x4k?w-xO|7UT~ z|GSg4{<(sbG89ZYw^KK2PPu=NWns;P@HTDW)gE&qUIFXwxS!{(ul|~oADIKZ%nE4e z;-GW9g6ZeFjpo;8e~F!@-y8B|IY)Gre(Cd7vs!^e@qRb|>Wke4=b?&yzz*^eU;{)F zIBFQ9!4X!~yJ6}4C~529?|*jJB|dBWrCDviUi|9MToJJP2WE@des>#AZ|U4Py*0IG zw@pr>Lr4F_RbTiMyZ@?0-db`f_G~dpr> z@Au5J@6J2(n{(zpb7tn8GxJqN=>sknITio_z?GAgQUw4&U;qG!i-C@O#?<|dM*i7Z zN=m9&8XE%uOtG%9-^k*+ILEjib%jOzZHQr5V3% z=u!nQW(?yOnH{e&V}ZTaSv^x#~3AEzpK0+p~vII5I$i_3B{h?)=iQiQ}RMM zR@4UjI-f|gK8YyNk+UgyowdLm|424kvP(tgK9YjrR=GXhZIcrX4=Hj9QL6EmG5i+{QS*m^{JIy zRdEv6^DIIAsO$V<9HH_vlij7!zm-{nPr8Yd!xvl&XbV z_YN3GXrbuC^pK4l5=;kKh%*4d@#fzT=usf%1_00i~Uwp7q)M)CJe0?n?u0Kh>~cg)d?Scn}MJ(tTl%TGNmVM@gK{mY)r0|E|fSZXIm% zhY?OqqJkL;a@uludY6tHl;J%xGB(ng6+Q6Qur7W1al~Ep{C4aTmn`J<%*>+?8S4(q z%wRP2QrePNmrQ+PTH%;xCvWy!J#=7c-&EKg9>P)RBXE~cY{-N`^}!t1+aoKMpxL2Ya6tk1JpjmUEQ;6&FmDFiNi4PfwW-M z!7km}AE>+M2$ne6nDU*z^CcDEfj)x6?94fXG~vZtuhPnKUh}x{D1r8Medtu1kScDB zZeJl`8?WpXoh3->|Hv?f>4hi!eOIP(F+FoA-0N^BBFeUOPE~Hy@RH9_x#6e`P_Q{q z$zBX}%`$ZB!5~YAo|vgNmeKdcx9{7C>tjZ3`IFW!eC=v8O+OR$ROGXT!B-SS@`^yb{Kjo4vD>p*iC(`;I z89iKPhd-Rk66ICFAr)x zKr_Yd?c0U~V;p4!IaH*jF>5n~IYTg>Usvu9NnL+NT4o^Pmb{~B@9T&Q`IY~_^eESU zWCmPMI{&rbS@9QC&T%@YE0CJGZprXKu!%LswFaoqZI+-$(JB64i!gwxKJ2qd#Bi^~ zQtwFSkXTvG#R40YKX>TZh*E`8sD7g@Cg9b~ez=H8cC8Z>n%vNrrhj{0zc2|gCy(jn zv&9^9Clk{nRUekI&P(N85c60;B^&O{iJj5dwE6wr(A5?+wU_&@z`Ih(N^Nnnk(?=h z9JOS;{g+p(dDAX#j?arcDXja&PUR=UwA2ZPgKf*B6=S&lbAgFjt3Q68p{_@>i8>Viv`U`Sp zpA|%&nPv$tiOJCQ814Rio|@3I)~$2061h#~f`8QdW1a?2epgRtv;0Vvo@Y^nE~n3S z2lvD_HIb;{&y}Lzfy)$*yg43SKE90Xug^{TrafeE;_N88gT;#TfU}7j%{n-a_B|{V z!;3~8hL^M$ie5MetcoowVx|(2k6@mxi(2yNY*Ef4XBavvIlcur2iX1Yx!C;e^;_>1 z=GA-p2K+9D~H!Tadrd^tIXXn0m5#r|Lv5H1D}%>)ea zU%mw!8ETNXez8-yg;{zzp^bLYD-GR?`&Ng(4Ocxn-o_OjweS^)HD^UbQ)7)lqQ!WY zu{GxzpL1>hTJ05!u6HWGzGswKgT;2${xD&~ny=!WQ-xrvSYvcwxvNbnyBS z!j*P=`7d%Vo6Vd~7SZwuyGv6PiQc>DxXHnhbExkU!W&-Ev3ILnT&z$eA~~soBEZ^S zsC{?b^n{>QGuN^qJCivSX(z!QT;O%PmQKq?XhF};-BbB=A6cOKy6xsc6yz}qe!P(0 zz%#v4v)AM?&R4CieTxZ`ksoNub}m2OWbx31J{;zKfr`~V_u?wB6Xw^R-|U$#Sbbbf z{eE?~P26i{XJUPQmUzlOfX0xSa`4C19OO z^GW(I(<^U$FCbsMbYe@rr>$=v51KjH*x1Xuh$RHM2CgHwY&DiF+Ar$`rrM=(yZXst z^p)rd=L&Z13o#}9y8>N)SD4e6xeH&$<$YYYd)5eqJTdA*(F>zmQ5A`7H^Gq4^Z z1&dc?Iq7e3i#tJB{!<*Oq?vPFS?Df9vo@`98W-|^n-ot2f9359B-Gdu;=YC~K*D{_ zB&sPsZE^AT4^<1~G^*Eg^sPpXe}03a=Rpz4NQO6B&uF(vb*c4WZ9-G4Y_;N}F1?I5 zMcr_%SkuX0gu87*j1Dglven=Ut8f*FcC}Q#*DLs&;2nE(sS95!15SF(A24)aroYYX z4S_B0-FmN&Cbd^ygSUU57uZbxvz}0y`+kD__vi;Ig zOEAV&;WzX@?Du`!=@wvFnP>ainxiHDF%l@P>qf-Wuo#?u6YtA|q|O>n3-A-X<}CCy zS@2%Qjk3M~=sC!W&0BD`!#b|9`z^kG*@F#pbvxNPcX=FVM=_@6AYuk_mA9$RvrFV{ zPqMv-u72{(&U#KzM@6TDz}lyLsHV2uv9A2IuH3P{{Iq^(s=B;_t0S$CVc$sHI-_2* zv{`$i)O1mdOLF$JXleVN$?xP@=eO6xD{CO^WeD|{Ztlu{eZhf?&1XO zYF@%yR@<6(U;mKJunPEphos8FO zF|n&FfR_0@IRNvVV+(NiR)5d(ZUlP5YzjQ4n1eD??7fc)W5-?udi}-1$jhNYr6z;# z`x$=G{gD)%zbhHZCq&sZD1|!LQory`AmR7F@gfyhujOwN;4D3A=KDI%cA23@nJ6}t zxQ%0&jbq5?i%MaG=C1;YWgAPT`>)LQKb!4mY}?kXd~JvyZB1~0od4d+yt^>?ZNa#- z*{6ux?EdHWtg#vM*S>OYJ6$8L-vni43@KKrF|+-C@lbH^j~!-_v@t4@aev_5-}%83TLXEjTRcFh;x>&Y@? zjc9@0$ubX`kALr$zs<}$x|X*|VLQnUr_JZL0CwTqKh?bNX~UP+V-f1Bsw2J_vb|m_Yf4v zL|DQ2JWgkWEOO63v))!VizG#C$GblLYCGnMczAL)@5_?*3tTQOD(j0K!Dyq=$8;<^ z&j2i@PimD~lRrAIR$kWe^)d`Gg>9;L54|y}q|2GzClauAG2%~^GAr9x5u~V@SrSlf zb%8k-G5>7y8QJ>%JA?RrC!WQh4sKad*_mF>S+5~qZ`$}Y?<_ZtuT|_(2yV*g|5>1Sw-d3e1!e}u~a`5>8hnY0e5_KEI-SWgz^5*{7+vt)HP-#oQWM%4hzMA$T*q4I)oVQb(tRJV@i5e;xH=oL2h%E39Xu zl||hl+lmZezd6?Sdm@;ugkE)B|Ip2uH3;JaZUrai^&b?GWPX^4K? 
z-Ody>A~ZO@{QiT2(2H`;`i=K9mEqSZKe)?fEWy%>R^aV3&5ig+e}Z^%KXf7GJysqEDzL* z@FaVu-?W1#CidyCv%2)x54>nOE2u7ig1Lu=)b5Qbf6FUmZ3J-vFzIqAsHf*j=(72# zH9lPjW^}Ex!M-f`5{_mqpWb{Vdja#pX<>Xecj31S<>id3`nSmM+U9OLk~hkmQ6n>0LfWQGMB z6RWT%J{O7?=Qrip_b(&Hhb{lw;mZYsKvM-N_%Se;%YrP1EEwEsLyJmBL!-y1evf!A zAprs{)mHi=0#Q+$HG#G?baXU7*ePYf{}5MYSBPT_OiT<}1vHnDbsc3{SujlZiah%1 zYS>jzrE~toABo;y)Pk;^3k+S}C)b9w@Wq!fRyq#nMgm<+9Ur}!^21#H|4O?L-t4|Z z_`a^v`%H2DBdBC8#N?zbM01{Vj>QS?I&^yGJ}9K*^;gp#xz2zx9ZJdbC9$~X6l0Rf1usRz`!7h)>qOX5GDo<&~xZI2*APg2Kb#%2-Cml z5Nq&yFa?(q6GOdtC7#P&pAPt1HlQ}zj^ZD1E((d0CiS#_WX;_l|r!g1dpP64}{b;ajUT+|BL-f0~@GZEObBHQ=*tnKuM8|qIxv@_5I9PpxG%okarVG-#P;TCV>g5LQ>2O>5jO_HgcrT&;6Aqg38cC&lk$VpLC; z+M8r>6>jX%zf+%aTpiK<+EIMQQQR?7aJsSVdQCNtiEB{yQ8X#O>f-9;eX7yl1CgdL z9AO8K$VDDx{8NON;_51M=neNCudQ1J0`Fh9*RPXjeNK`X>%%7UH7Y1z$OT;%B=KeG zc3_d>B3&U&F|EyjbN1@yX3wP?pN6wb5~qE4)`wDM?ifz8kBcO$yR9{uh+i;}=B0#| z$@d{p7JSk{x?0&WQg!jD=_chp*)EN2379sU)7#b!*W5Fg`c)?w4_mWNCvV=9Ib~3i zxhQSu^MawGidnBt5RP!XWTf*_7nedo5~`T0?Vxb5bxXi!tV+wSa-*ZUfm0wRUgoYr zGB=80S8<6^%D2CE(&}#vtZq-S8XitRG`yv@vQruzDX~l}kvphis2NYCOxMaJA7E&% z66Q$Bp9_r=j1{HN95wmUXAJNdu>45zTv#6S-p)kJluZIxI6Bb(j>zSE+Up z$}}g{3-z7nH}rY8P9paoMO~t-8=^=cuF%$BFMid-uPPgSh&7$9ms+Ide`DWj5vWwL zT6H<#7mwBpUai_Jz)xPh&9Az*(&RHAIBzfg(DOI$KPVuA59R}>M0#I<<(ol2|20Pyt+*aWpjP*Ke@gR@ft=kRio+VtvB4)>k@5!^?pv{M2M8t_@EivzuiSPp{e4#u z>hTA6y1fhxodzXmvMFyccV`T0HpzbNXX>8UT(0hs&>CkyD9=nqbs1+m>c{*{)lK}+ zW79=sY#B?tYG(p`R3mEWOw=DBpF zQ~&tr5oog?fcx^p9Cp;6G=|L_&ocFo^3vyZ^uSDh|4PyW$DUYT>U+1m%Gcc83gGi| zTUl_~9CPgfb`rzEXIT~K-qRw%sH|~97d-efnLFCvS12+*4i7wB-(?jkuoud!6(a> zE-vDjUL|>Y;3XAOE|s@w`12gB%hX>ERP>V7LF9<{dYtpeH&m_&yY4qEv=WU&N;~P) zFWbCbdN3{Sm4bJs--M!!{nfiZj(V?cSjUQ<{!+$^73r5SC8&c0y*57a;M2%F2nmx6{n1SiRymZW3=VoYG zwFdUyDMbO59&eFv`mS$oQ|h#v^Wuwix3uZX`TD%1UtX22HWi-iYSZ&Gh-o`@KIBg9%30`sJ6P2O?aTDaHptmk2w0nYt7)@I3M9&Li;NyOGcg;L4CQjLtM8 z#9u?2_O-}jWTM%PuZnR5@b@sF!$Hf3|6$DP^W+wHoBrvM4F?0VToFvNAF_HWa+}gy zFm6Wg6PAgE(K%NtUELYYI3CpKBU*HYdcKyjYj|ZMdL@{~`7lDxu^Yb==xUd)H5O#2 zu50%BY-S}-#qS}*3A)0$hgR*+fYW8QqJEeC?0pTq0o7!0UUQxPUH#hC@MUB}SCQ-R zANwI&hWZ_hcMn6YSzKYO!>oGMg7&`_9Sv})L?@C|qbbt6lkM8y3u14Cy<9mA5Qwp4 zHdt5aq9Ryo++WC?0I*y=(f}|A*)C~=$o!0P8DE)fdqJ|8i6^Xi%x@IOzWXD>$>L!; zQyJ_0_w^VLT5NBCKpJ2y3_zfwLa4%;#0v)Hfl{o^QM@5(i-kS^;)t^1^yl?r5;G=KbRVwA@c215BM|8C(5)PJu7{kGO=j|-p`txn z3IZ(Kb}XA@iNc62%GJbE{yV~d>XVSlK$5yyPzRL69x`~)g5M#=o@{-!TdLuYq- zdBnVYrD;b~P>HaZKRSo3K5X^2LYp4wp$y-%Gud$e?pKT-$)NFk+s5g-HBwyevWEYJ z%I%(sPZ{%j9KS<$ZzWftH(C^7zpO{hidm6S7DIi7$)5T>SNDU*JHyskSnFiM`Uezy ziyOBdxae!v7mDpuy7z)mr*ie1+B@Csip|cIWn2PooZd?Y^= zb(A^hu#<3aaBk=WRv<}pjid&TS#cIGw0?}Kp|Wh@MS6MJH}}56xOsK*^yEH2jxxT) z;Wzx4?j@tVQAq0$=`IqMg%YOoS8zk3dHETwN2ZciZ8e75tzC}YLaX+&PFs1_Ovpsq zVuO4jK-9L|A!kQsE}9ntm*oZ{_*%yGOIV5q&oIHi1sO!4Z2p>i1?#oWz1V z>N~6E7lBipes8R;M=k07N?`%@c)NI;8W+=qmN5N)XApKBb1$BKa2H)o$|OGY({k%B z8!J=NHFC~p26h-Sb@0uazdUSvg{&j|1VEIBg{xfE-? 
zs6+`uv!8%K(T`wp0WT>u)EL-QNbd;@k5EOWl>_N8gkpvfz5wR418WG$!HdX=gZ|BU zo)m_fz=z}W?1mlKV4s(Y{7UhAt_#i#Qmk0m0STr=j*{3{RMeeIA*IzAms%x!=A- zx7XND@)F1`NONp-#Ek^0oR|E1EjsZlcVPBW>WY$JmLm$=$k(g&Lo;+&DZJy{MF;ck ze`ntFX|O~qn$8wZ%`dg)xR~2?mJOMRm}Tx5Q(CRUA8BkUCPIBh{hD0c&XBQY@!3Vl zWh0!9&y;ZjzF}!zP~$1be#o{6U#y>LTt~SqPk<~})-%!oU@i2E1t)718aJ~|ywK~q z_<{kuFwOH9w80;n8ntgd^Jc0VS0l+-A}P~uS|xy}0L$tJv2Yx+s*JlVLe_wJ+{ll~ zbB{jkrN@}h+~V}x6{Ph9nO_}K+e&=~{?J3PSvB9(w6{p5-Sck|a+%a(PVaO2u~~I{k~>+R^L9e^y@sL^%?|cjs}>i&~h?`9jqA7tbKorMFVj7rKfB;xlPw!f;hi# z$bP^?U05`VWR=|@09Z%I-nIt+;|eq=pQNZ_`*_r#kR)yHDfJ@NWYo)5_NrmpY#-pv zmjc27OqHD{Xc7EQG|+&wT--ukfre5H%wZN?MEYg75f^l5_<;P$If{!P2BIGi?aS{N zo`${s!R7q>vs{UD%)&UrewpzJTN&Ctoc73CElNtaB z_PX$T6lqYnjP_~bZCH7`Cdu4%Yw7F~Ct$7o@2hp6s3RdAQSSkmrLQE}>1(_(aB~14 zKyt^e-`~(K7>>zrs_}rZ{+TNeO>|T);eEYPzdP5hht;`yCV=ZcwB2RYQ^)6Ey;09; zWq*{ug!-vg_)OKs=%gP9{yow~B5J0oh$P#&^vG(GZK5~drFI{$BdsdlD>yH-n@`hM zxH|BA8d@~ht*7Qoz%Kz0=w5-hdc8s0pQ_F~mjZoZ1c{ym&w76Bj_jO%&XVRRG3LY^ z`FopO5~BHwNX@a?g6)I8(rC0J{P{T3a2K)cjb~u1r~3+IynKkvN6n>5K~;JJ^c>oq z`MfKtLp>PYcw9;5zo5{Zoe3>(?ET`;0z2|n>Ob*rRwsh_YL>@2ajLF|921Hzefho4 zNu&R6HK|;}qQ_;GRDu%AW+~g05?!sKtUEvz5X{-C*Z%Gy*Qx)Awbv1n%FZu_dT??kG)Y@L!AQ+7?AoaBo^Mv{_Hv{OP35Na-sHiI9G3#-z1RRx^T@h3 z{BF1OfN~np6rPj(v~fGU-P_Ah>ab{I{^W7l`=yf=)@Y}76ZC>*1d)P{`$LbBx&Uv( zb0uBjWeerX`ET^*e7Ux zw)uq47wWH+Deo4IHqeflS(CS)6a(9@S_;RKV)^9-&^VU*EG#FpkUW`TM;@yECv7c1 zDZySiyKnZ>Dg#2YuAhop9q&0d(CqI#q+PzTb5=ihCxg@9D^Z@6D$e$P$>M~$B{5~V zM}E^HnQ{Q;k1zOMXueXmM(of}QZcN_mCY51{;*(?9)FPLzKaeg5e-JOY_)H=C9|v9 z&Z5G$Ss)G1fJQjs$aDc?WNdZhI(pJx45&SOxTXGyodsVS)9B66(Mr9#>w>Ih)IlpC zy2vXVbpP%GG-^Kf`?ycq!s5ee^*hm2Y5Fe+b2KjFDW31G_p%jzy$*Y_RvCn@9i z4`^MaV8TM{vvz56>KTA`Xm|hToH7wDKq8{}gwrptXIyhq9_Z1#&_O(IIeNJ2lih1@ z(r)G`%y@EL%@VfzR+$)V;9E8ba~xu3&kF{fgph1+=Z=3mrF`-VL_#|YiNKS+Usjl5 zPxavb4rs0^=p@M*N|4Fx0lMgV($bpplmuhtG>YQ|Xk7syD5=~@qyS~Xw&|Bo(_H=652QH-GpS}Zl7#wVeeqAXSJG(!{(t8(l)15By zz4^I68a%?i%t98r1rcjg1t|K+_iuHX(x?HR92YSsA)mObI@XPX^?kw8qv-@6Jd|XP zfPaaTG{-asGeY$+gLq^17UoJE4!v|<0ib(v8^J$XLYdob3X$t*?D{p+PMQ5z=Be?w z++~1TtZdR_&B0mu3G%^>ba@>SUfUC#og2o`5a;x~@TecJ^ZPX^@Pk**rL z-p)FNa@5B^oqY{C{6tVGozW4tEje$VJ4z|j} zC;!Y>-iWDrlZL9ynnq1meG=Ls=cXC1>5EkPmyDk8csR42z|rzQzb9PNmpKz zmm#*R;r<_isK0PY73Od(n;E|rpS19oRqrA&f|8LT?=#Q%7J?C@-*OXhaN!5NkEbX! 
z@2NcHS~uKT>iLeO^6Z4%n3t}t!l|<@K4RFg!Lsh_zD-w|61fAq&sVw%j&{mUo_0*} zBRdikS%vS!$wW9m8J;q10HJc1_}U&f%PgO3Gr*uhocp6U%TACBeOmXjjR|cAN-FHC z-}wv5ouwJ1E4l19latE@E!VSt8boL+8Rh;T$6pO=>jMBc(*N;liNHzu&;23EPTWY1 z2Aq@yjiYuQy&$vacI z+71;fue%7vRNYTfJ&tAj2mr(w0sqYUf20V`-{*|;6EPWw98mB(L!bYr=3;?G+V7NN zD*>^aiMNm$s1^hOq+{m{nEmqbiQ*Ui+<3c(JTh&)VLO5#FDJcXg0=I+ z>|uV8pSIVuYlGu}F_499uu5MQ>}kjYk2S3#6+7<1@Cp?GuvWgD+->0gUDPfiWNLH9 zIA2*`<>)M#DE9^k0NBlUJS+a7h5NKBFl7sk;kwr|U!7^S@g2IH+!&|~7--e3coRVg z0Q}7BkHkNIY>$cvKMUZ)Yr4Eze=OLgHY}vqD7ury@jmHc1j%kHV*7KED>G!XZQK9x|^( zBd8U)A_xFetUD@}r_}j*ZG_k!2A>1JKbfUS7>BY5U%>A;^g z8Ru&B5FUmP01y~1H(zxy&LXV~BJkc^v@|B*D8((Fq{*hzhZcyHbHQr#2?RTjI)1%H zS%fCl(=>!B4#Ub(30tlPtc7G>MvgVUz+AQCWQGN(1IR)OtC@g5JP)tFjx`egD;^G$ zfML;0ce2R?9P~9ecWs`%_MHb`$Bm(DAh^j%m z-X!$M6C=Qe(j*c1Ujl&rQOD<{k--%J;FqO1Q0hMk74WaluO=Q}1v!`*%&?a%y=~fj zNH~nHq1?Tf!064sMkGaz4M+nJEI~eyiwc;VX?s?zCiL9jbJp(=!iO}ma3pV$tWRWs zGdoNif!C#K(S{4Cr2y_rW2D&lv2r5DX9D2tm~nH+{Ra23garTV7h-V4zX3_xzI@~+ zQhkbVib?L+PF$5vmJ^M|Pyw1$+Iaq#q=~@y(6hqNRkRXYxX@Nf=}yiVF8(@ig^e)XUN_VH z6)uaIST7<+SYV6kXcOopjDY6N47GyLDz(a6=d{(i-t4yN>~_KI zlY#2i>6w)pg#CQzmzN)eDSzt9R(-J`UjEv?C=Nxixc;|4i$Hyynd*TG;IPp$qNk%$ z<#oigffHW*9TmIl5smA7lMyKgK96pr97Sg`+%|6IzN!8oV$FBpLsc^R<<VZXYUzo{c=1W(b-<}_8h>I|tHxjALVOTfjP*RqAKOjN_)6i> ze2f{>erxU{yb1A#;b(G9QPo=yG%!%HKS5D%7E$(Cdd&hn@=aWYz}Q- zA=4qB=gx;X)2Tir4jYWrZX%C?R~)0M35%w+VDuphpFEO^88H!3t-9WXUmOqY_P{lGW2nhFj?d>Sn3)KHVl#R`fM<8eMGU5XxAyRi}uzlpY1ZxzF-q4!#*%3 zDR9^B5URU@fazfQ@NBeYJlf|}({ad2=Q-^_CKuLf3fmNZ1LEOD!u<< z@gw8up2J)RvYT;f(U30XhNc7(Nr^?H0uYFZ&ym{z>_5+lQ$#jv50B63C`!NwlndDy zgMoj_mDuFR*H!glu8d606CwY)3>j97!0w|aK;z_fKMMKUwjPivNlxbt~ z8Td0(O^yVxkIy744h*W_K(&CmGRe;OBPB$V#~9BBfWNMe1ft=~t{^e<{WQC1_ygwu zX`oFt*bE9IME}pru1O#WgANbhQv`lv zfu-?IsvBX2K)wv3$C0+iW zK_D~bsDU(M%Dstc4NXJ_;GXVj@#e#_T5k z-hxp6^LrL49)Ni{LLcdM6(p8@4UVs^%irTyc#LMu{scgiyh#jBd;C*21W_9SHA!|E z7f^6cZoFw@iM9Y7={!+yi3zJct{BL$9U#wUCY>w8{X9cL(h!mOu_LDlAP21Q7#X2p RL)t6=IcX)SN{KH4{|~a@SHb`Q literal 1329 zcmeAS@N?(olHy`uVBq!ia0vp^3LwnE1|*BCs=fdz$r9IylHmNblJdl&R0hYC{G?O` z&)mfH)S%SFl*+=BsWuD@%*vS|5hW46K32*3xq68pHF_1f1wh>l3^w)^1&PVosU-?Y zsp*+{wo31J?^jaDOtDo8H}y5}EpSfF$n>ZxN)4{^3rViZPPR-@vbR&PsjvbXkegbP zs8ErclUHn2VXFi-*9yo63F|8 zu_jo9udkJ7UU5lcUUI6Zi>(sS2))eA6f0LlXBRhPb8}!II~f|fIy;%0nm8I7n47y8 znwvTs7{K(pdBEge1Web=cS^1@FfcChba4!+ zxb^0`w}wlg%<+$s$CQ{|JZ!5r8Pg-FvZV1&HaG& z0~5=kzw6Fk?F_bw>{%@qpHW!Mek>p*IfmCwrJ&s5_adQUGxdw!Yg0sfBaW{z|5dOC$u+kO4&3}A?E4o=d#Wzp$Py$yy3$D diff --git a/static/img/header/icon-rancher.png~ b/static/img/header/icon-rancher.png~ new file mode 100644 index 0000000000000000000000000000000000000000..81812b7569ef2154f4dd0bc6a371149d6d68a779 GIT binary patch literal 1366 zcmV-c1*!UpP)oE4r0004QX+uL$X=7sm z04R}lk@WuX=XQVuFAzhIf#p2 z2mb+AmrTk-Pzhrywf|+r|^)EE^mhtt~j#f#M>uoQMwvZP0$@jXUtY?fYLxzW;p;b&qgiSN*G!)5wSD+f}XW0Ad?(c3IUj zyKue#?Xia=5 z{GR{-010qNS#tmYE+YT{E+YYWr9XB600WLmL_t(Y$DLJOXjDfO{?6S^ve}r#{E>}P zOEn}AQd4S-TJ)(Z=}T%Up_Gb-M1v2~2Wg=oG=1?&N}q&4P+G*+YJ^hypr$4GQ%EDl z5JRKuzsPP9O-wX@dw2Ivzqxvsn{_wIIxyV1XU>^(&Ud~u1Ed?3pDLy2mVN5`1si+5 zQ(5{e>$oef4ROre+V8GI!_9{{9gShF2(H>fZCTx{W|RHRo64IupzQ}$z*%o+vZZb<4867~D{e53dJtNoqe=J!M7cel&tk5o9!&S2%c9?39e5 zUt~v~55Kt&p(hQ3a!)POP`w?%e(Huer*g7z-m$SUU*U(L^VLYu%lz=KlADef_e1E? 
zfV4Hif4D{Fkm=a4SDj2HuWwQ9b~0o~EN;QG2o4rgGC6&2`IcG~669`A-EQ?mB#I9f zi-Nd3@EW+l!$OvYN=N?e1?Fh`jxdU_m*_-H1TsyrC?Dn5ev~oe)>)}wn8tF9SgjfN zze7yhm?2t;{N?pB>#3iw+6Kv!q5e}51rmhm9o_+*b%D|fjqg%j?VzcufQ;7L} z=$)QJMXIv1`xmr~@pqRDT<(-xyjxc=T0^CGwCw5V^SlV*A56(V1&>B@L7zH{$!WVp zaY+GvaRClhl%S8CvZ#}m4kWN-dtp2p#V&TuH!M#cznfN<;jkHlpSDkU{_iZIdEgU_ zv^GgRWjD1QQLFlfFfboMnfp!PWigwdhx&hilHui+i(R?%#VjtkpIE(S$^P5_F*bhr znarlV!JI|-DW=#BH~Fph($ literal 0 HcmV?d00001 From 2b961b5e84883c3a8123e0f28f23a63a3a92a546 Mon Sep 17 00:00:00 2001 From: martyav Date: Thu, 2 Nov 2023 17:52:58 -0400 Subject: [PATCH 15/65] rm'd temp file --- static/img/header/icon-rancher.png~ | Bin 1366 -> 0 bytes 1 file changed, 0 insertions(+), 0 deletions(-) delete mode 100644 static/img/header/icon-rancher.png~ diff --git a/static/img/header/icon-rancher.png~ b/static/img/header/icon-rancher.png~ deleted file mode 100644 index 81812b7569ef2154f4dd0bc6a371149d6d68a779..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 1366 zcmV-c1*!UpP)oE4r0004QX+uL$X=7sm z04R}lk@WuX=XQVuFAzhIf#p2 z2mb+AmrTk-Pzhrywf|+r|^)EE^mhtt~j#f#M>uoQMwvZP0$@jXUtY?fYLxzW;p;b&qgiSN*G!)5wSD+f}XW0Ad?(c3IUj zyKue#?Xia=5 z{GR{-010qNS#tmYE+YT{E+YYWr9XB600WLmL_t(Y$DLJOXjDfO{?6S^ve}r#{E>}P zOEn}AQd4S-TJ)(Z=}T%Up_Gb-M1v2~2Wg=oG=1?&N}q&4P+G*+YJ^hypr$4GQ%EDl z5JRKuzsPP9O-wX@dw2Ivzqxvsn{_wIIxyV1XU>^(&Ud~u1Ed?3pDLy2mVN5`1si+5 zQ(5{e>$oef4ROre+V8GI!_9{{9gShF2(H>fZCTx{W|RHRo64IupzQ}$z*%o+vZZb<4867~D{e53dJtNoqe=J!M7cel&tk5o9!&S2%c9?39e5 zUt~v~55Kt&p(hQ3a!)POP`w?%e(Huer*g7z-m$SUU*U(L^VLYu%lz=KlADef_e1E? zfV4Hif4D{Fkm=a4SDj2HuWwQ9b~0o~EN;QG2o4rgGC6&2`IcG~669`A-EQ?mB#I9f zi-Nd3@EW+l!$OvYN=N?e1?Fh`jxdU_m*_-H1TsyrC?Dn5ev~oe)>)}wn8tF9SgjfN zze7yhm?2t;{N?pB>#3iw+6Kv!q5e}51rmhm9o_+*b%D|fjqg%j?VzcufQ;7L} z=$)QJMXIv1`xmr~@pqRDT<(-xyjxc=T0^CGwCw5V^SlV*A56(V1&>B@L7zH{$!WVp zaY+GvaRClhl%S8CvZ#}m4kWN-dtp2p#V&TuH!M#cznfN<;jkHlpSDkU{_iZIdEgU_ zv^GgRWjD1QQLFlfFfboMnfp!PWigwdhx&hilHui+i(R?%#VjtkpIE(S$^P5_F*bhr znarlV!JI|-DW=#BH~Fph($ From 93f96435235fa9b382fc1d1bd06bac72626a43d3 Mon Sep 17 00:00:00 2001 From: Marty Hernandez Avedon Date: Fri, 3 Nov 2023 10:26:13 -0400 Subject: [PATCH 16/65] Update src/css/custom.css Co-authored-by: Billy Tat --- src/css/custom.css | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/src/css/custom.css b/src/css/custom.css index 304d013fe7fc..06916d9f9698 100644 --- a/src/css/custom.css +++ b/src/css/custom.css @@ -229,9 +229,8 @@ a.btn.navbar__github::before { .navbar__rancher:before { mask: url(/static/img/header/icon-rancher.png) no-repeat 100% 100%; - mask-size: cover; + mask-size: contain; width: 35px; - padding-bottom: 7px; background-color: #2e68e9; } From cbb714df13a7ab5191c7e11596da95c9274d0bcd Mon Sep 17 00:00:00 2001 From: martyav Date: Fri, 3 Nov 2023 11:17:52 -0400 Subject: [PATCH 17/65] rm usused icons --- static/img/header/icon-kubewarden.png | Bin 9342 -> 0 bytes static/img/header/icon-longhorn.png | Bin 26769 -> 0 bytes static/img/header/icon-opni.png | Bin 39635 -> 0 bytes static/img/header/icon-rancher-desktop.png | Bin 8307 -> 0 bytes 4 files changed, 0 insertions(+), 0 deletions(-) delete mode 100644 static/img/header/icon-kubewarden.png delete mode 100644 static/img/header/icon-longhorn.png delete mode 100644 static/img/header/icon-opni.png delete mode 100644 static/img/header/icon-rancher-desktop.png diff --git a/static/img/header/icon-kubewarden.png b/static/img/header/icon-kubewarden.png deleted file mode 100644 index 
GIT binary patch
literal 0
HcmV?d00001

literal 9342
[... base85-encoded binary image data omitted ...]

diff --git a/static/img/header/icon-longhorn.png b/static/img/header/icon-longhorn.png
deleted file mode 100644
index 4bf52dd540586c26326f8d913d14a601cfb19891..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001

literal 26769
[... base85-encoded binary image data omitted ...]
zs}dE`0O0(ZgXq;B0lx%IMJdyExW6JR+dxo6uRd|Ad*&&&)-W62ddnlkLj_3q;?iZ9 z5`}4yRCNd5%wNtKOFpq3W_2zGmD73*^-T43)yKQ$XV$Z{P>&;Qum0UT-{p5%+$6Y* zp@&(m#O3(d-GtFGJ+E3;czU>~LNpj{HdGwji{dE*gy9VZ9n8|Sc|qM|NwwP-Wl85{ z6Si6Kv^Q_D)ifzTO60LX=FX)b>k}V=vL^bMhG4BQ?0BfQb+C&Mzb|?gIrghQFARm| z>w-z4S6tt2enVJgxQzU!w?JRUS*A|jf8Zv0ZM)^nM$~Pbu`b+t90*C+_Af9W|JrTL zc%PAmhXT-ac;#0#=XE5sl8>F}T*n7Er z_d2^Ju&75{KJ#fTEjUI0QVaT7BPb<|*knx#0eZN^L@>2mhQ@O61HR#|NH#>v(_VBT zF<6{GD?(icX96Duq=wO}n|YG_72lzd61^JEB1ffQVi3$`^+loE%HYS>Z+Y<3SiQHS ztNX$J`7=?`(iAwWB)@(@w)kd$JQvehr9$xV!y?eh1K4KFMoMGFzUM9|`Id!zpCbrx zR`(AGmvK@MmsHy-AN&FP7L*ua=f`{vD=P@q4cqg<2v${3Fn*Oo#Vn?dOkLB$8Cxx0 zMqb`VZ)86$)85vCQ(3`^2*B6KY&sF}*1P3h05Wpl1@IHlz6-F09_?Kn!m3Z{fRc86XT`L`HnzRzz z(qjvXamnMIexbQkgoff?CaEtpXJuss^cAqawea%X!4|xzbU@pYeW2xA0}|kv)pnka z6Q8w!qP&zsPMq4?OsVTkd;x-`rQOO4^Ky~|wJ?e6&(?I)cPQ(U-EH?((e!PTLl(yp zy&|uh={(TgR5Yzx_NS%b@MTTT&WjLWF+_ZjOG+U}M5kHf6ywpQD?m5!5Io;m0kE*U zs)+H0`Xz90#a?lfgyP~>?|%viv6l7?YE6bh#Eewicw2u&QU6oY6sXYR`s&f5jFfsd z3UtyS6-&!bMYY3h7IZWhJvDxeurTB3#B#xmOnB9!GPflI&EM*gj*mG;R+y`4_Yi*y zU4ialOdfK5mT`LNieTY697_s^PSFmBdYZ{ zX8%>_K72rkp>%n4WCgxHxYaE1l^S>#iuShA3h0WdP0pDtwtL1W`Xg7H<(HLsijp`@ zjKYvHaBY{9$qiCy1}zLPITi*j6Kv}E7oMXbor0xa2gq~mfHXJ;E1+6JDf~Mx)C-<# z8rWM@GTZmO#JFEoj22H@DDd<1!<16eIbvx=sM2T*hnmU6V?;9jl$}tVXfMb(9*z2P{2sHV|Mla#*nHcmUCB6*~~uf z;%{bANW8zt$=8Rwm1q0aOvK_Sb1MUM@L8ej)6%it<8~5Q!lB8Yd(hz%CjH3S&^i@y z76#1wW63I(yQYiwUFADU-YkDV_(Q5HWrA&a&_o|9#d~Is4-5{K&-mBD=D)$@yKgxO z$+TeDTlwn?1bQ4~e7wsK-ntKQ@y^B`nr*rblULR@=B4cs9d_31Y)3+0Jp=;eUItSB zb#@qT4*9te+*Lmi193DW4D*LLnXnjhMxr$zfK0(ZKmY{;bQCXP!7|>A)$-Bbfdt z+x%e8`Y)BwBnG2QS&G<4O2o=l-y~>2IGjVAPirM(n z&!Z9#(R29~=2Dbwj%XuwV{|`$qlhO`-=#i3VL^I>;`Pe+H!AnVDsp1@wbO-D2-mKj=iFSAttTG?jK5r4|@ug`pCOjr??SeMN-ZU6-$$tI0S$shb zxVGL>mWdmA^cH*Ebt@am2+Dlg72=BblK1{3C5lc7^B!5b01>eOlisxIn%5;|hj&(D zfTW6)P3EWuho-6yZ`Gtyl7+<&@sLA&-g&1-4024oQ=HZ7h$9M)s!Zv@9%8q~C@N)|JW8M@e9?FmlTVQe^Z=A=%3HTQdiJ0ZD7TKmz>I%^*H zg=xDKUUjwH)BL0&rP)G$EGi*a-yL3L4S4RLB4<0_Z?v5CdQ65@z_(Ntf_&7GAaM~Zb) ztPw+Wb6tN-_eyE~FKXCZ!d!jG{6&-3RE)D*Oi#B=GkjPYgdUQ9^j4Rd?3xrKDM?nX zZtl9`wg9Gu0XT^PHbwo}Io$sysHA{tCF}ZjODlpMn89NmIjnF(NuN2GUj~nk@_H_4 zd#XUX;lmM1{J6}0=@(Jif}tGVckuA{-I!hMdqQ79_Nco*ekyMY8n4E#B^saZ*Dnq$ zD20&s3Y1qr5aApT^Gvi*rO zYxH#j@=5(XW83b-!Rwj}m5#cCWt)#|_a@tn+0Nv{oHUILIGCJndhPwp5SllDx*3mx zb&aEZLdWfgbbsHLFDP4sl$za=v<;=fVzMDC2OAiFZpq`?3diBPMSu(7qx4B!6drP- zqr~#>0lz@#ku^7gz(?2Uh)}NB4oz=5Ui?hGc3&W4lIo3%&A3R~8|{~u<;?u{?GJ^i zC3g4Zl+$I)R(3I%41OyRC~b5rH?v<0`S=M^kLA_rFSbMmaV{RBjBL+)a1+Mi+c=B; zAjcw!g4VyBVA~jrrq>^cOm}{(x|{@YnL68#jQVJJyDxr9l5qn%x*v}iw4r2%wX zW<2am0<-R%0GKb>BZnnH!X~4SIe%iYKsmz3=(%=y05`T;z^75i8m6_EktkTbZBq#; z=Ne#wrPh4rbdPnDl^IWQp`D9ODZ%=}LlM&?A9KHFPH89dQjxJULVI#CF!8VF5uztx zq#z|uHBkU8#4rwd1v4T8gXr;YLcELQABkMYxwqD0Y?(4rej=^$xxNfT6s9ve>o2Na z_A&qJ0-UX6^%!vgeJf(YOYzCFP$|f$7QKKEVipA&Xi_hw>EE{(FmT=aA?s%K5T^@A z|1Jk8J3Ed17@=SeNjqC8Ju55>cqrcXscNy74c!|T$EIuHqLht@zjlZ#H8aK&)J5!7 zH#c2{=da&I;vv+gg8gIeK7J-_MTW|TDVa8HW7bsT&?|2o?%-F`ikVsk@XmDHJia0m z9-ZJUKu(D_@tx44fcvuVd>9r`d6&yzVJss;T@ObDi_V1~g0&&nd=!50EOsM<*`!wI zPVI>}XjQNoRLkH&ZABKEkDI_ha2QQLR}mp(e@P~iC;_9Ggod#q<{#=%#vrP+K}y0p zA}4!`yjyx60@lu<9TBS40rTqF3|>2O_%jf!oApGdTgQn$2Fe0{J3$+Ky&KH0n0n-D zGd9_Caq-@x=7U}Fx`kT3vW(gFR{kc3``%S_X+l7VDBUjFcF_{}Ex_UhDk2->Mu8C6 zy?BG6!wZ()%`XNW%yk)~a2M|$hQH8?a@KwimBqwzvz9?_W|MGRi=|~UxRSwu>25a` zVMM(nL4mQ9l>u@vMJVfhZq4{x)t{Chmo?kyE-^E6L?f*jv9u9R2G3GF`K((78PQF8&okY?f9U<_hLYo5(R%nA;WgX{82{x$G=H-GTc6G#kbUPb zL9nh!$Xx1Y?G$Mm228D-?K-Dqy+|I4@SY}(Bju|exVy0|a+(L!`2C+^HCrN3dKhjC zugYT>p(9bU^6?Dommi&X{091#7&BicefA3LIEt 
z#?OBW%-M7FHfdkGYH$$jm{lel#y60=+Nc$y<(v+_WUiEPZrtCfT<3b@`1N}L_hSo3 z#iR=K3FW5HomcKrbdvbl^KevaGOun88BuxjQ<$4Xg-tmsUO7Q7lrz$tQZ3NVuo)|4 z_Jt>&U;(V1LJ@iwK%P~Q!BG>ZJGN?pSr=zA%hs@J4!2uunq_^2(HKzQUg>oQVi9*}WUXXW9Xt$m*&_R&2iBD~$?1$7erlEX9!vX?I^H3PQ{K9;S`HLLxf_OALX z%BWitBZ3GbrG!YQBi%LX&>aGT(g@NW(gGsgEgee35TY~^Lr8-RAR!@0N~hd2*7x0C z?q6`%njct;dEYs)&pvza^E}V#@#mR;!v2ze1Zx9AEU>^4MKX|8PcTCq)6Bi1R{%C4 z(^S^)B&isj4t?RFX}y6F@7B4yn|cEWklY%bpHmM`7;*?xWx1>!_OIGJw2JxhP?IXx zAO$SGSdW6YYKB{%ucdIM!Kpu5YAK`^J8b|8$z~&*ti0+-ybuz+zTHXY)DLWEZUihT^IkN zV~RKVI>7{WkZez_@Fj8i9JMC*(~9N|Y_-Ix&sT5=iwMaIrYHO>+$@w9rtjglg?mm+ zUS+XAX=>)ZZ&)kNOqqUHtv-J>E15LOli$JFRHMwX?N#4(Zw+#=X`7*_8b4uR_BYNu zsR)9@1w1Mt)q8Dla~?(nf8h>ZXRYpOa%H<%5@s9{5gIM2O1XoxPR)hJ##gvv9d(wZ zlYzPlDW*iCik2R?7qb3FVda1nZGxJL(SlB-nVrqLDP8#G6;}#o8 z@!vX0FbAgWA%H<+;0~EE;SpRaHh68o%HA+uUfMxMFcwfP$`W0w@sh2k89uPWOO}#r zWu-R%M9L(OS&WMP+Yhu}%73-M3~LWtsFrl(`O9f&4t@3$;j&R9pGQ3yy?9;hRF)@F zOI_!wfXN^A@>|UW`yE8W&^v3uF7L*9ILao5xx;7yFu_6MeNh8_mG6W+t70$`11yy~ z5+pj`3kKH>}5K7mByHu$;NjQ9RE8DL=a(#wEL=mH7EmdbhEem77TEaJksw z+!dvluSyfuXYqBB8pc`S<)yC=(Z{;9;-J5)KibAwON#?`>niH|wsnQhXw&5C z=qzKsAkSl+b=5EDfYh5fUu357We;h(tM1(0_d^yPqpX76Q!eU884j4E1Zil{Q*1Ke zG=ksC%5T)NSt%{SmwA(z^Cto9JjZdtC&KO825mXzG!s1)g?BF{FsqVmsSbLQxe5Q{ z_Z9pr>BXcqna*|k6(y4b8OZGvVLV-&;;G?k15Mj$&d)K~>9660=&pLByUP3V7A@_M z6k$erd30GC-eJUE(}!Mq6iP}ey+YN2aapBp0XdFgT?>{274=Md!iMmrmmmjWM~Cj0 zh;#(~^A?>xjh3Ifrp(HEC>n!$o;Gumqs4GyQR=Ppjs!EB6gS^5pYRor0ngT(kRN)8 z-UaehFlJxxrOXq8Whk3p@a8cY{X&JtqX2%VY-^b=<)sDDqP(q5t$lLXe$0Xp>qkM;(- zmPB(o*bq}(-pc8Y?+SdO?+tisaXCAHEkP;%k!0f)=k>9+745rV(_MVQlRerRGTg1X-=SMN~rGOt!4o|{8O_1qwgn@YFa`X>e2fXdX z!S~O6Baos_)EfjcJPuC@S*V?CErnP&EutB9$EV=Eo|cfNP;0Y2=oDCX2&J3*_)0LT z!qouYs(4j}X?#p5u@$=@Y1!TJM34te87aPnFovBqfJPa zSj5-F5&Oxm=AhFP>J94Y0aV=>y*ZtlrWxh7WNtEqQ@#MUuM}$j6aG}KiyY9hSU^Sd zg##uQ6>SSmT!o|sfx9{I69bR67W4fcYCB?++zq3Yx0AGGM?eJp^v2(vurxlTn|=L6 ziozc)TQClCG1K#Pdlp<(R)6`>4r6w3X8BN)S~~1mWkaoF{J{H1c1A{Rs~ZaJy#=53 zf~U6@uy37{%!sTbW9stqvl{hY-d%@1kYymzy|h zF`9gu*H*$`Mlz+$=Q9D{B;aNQP5ENV*nuN zX0j1eki}Z(&c4MVGEf1Ct^{2Q+|=!YO7STLs8+x8p) zQlmN|FeDr`+bUNJ&o|~uYfBt9wi9`;c%=KmJaUUHp?5mI`MGV+w?S12Dzq6ykiaU& zJ1KH-h-8;|@ScBT$L95^e=D@deBJ;en|2msLbms{yM{-3Pnv=^BSL%3z#%)H;%PBD zq5@ITdu9+7S$%cSh=vHEzI|=Wu>_}QkC24wZ&-J&24_};5JVZAV=L9WdnzoUdy7}C zi*K_vk@D_{JC|3`kZjexv9*0ZF8y8?&MyvDlQ77bV>PDTlaUZ1e(^5KSCd__@@wns z@rMSWSjB)>IYwI11xJRRq!U9)-Ll8O*Wf#sYir)sIA2H`uZqysqye-=YY zvBSo6UO73-6^{0~m8Z&RzX#!^0o4QQhxfp4o@G!;e-frF546?THpGVMDk*WWae+8M zpxm-f4l~i*$Fu6H_|I6Z@%Ie-dG!)x87m@luR7%>2?JA#Wf*i1=fP!M;c9NFE|;fh zl;Z3jA2W4oCpBXAgW;fKVIt-tbpcHM?(C0edjOA*V-}~XyoDM!brNZ~hHkb7y4h01 zKvwNivwDO7Brng=r<8Pe^J^uUwukVt#+v zcP>ehXXA32lygLB7*okAWKYx3VKC@)lUj;NNGc3Atcs&5GFl4i%H#)JlF1$^7^u1+ z>o4tp--H|2R|!|5jTW&J1hrX)%cpLb!4r^)PHQVhZIY?}l_#5^?~5728LUE>5z-v{ zVm@3Axl{bnS{aAtu$jn?O|#?@G~a0F)_fsk4HzW zZcA9Ga6`lOm*T*DNUScP?i0^pe<8oWTU#b`aF!l@3;rgrEcoDYEjUd7x-D@PhX^SK zVxK-F4R1SdiNaCv zP~7E6EGU<#?9^N3wnUWmRE|H__5RwGXfa@A$d7~;K|q}GwU}oYgZ9vB)vIXI_NE^? 
zio5N0o6Wkp0q@{MaXC~xAatRBkeelEOf=W7hwbUvPvu^As>uC3Q2(mLXQ$iti-l*$44Sa3rVx*+^r$|(e_#?^lTG?uddrm zm7-%-2i(h>p5JeSV^aZaIU^EUDqABx`r5pTjaTEnAZQ6Wvk4?8=fbR!hN|)0{Wj9< zyV8TTiIb`fLB6zO&=Z*`K-(CK!Ou?TU?S%!kJlPTgQ5DgS45vcd3^=|apx<)91Qs`q~BSBW8)eU~Q_K)vR+^(3}ZJhOrRVjdG z6>xkgH02;bt&^D6tG6ezK4;C^6=w9iVM=JUpw`2jHMRz<S3r*F4s(kzmsmjAfuzsI@_>5LbFT|i%e%D%sFe}ZV!}meOkvF69V5KC4%U)q zL)`C)2PvJ0n>;hz*FH8koEj`nO6R`|v);8u4Czg1p~v+-+Oz7-0I1(!C?RyUog`J# zXCaYuc=e+>W=I2Wv{Jv(RCGkOH3xcdr_qB8wF7Ep>Jay}XQ1wNvF{@2^?7Ya$WMv} zf+SZ)bh{x15+-QwTuaL$>j8ctM=UM3i-#^NH=X9=<7r5jmD%WaU{+4nH|_fx9QE!uUl7?m>=;e{Sb48cQgOXmQ9!j4iq@mpQNW;y zJKO0Rxi^vv-eX3ond)URGoeHMvjT)^b|nd<>LQy*zYEFaxhM#p3IIEls zaqeFzZ`^P8*;{CBY&3t~C2qi4UQ_>BBWZY(POCrdV!`v`eAMQeK1o!Ui%`7EGX{)v zC_0M@vf!_N;q|Urhu_kC9lf?)y3vFwD##5IAQ6G4>x;iU9{PnIdlB;jjiY0lPZlS%N=IA zM0htgmN%ymRy?*I0)PzY6tDCEZBjUT7(3ZHM50-Ya~C+54YJ0IA!>0D++ z1(vASD5rgnTEFRgzmlkm5jGF01Rl1Ks;$&Tvk7f%iY5Q3-LFC z9Nig<^ejD#^sJ)zwkG#Xr%-ygU7SY`EeK9tDVHyhMC|G(k|*(121&)$yE$vjY6NrM zoSt$bv^o4DCd}2u)@dBrz|bIyhc~=q%g%=Z;Q=Lg#syh5I$UHLMi}WqPK98Q3HJ&K zubu*PI-tR`x*rhXC1$95{wk7kAQF%En7!K{>+&hQQiBy z1Q^ZZ9mi5m$lDb`-uI?ncj+l%nUY*8w{_c(be2BzXAeyqa$3LYDP>>wNgBBTIPX`{ zc5cK{O$F(uw6BNKj)c^+72xV=G(;dZ)2)pS0rlF zuM>dx-ZOjvQ74hSO)jfv#m^y()F}Z;RQ_T&s}kM~T^wC#p;@-sq1c^aT%X_-M~hI=Da-@oMpZeHLE<|SG*3mL<|fe1#P(_q;3;5XnMSf(<_ z03i$ik>(Tt0jjCV6+Q?Wclzto1~K{b7eG+{|G)o#SpF|_fIZPQOUJ7kt8mgjD92)+*7nSUDHD{4tZtM8#qv%;GwKKf>Q!3O_&UUvjNH4^mClZCoav zjHm8!A1N#vQdVRBSrhTU-f{hg74hU<>nqvfg- zhc37_n*hXrzdu(PS||f+aNR_i6mFlkUlF`Hej!kT$ow$>eC0+3$JaWEH#Bju9^H>y|iu+9nWYS zkH8uvw^5rf%B!{i)WXP|&P}N`?muIq0)s*e_>l6#%k+gC))G2edcu(MkgvhG7=Ky> zQ^u`WrrqF{r!>7#2$Tco6*^Oo(xYhm8Chyu9{hU|Yg1r#mr33|X#;x2quc#QU17er zR60#)0pj*=&r!^FERVm*EhjRQYm9>}c_?h=IQ4eXv-8U! z$N%drd8GVmE8_&{C<7jv9AkBx@b#$loRyYQ+=M5}4I{?E$dCJO)!iE*}JSZ?{NN1yc= ztAtKRA-dq`j#ZcoK1oyM!t_Q|`bB^9MU!s6GW!j-=Y;|y8D=$54?L_y?DTgb(~@7jxDjuYn5(J(=Oe5#3l6vxHJ&XM5_v<-K;O$t!~ z&~~fVP^*9ArNbNTx*hxSh}9~jh6sFlJ~Y+Ub=~vJo3fvQ@}*Z8>brFZzf%641t1T8 zC69t!R@5t&uYIN_R;P+<{>ni84kGPhQ@s2%GEW^!$FZ(^eL>{!6XgE!(yE2Tc@#}b zlk?wJ_w>{HJ18TVD~U2LdPbw`m$tnJ9#g2$(+J%1sU!cp(qVo{_O<%1%NynAD+vas z8_~*{BRk3V9XMfZJCWtS+t@BZ5`IV>hW}rSw}jZx21YrLH+p~GYZyl*M%ta~*`=-LAL_D&##e_zPBI$~i<=!? 
z2-2xk3YH2VsptXV>7VuJr--**YiMphQnhv7{bokbX6gH>7>8&{F7QFCwhB2bz$m?SHT2((BHXof3K$oZu9)VP}afxL1V!rh>!$X57Wze{} zUs@^P^p6{cH9^!pJvC zk#4ftW`7rg1eT?)y_|f=5z|)F*QxUSk3OH6r2O~C_J-?+drTaQg4I91_~Cx`p(yb~ zuA9&Kayk6zgE2fvN5&PZpQ3--O?nkmA{x~G`J6gf0Nr!9{KL9*k4Ef}RBSkOsUdk~ zE+ga1JMZ~UyQ^EPjnS^_s>QsMn=iO&DSu1EVqnn|%~_X`s>E7PfB7*%b#a>NYd6!l z`rdkv!>TQiifZRl?`Gpqw@ml-#O4M&Zt5Ci1qT7eS0aCUPg@TWYI$hKjdR!iboOH9 z;pe7RE2&yz-`b4ihen9W6;RA-{F-rbyzde);A{}j*c!ByZkcI5bV6o)cIH!xg^JG9u@p(J{5E*|jg-B}U=7K$dcHnwslWLCq0>@f9S2;WQ=?2B2 zCu^{~OcTFH5G?vfq+US zu{J}0Q7O4Zz!lt8;_~mV5=($CNQjgds1WnNVJ5!ZF8FtEEIZJ)8;ZwtR<9eYO@Z-}Txo>nfw3obuc(u_{P(>{->xq9tb% z?UX6~{k|S>w99ZumXR&xj+0h6-^O?MoxNVpx*GH?DnQOzID}4^%mLiWvOpy=BseM1 z(7d&m(REG#`CfOxjboMu!iV(FjFa!EyQ9pHFa9WKf>F~~q)oW%W9ISlLh&4_UHlb< z4wVN^wUQ014I792o7PhrBU-+iwtJ>*ijl$X8S&uW8od*6Z_PZox8_MxW#Q9{yw8B% zP*qG4yO|7=3nBYEkg7EPqo&H-^u;e2vlA0TIpoF1zdI(CmRZ)>DvUIZ^S%?)t5xq4C|sRK|yDELc%iJ0A252k)k zHID*qisr*n?Q4%pI`q?W9jg)9&nxnJ1oZ!0yQ9#f;GN=a=xTL(G0vT&hg-sy3GNue Nkdsn|m)$pf@jn8By+Qy0 diff --git a/static/img/header/icon-opni.png b/static/img/header/icon-opni.png deleted file mode 100644 index adc634580e0ad1fca43393605f79ee0356111ce2..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 39635 zcmeFZ^LJ!j^FAC-FcWobbvUtY8xu{eiEZ1N*mfqiZB0C}ZQI|T=Xvh?^AEf~zH6O6 zSu1_ksl97g)voKR+8w4KCyoG*0}loUh9D^+q67v8K@J86J_!2>^dA8O0S(X}I9mw~ z2QV-?(!U>Y*Ft`0FtD^BNfE(st~zI#(0ZB+b3?{8WT~l~hz>IeZ3Wd#%zdra77LYY zO-#(?tBGy)O-zQr{DL{{0L*o?O9rHP-0k{?Yaa@#IPF4^lZT@I7H6mn>O zA#j`K8SJy}2Izt@r4N*pr5`hxwv9z}@wM5jxphMyBg z!s$fw+9~)n)=MPfTuPMC&)^%hE|d%PY0xkS8+px__wWEKGL0fu@%rXA4d(*dnfghR zCL^q|V8nuPbhY8}{dNY^Q!&ykBCLol9}$pH0$_hHg8+ey&<_g0z;xY|=%i3=FRK7V zp_-__XkxjgJl(9N1YvSJb)iI^N@W~UIGLSeoGp?dJZjrq?iAaXiNh+~jI|cowj7hD zbAj}pECP05lEi>g)1%%ajo50u?>|BWx@CQJ=QRqNeP%xJ=t4{>C-xF7j5@FrN6Ko8 z3uvThrM0q^;+iFkV!j6j%QE!#A2|-Dw$hS~TeQhpY0o+w3zX^m50l6_A=JRy#lh-x zLRf@O(GlIk{AUsZk5H?siOn27@x3h?*}s?)e*(fpkZqpz zG073jgx6~tE`IILt(i2HA90xtQ|A?8hYd@@Bh4dk>7E`E|3~oxyC`}V;eo0|j|2R2 z#5j~3L?<`BZmA#RscDT0I6uBhEe3|IS*kUZpK!M9za*7CTiiL2(Ur`c*S64-a{O1Z zpu3Q;FD(^Qjn*z2H%-YNoW<8DI@er1xj~a!*HkJN-bdrF#F*qjiS6_S=WNvnuNT4> zXCyqX)AF4_Cm>#3G@IamR!BfE5KL3ib-@3PeU>^tDH%WW#b9goceG?qiZC2iL2rSV z!=!&^Hrx827Hw`_00la?*1bX$x|QF5EC^&ot^xwoNnshnh-!m@yTmN#0DP- z&^`}S5u=dr4s%r=-wb|Os4siRiU1Sacmr#UP-Eak$cVro{$9vmAlm!;xb-vb4XOkm+HycHogdEPmPqHb89x~9H8)a=xW*P?r?bZXT0KEUn1%rr{ z^sS(1*K9njK4mY;(scF^3N9tXB83ZQI9i-`c??!_=C`t}lWX9%>N#XThWf8KaAa6W z@Kg~xr30$QiPpU9D$e*B?=Oy+5Ogx;7>zouPZeoHT1C0vxHB{%d3Apw{m&74}1GznM!odxy4{&)Det4P{g(y;a_q|HU|80W^T#SlG|)CNJ@ zMk^!YVQ(k6M&KL-{MP{lERpqoc7OL{=|e5YZ^v#J>J{10r_p}7uJj2zHAu7=K9a@d z!2I7#1&9V=n;Rg^KFuQ*5Qp>PcZ!38k?jaur}kTQ!wO>*Gt|yslm4GJYhbH?=$#4R zY%9(H2wHqyfimWuHl4K&7586&ySNtcmmdE<$_rQRtcM`@eS0!123tAhmi?&so7apZ za|MQ*65#~+|GiTeSr4J-yFbf7f6dgXTFTVC-qAhMiM7+0=Iac)$HYiz)UJQE@JI_j zk6cbBp}g9oFMdB{+SO&LvEik3QkqrgCeMWUk8p70%S12>xOub!TDg146!p8tT~NGu z7sjKwYCxid`Hz$T$JO~!^<-;yb_O+RAm?|z^p}bU3iREEo*?gvAT!W(Fkt>!G0?^& z?aLa}InC3iFFN$31BWW5nbG`mLLPojEi|a-V*y#LL~Q>ND$!V1{tF zUVS=epwv}=25@M~2h0PsB>(kxKj0>bxcWR+0V~~|L<&|zQslB|MRNSF9`OGgp&x|+ zPC)%2p@c)hN9*;5y38{BZAOkwjf~*_X7?nd`M)s$IZWNKdY)+9ZreKuE+oQAYss#X zLk^Ff1u7nnj^H27_;sV|{p|T3z&cbVJonLSkY~XnbFw_+1H9gX@&+i7{xjk357_E? zy>Y>1!_h1)Z(^BaAImm%>R68iBN*F%XDZN52vN={;qlrFcWH%&&SFRRk1XFwC=MDp3#1vMI&V@LcNYt! 
zxG*CnL~s9pV+R| zrKp;XIQA)nlwBNqaEyN>fx!kwg8h99s$|5jZN9bsOoyt?P{*=yUZf^=JsGmf+?(j-qHPQ zAke!OgIG2XIo)yI@g#>9@|~=i@86xWSJ0Og)&gY)n5tx0RXDt8G9qrCVJic@{O@{V zk?8k+4`h)nSuCyF3$@&aQkSvS|8J56x(5dMY6irB-XBNB=A0@`X4fT=iW2eO7O7BuX4e+TrbK?EcI&#EAZ{`<`zlaanCJn-MJt^eKjz#tf9d>ddb05#*g z5;*<`7GHGXvAF&>qrhx13zT`F0dr73Cb6CFqX`u6KS3ddoB?PcOG54kz?{WnaQ$Zl z;DX3{vi@fr{weT|5dTCQUpRr3O|E|<3i20lSkxXFXfB9;=fq{=q|tjMI-~o zjD86Fk6yt7;VSb8zvYnr>r%+%&7b~NGL$e_uLy&(U(~;90}03Z|Aiy} zf39{VB9K-?1tEm{3<=%^Kk%PkvV725f>%HA*xlEs4kh`n!y{f==qib(~#ujn|c!jvfiFqt)?Oh4pl8zuvMWvTTRt!>{iq1{nL@^*_;F&P6T6(uB$ zPGg)v)?0NckqP3KaU*#~iwf*Nyd`6t1qJWd^093nx3g~tv7O31j*6c^ee#VHuhm=60t>Wo=i=;VbsX2m|AqHNG7%^ z9&EaF2CYIVs6R(atuc(uKs-ZzH(?l9sX*+=sbvS`>K6ujYsn@5`C~ESqMm84oZa9H z9U*El#%f0EhLK6T^24!U>C3%L*?N}M+=|kX$?Bo{ntu>26ThZTa0PR9bD%l-G$Jfi z6j-3-rg|PZGAsc&0FSPbDAE(iR3kxy9s#t6`(QC=+QtRge`{mWxMF=(PAs*IFDc{# zkiOYBl*<>z9T(b%Wm_=%5fa>GzU<;XEgqSSOMFfDSxf8NM_ZAN5=o!Vk|H6Svvvfg zKhwHBBhI=CC$~g;%Q4)he3m@@mC2SI=pBAotXXg+{u=`WO9O&G>=z0YgbY|Ouvp1Gs#w`Aip2PxBI{O-QOt&K^c|K6j6ueh7>=rvqLtLjW}`EMhKu`T zoYx*NBFX)hMf-*0UPwzMclm`3C9t^w3VpV+z5+3Dk>5oYnG-9BnHpAQFk-YdP?a%^Wn?sD zq*++6O2XNK#Hr2u~s;4StR8v*(92K8ZnMEW> zUiV=2kvwii!S;PWru6CDLe%_Z8J4;7jBz`Mi1*9t&*ZVq)FkOu-tugmztxz3_IF2y zUeL0LEC%Mb^#V5=!#P~jDl z*$UbG#ARWB7A>geSx<_cEswZ!+oU{LG}TR$V>Erk3;Wu9Y(WnCC=S-!&+ko%pARLr z6Vz-Nq5xL?_^pu4(DOR3{<=S#iK2iem%pmWmB?1x%=J`X#oo@Z%SeiJ)$TOU!b9cgg}N2OqrIbN5+a0IR;3g0qno4;LK11 zk}-adKO@mjmrK!2ojQHCsE@?oc_ae|WSxF%i{!^M3^FK5n2DPNr?qICI4Z;D=`XJ% zzp*BS*E)!*|4Tk}GeaPgXW8{?)JW+8tFeawTTOcKI`i?&yk~W{qq}6B@yZyb%-_~( z!awc26Z*wsea)XwaEatCDvvr_e(Gp*3URVL=T7zS+H3B+CYi#e!$ro52MZiY3AGiX z0oV?)*2No(&|xS}r1V6N?uwF$?{=gpl`rMUMPK|BCC92efRdY({r*MG-O~O=>qqEX&MqBfBzq6Jn zZ2z9AnpDqglqn!gvuaDM+zz(Hg#Q+@a_$avmWWIv5UuMX)j5Ix>A4JW~d)1>R;a=(1&Pba=KGlD|^Oq8`nN zX|`Tvg#P%Qo_STGMOG(v62nM`{KyIp*u;Cha7!O~zi7PUr8s{t+WVl?oy2pAA^=dT zZ}+gGhW(wJ-Mm5br1j@8uh6)|Fj1h0_b;Bx%bN0$k_aOyDyFBu13M9>5VR)#M7a1n zYO=NQVY~|TD)(?2kA&IL+zTpGT!W3FXEK1|P6$KTucPp-iMXJCP6OhR^w`$w>&M;@ zs&B6(qnD$MGk?PwGPqm&A2I$;r{&BI#lx4Uwl4F|8bi$wMZ^eUpdU2!fg%&$UNI6BTct z@l;Ll8gZh>kxT-UL5YLwQ~sS73f2{IEG1c9dunRC(G4tITu>gj?-x*r!iBi)c+$u@ zeQ$mGFbz8}Ee?W$r=eWe>u(_jg4RoD(R36NqTR^QB@AePO`Ix<5r-;H18j0rz$gaI z<1`-ZGun4V?bVis3zxP|)7Dg|(O5ZLuLUF9xB+8Q8jwl%rbVZm|BL{f*g4N-l$4bH zB?xOTH%}kTwDI_Wj9w~wmwrHoYp?Gz5XO=>qZ7;^dZDG$qu4s@rHw)mFF~MIOuG{m zWY3xSQlCP7xfyUoL#62c{E-1Mblw2q#WWqH^L8HyJ|jaxkV5Q1F8_GtgiCNJL9er-YwgsV|KtW#}}GBUjY+K9#n*irfUPhH zap*7-$#a>n(pm%A-=2ZHzg#Gc2G}Lg0Eh|)d|J-Qs~{+fYso4z)qquezkrChs?2Mf zQnQfJ4Trjfgoxr4;L$PYtf;aY)3;*)+gMh)`}~HPO{Hb)+YMv);~#epg?qfESGo-Dn`7ofv9m8IRl>1I@yd- z$ulL#GGxN+zQ&TRqtst26@pwqaQV^dPPuLA%IeOA)~ATbiXFFA=BN*R8LG3Ku~w^G zN*OVMF!953fDu~0WHy1fb6bu|>Q_&Yi~2&!1z<^v>!T zOx~2AuQrWw8{-s`u55~R6}*MbC6!cio~aV0@xCo|fhgdwwZJveiC(qXJ0Rjag&zrz z7E++3o?9Lp2}5gEIwzysO#<_%^*(Okp?MC7H48O@xHV9ee)|wUX+1}3^iqEia7nbr zu%X465l+3}wL^hH>)v( zB0lL_WB^nxWv({c#8UFl4`y2IE;Q=baf{X{&70n+rt!CVw8zXxaeO}V_bcHeU6jqU zuGjrqvs0D-NW>GChF2Oix$$4d*dCi z@j80Kk77G<%yvAf;~*F#ueH|BmyGPJ^?N@R8+o)iXsQG}h@~V3OgKnRp+D$a5T{0i z6BSFP6K+0Po=e0&>GoVAlD4t=&dA8{;Lnvdg%_`8uWJV$U>l#s^85npBic(GuaC%( z0>39jA~N|n??2Sh;eUt&ejr-f-uWC8%eFg6Z!=vkU9vC09qC;IF5u2RghLVKo>SSG-_j+ElU8x_$RCzvmM*T}g*B$K zZI|pm-?|WNz8=%}VA1g7w6Hb+?GFApjJ`)3#CeiETYT{k-6mdKwR@f z`C$@-U&8goql~)%Dy4ZogQ7c?gkd_ks&T+5g2UzYkC)En9;#u^-Is3yDUTC$s&e zgoH~8!VVR+$M{EV`Mg0ps*As!InVfF+;MyTgUU%1_Y;Q3BvQU0jLb1rkO(eIBhk)= zQ#3Ws%1e9Jwo6yW-o;%AC6g$w=gY)74Hi>s9zGvO+Wui&@N!pt@~^C#TjBfF)?{tB zLP%NP#-ZLx(e5Y`buV9xL2(X%ID9k;s&sQYpdI^Yxr-o1^onMM%PxX$v-j-92nbqC 
z4$N4Ii(qk@ovdi?yzHhCu|~f%*2AtYTBMyYN?*?^x?oE_T8TuZTEzAqD&#nn2H`zyN_T02HJtx=gmG*m`;|*Z+sei$pmUOI*>QQ4_e zF_2wk6f`L}-H}Q$=e6d0T!)mqmOBoED+7Nfe^E@BIq%1ttNe3az)Cfc_7e@QJiMG5 z|B5PHM@xE(Orfd%%<`Jo&7gWMrcag82eGkUu@m=Z`e8Ctw$mCs;`CTP`<_O6W4iys zMz_g3XGx)FLliFy1*;)NAjNg?>;43eY^%5p51}k~*Y?H8Q`%rk0Ff{F%5OwyGKKUN zHD24CN>P681{OSWu3bx(sy-<$Qb4q-3O^;em=olZWhuW&!XM_*?17-Z57)jQB;tJuDKTaJ z{a&dFW#Gr0QjiY`4U{&z466Dli2j7FB0aSbkPCeVZN6(dZk9dFyQes<9X`E%C^G~# z%G7&XlxJnw961&t_o4Jvnw;^kBG>Anm+hBaUU-E%Nc*+Y5-@l8RCHGqa80qJ_?eIGxz~NRg4b^E+`Y*>?X>ZBCu7@bmZ(ht+10 zR8-}K$@}i)C!{Ib<%h_jqQXQ zZSkl&2yCIyHmk4xj2C-2o2gZNJe-@fl|3J{Hlcgeol4xKsirln4)@7YNv`I}#684U zsZn1ay0R-HldoTeFW0#piIGI$rn%Z5<;Tm@cWA~Ev5TJR_3<#=__B^n>8CBWTDYs@JAb zla}3BtNr0{UVd9uOHoKS9n&Sn?N^30-2$k zlQ3eh=~-rx6ZT+8^!ug%(ceV8Z%vabJ6W( z`-5n*$Oc0nFG;OPSX*g*h>PVpZ)mwtiE|o+}_gHkr%y6*xbXOV0UZeVG>QWI4HkG zTj8>GsNB@&j%A$b6s=CqglMJ!hwp$SPT;3f-DP>ryh&Bok0V%7$Zz0;GNE4s8Q>Ug ziCGV#hy~Y4pB$Jv3nMdpfborb2H>z^@l{^LldlDSY&(Dc;x!+qzEafN%=0zo^W#NF zgtE{oqkwPLfAY51S!wNt&W@G})BCfvby+`Xi8UM2MMjg|>X`YYN0n^~*Z3d>k0pPr zNYjfDP676V@2bw+TfGb~uJBSK2cA0-Ys)z}?5n2)r>(0Ik9WT0J}8^M1u&p2M+W}l z$4d{~kRi`fwcjY1l{zOb7Vrp7i+cXyZsAy$=5x#w?N5VWVKpP~9D}t*hSM0?ZN_R} zl${-se>1SMZ$|g^#e2z7SPDf$X?^jIcVOC;qJhOOUY#U3pgCJvnqKT~_rqa1 zDjL_i^0|IXwRU$D0&vqm>HVafk%#@zcQxnX53)k$h)2Nbrc^A8hlvi)`6OA$zBlp} zB%m8i-Ph6Tgu7N>^dJbWOYhTOnqAj<4-<*+2%G@sYA4m{Jt?s?e3<+49&;E7$(XgU z6>{kJRnp^bjHkuj^TJ{D4E%xZz7$r4?jTzFRnvx>)MT^rD0b(7R>I*y*yc=PnEiP# z7#_Qw(j2xI7*6g`o*PNNF_w`~`(s@to-Q~27tytJZF8suWV!u)_|%I1jAy&|^CfDFW%J}jS~6dB)cfB&f) zwJG8t8{Di%duzC#L;8KV_{RnBtf(bWs5{vqD|L&9eo+$jt?pwC<=jj%cMUV`ne5Dj zrQ(z)KxlVi$~vCi{zrjVrfOO})qFFQLHp~rh@G6=;oC3T z!ImL{lt#N-9kZH%TS-Q$O; z^>ACln~`%)k=`9#)MqyTIPtL{WIM5x^eE8k#gvzbNAK@gA(w5}Z&Hb-rM0NZ zwR3n#EmITUD-L@x-D}jY%sXX^d6TLsm_A}B#H#i8Oz7dGT$cy6*tEBW(38%!^K zYlv8xu-2M(V+-otPDc=MjjT>0dk{agg4L0Yr(>v_vTRz4bz56=#i`x4;XeBo64u+DR? z;3&mWun8P4y**iL%Nn4nsRk9CL;SW=y*82r~%PH261e5VoD~l^#%8 z+yiT61T$QN>9!~Q8N!;WaNGJkQFqCLuP88hnKJ>Gv8V(dtvi>#0uOX&f_Ks@^e@NH z*>u%7`0(xpdL@I9)*QxPAn=BNCcRVggi9+Ut(tWkXmL z!C<0v=*?*&+|M3kG-wMwaByIHbkhLj{TRafHCE)aIVlsq(K*X@*{(XF%F}p%dIx67 z+Ac)B3y<`iU!W!4sgL>o+@^?e3MBOx3*6~4G3?jqC1PlOo3pkXoMm-^exarHaJ*4V z975-dWuuw)UZ}e<70V4 zVI=6r>fyfJ1fZ&QhjD{f=nw!^Twxsz+Rrd9+H+Y}RBqmRj@s*M3uxDqq8n~B=bbYp zVl`|Ib6;6+_oe1h9;5r0S&0&^T3wu{Lp*!8kCUA;#31(SBg3d7j&Qt~T*RJ}0jYsJ zgSNf`r+;aw+m?NxXT5U4VNAxPi7wx}l&qMiS^*MO%3CpQm-x)``$iI9^$`c=({{HI zKxX;Z%2x#zpfa`(Z6c+2R{*X*vITrVl%jWHiWv#YMup$hgA`5I5j|+g0Yk1@? z2zytVL~UgAOJnqt5g{Nv=<OFk|n;Er<(! zOp7C527v{J(f{ey3XZ>|qAcLikgOzdaqKzKKaV0xCp=}rQ1My2ejt0WYX4ncSGp-o z0&1oz<{`S|Cq#=a*bkYH_`YzL(fQt>T@}e)xf#}P$-156+KpT+g7LQ|nnsUu(XMDR z@PKE#7kG1XVDyHpzlP-QLe(T@u$DK^@8xr|oF3VcWb_yVh0loti|&U)h<1`A+p!ZL zY=N8XqSd|={|qyVvCA)?yMmB={An57PY=YP!LehNjRd8&22`ec{tz>U2Lzn; zB>i5iO`6j-p>KCsqkX#zK#l1m_r)YA1)%wD**@5j)IyQlxOzR1B8+j-mFh0v3-q*? 
zBY|j9`zu)U#wl!F5Ci@wMEk|q^+0yNKfwt`hxh4p*FP|r!BMbU)>-9_@Op(WpP`6C);4}W`FW?v(8EYwlkGqNT)*kF8KZXi#wVNp{Nbrmxg|Qj}BS5TI z^RUNs<(MF+&AyShOE+lqnK2r4@la$bm7jA-g)il^YEONYsLWxQV?vA+YdZc#!2AgY zqKl|I_nhhz5~0Xbt&%bOCT$nx(c1g;dnmRK1ACIs*ry*Y5MaL zw2u2U*2%Itxz&9i`gu^2tOaMHee6R&e_rDKx%-Z%(PI0;ckNW+FELx%h`>v6@!t4U zS^MNxC9xMugu9|9w)>NyS0u+_-eQViH!-ZS?``3yVKGi@vPOs z&R1qb@YfUlI*6%VlH0*vv~CRLfxgxqX{AI^oeIu z*a&4SQ`^4bTfwO6eJq$pGB@p8S>)Bo*tet6U=Z0gi?*%%#a z(&|V0CSrN#-Kgy-Thj$gpA6h9(FEl)K3TkEcbZaj6|~FBdE9cLdy1=H-sjeihueZ{1t?;1B2c&$sMT=b1xI zo8^{g9P~by@wn zyGI*HJFH!ue9cS~M?r*DuduPd@6qjGWx-Vf++Vz}vi=&;SyUj(bUPSB0 zYpl>Hl`FW@kUiGnR8vRAKVFas3KZ*wz}(8oSCatW^HQA5_Ogcv)ox$a55#XSs1iJ{ z8b9XOI*wDVFt4O*Md-rzw>~*qT5q5%f1j`!q8fl}la3ztVg=RNLL{GX1P3~np4@is z0?vwVw}l;!2O)AKV$u&Y3SY5L88DRQ*W!L zT% zzE!8S*$Ons3O``2z$f%KAQ*MRd9#q@RJt*m%FRl?4&f_58l6Yh)9;yYS<%~QtayQ< zZD;tKmZ_Op@epJIDyqYu@!K7Z<)s_I*=Qd7`+ON!(;cBPGl$V)<{pA%BL@Qa&^(=G z+b@d{XK=5F_o~Ou(+>%wxD=QAhbZ6#;wO-Y=qcP)%dKZo6*YCJW0DQ(I84lgbk=ly zWR+Y~f{lcYL{H>QNL&W4`K7kF=uJe!BzrVF{i#NV7{y54e!@_st@PZPdy{-vB(40B zk(S(plsF=}R(=%o)of#xJ3EQ|eWtec<(rf>_5mW>Cw8<+I-;gKk>8pTH-~Kx*6eU- z&vfgRpvv`SIYb6DM9<#NwYhyk$2W44Kdv>DHdz0gj2@qzSywrF_@ugi>Ev+WS_h34W31tP#yS-3VG~^y9&gZwc z1Gg6uBLD|HXp9%}q51VbjZ`AK@Pl|Bb6#RU7dMwgXFe~As;?YGFAkFy-Kj6cV4o9^ zl3^nE_ylg6U_nks%bp|}HfO44<>~Ms{55OwK1?%VQ+6SLY_wx6@?q3&MnM9R7qpfJ3@!w|KaV`Ps<#Y^-wISM!sK) zvqarjZtE(c2dwAy!93!A0{3A*;x4_fW|4#R^9A@00CPBoDo=fq;;wd;m9XOZ$avct zU}!pgc6KsXUXZq%8m6@f4QC_Hm^n=1M2L$_yIZPcxjJs;MuTQ>;6x~Ygt>y~y^EsI z>M*76T$>kQoV(UYjvG{Ubo5v4G8!)Yt5@0Z{rzODR;-+yzv-isxXat*AoXp=VsRd|n0`A7+n&Ty6wmkvAlzw&A1 zW_m3JcH!Uq%tp&sNvy-3gt>fK(?2syP3_EwnRU72ex4X~6(PHL+_L-trMi!kQ~5Z- zp+89={o)+UEWLb~6{70hoU}?9tDK0*>15~I>j_$OX z$moTvTRLJkK!j({?%eBG7SCw^>Gp>zu@fHeo^LT=I6bw~^gXi6{A`p)V*%3m06pqWFiNMZ>JVWVq@%Dz=8lVjb7Ymvd z8%kP#k|tQZm9hD0P4XlJTWkn%F%n8I`4M^ZwUS;l<|Ft}9^&Fsr?u%Oq6pV8(DVE**cexBO5c|QI` zB*oKizbegcq(7XNQ0v`;Ux(-t5eLiD>T*AspGmOrN)pLKYocVGp}L%y|9LMa9BJa| z(WPmEhr~y2aG|~;<<&-?_bP=yH%{qxuXEXNv=GVaIUHj5E%DsU)!FqEGG>}dhPNFC z9UWngn{uFyKibwc|LkN{b5FW`tT$Z{K>qP)>8#2UlwvA_%HI=9zIKtadXBiZBPg5~ z1l~81q{`u_kmo=yXZlzE06faVgo8r@oyHYrrmq(g`FOP4dpN1|8WC^D>PlHf@0%w@ zS)5yedBx(KFDm*87(dufXGI(i<(@e*&OABxUe24kHt*`GRKCfDPG6pSO zlf}DP69?LGTOW6`q^ANJ9;PTqqJ974pVU!vr?gthSl~rop{dU77E-2!s$QG) z@Kr3 zH4te+s+>)h=zuUV)3boDc}U5zhKVkY$&(2}&-ye!2@TQd{1%_dgY^g=&jOUjn+P4p zcO4`WZi911s@e5VnN8kre>b{tP|n&8s%fca$%Shs0Ka}hYGxGgbyG!?0ATU-_{-1H;y_q*;+Jl`0u4FdmVL z)0jY652MR{SHFsVB;7$tJ*-^uDLpvumRABX?z)4_VD~2j{Lk`;xCgr=y&gPH<=G^& z^xGjC5V_FQiXp7X!fk%TOC1F<8UuOhpgc*{mFj7I>z@ki2n8huyE(@ukPNZmXqM@2 zx-Xizfc|?V5-#5KrgS1cd9_JjQFpxio3p*Bs@fzy`8^eOV}cAPGmZ#cbUO&0mQhM( zbLU;?AU(%TF-^$T{(_reiB(Fdwm2uG(9V%lycI3mGg^$6-2R(pH8^}QT$zUgCcCOQ znCt%{?*4R_O&P9+M}qLFCv)FYG)Go$zLGBeJ>?a?f>#Q&nnt#Zi~h@VT(kCm$0=Qk z>@0q6Qfhw{1ZS)W7q4~4c+Jf7qsvwp8jOn)vaplaJ@ zMJ1tyjJNJC87`mRumO3P2;$>61V1lNTmf^RzeBL~x|A*7y(P`F zDiZc&E;wYoWt<73+?qu&=o>_2oel1}iuTg652D*TAE#CyygHmgr-+0^K^=;wsOpbE^02H z*+sY;himUA5}3yb&~z?5Ph!W7)a(M)5Ss_&5?{DDw#&GfJAaaN-apMNnM$h=Xx`3% zq!F5QMN05-xRIQ7E-{?pjZOzaLXiZ55zu?fpdEBm@t1D4tRBU7xV{Y7LrdW`*PO}Lb)9cFj}lvfbb%0))i(Ci%&nk`OIE7 zy*(#JR-5SePK%86VlgVl?3t%Joui|^3o**!xFV9CZKg1ld@QvYN9*bv>HUiD3TjBtUcYsbp4iW8gL#xtGH+X7wr_z@dZRZE&%3F_?e z=nS^WN6V*7!$SIAB!vNTnYw*!g{{}i+n}=5Y!G^<((ofC^qQR*pG;X&d&gV%@>?_Z zpzF(G;!Ynlh$e(&kuxZih`}C<{_V4#ydA4h1UkitJKs@z1D);{mqE;2J*hlsQ?d8? 
zT8An<`y**G5w=?IY`gKZCg`~3x#ThULmb4OEU10e5_MBy@Kk9UggK-68!zGRQNj}f zXnD+x!@jyED;C^hzd`vag2-MztqVK7@&-;@Uc2z z=H@eNj@q!V!C!Vgd-Lj=;6g`P(j)Hr z534_JWC(uV`$|!!+B?!UHzOQxC!Eb5g)V`bxyHzwV0RaV2fN0M^?4%k&HDVhj zXdf8M^XFSLuP&SKRKFXHlEx?P+rQz~QUtmuG zLM{T`q@Z;q&|lsGP7swR+zu5}HmqZyZ$BlVs5S)CkRgmM%R6y12_-!8%DxVAZT07c z0PO$xkLI0RbpI_+PS-Tp_)#EMb(uzJDVn;BA^NOf3b}nxI!SjmKUzEQgmooOu2t>( zP7?U}lgqd4AK<1_S{<+H~J^t zbX)T&7o5*!prxI|T%%4Ju#3vb?8W}+!%M}-@zT+beY8=B+P;Z6VJ}CIwVSd5Z0^hj zwayZQ3%@b!Wb?)d2g|IEKwg-du;~9@h&YwW8a$blY>Jne6x zly;6tH}cbOwfD7bJf)j6xSO@aJn5m6PQsJMd;zbV)bf<>k&uYjOo_-y*C-LDJ_N)} zqS-=!o6KS6>#4+jtE94!jH+k9HD8BSWILZtP2~0mWMTHFzL0g>OmWW6V4xEw-B`2a znrm-(ec^6@Mw<*y;!`;qJmio-&*0*Ndm)~ zZiJIlYZjC?tr&ANCX?b4<*nYTLH&+gG-Q71iCaVl1}K`+1x%76pBH#b7HJU&8j+6A z9lmEI6ij;PobP&JkGdQCuO51t=_mi*ikRP82ia6w7xkrm?=8#s#A z+aHpey9P(ahxXY61@KJ+1XQ2x*fpeL9>}X_dlUt!B2G%nN2oqSYC5=Dth_nhY`8lk z{JtOgE~I@x^QT_8gt0vzR_YkUiEO zjN9Wc?=KLykrMXmnPDXEKQ=Ex=VHQK+9@wnB#c16FL0cO5Z_LN(>UuUeMudzJoBo& z0EbNGm;EV9vsY2ptZDGYuZq``)!L)$oKCrlY(O()A=~5kn5KOgFEP74X=juuzsry3 zrX%z)3n=u3K#|(qm2ZOdq@^{_(Hk()-2xGI{wg+;cgAomt{vf#X~du~LV-wQj9owS zf_qyBlAJXAv|qm;RLY*F>X_lbm%b?jo<1RsIOZl?ny~CYD|Be__lTzXSXw1B4%$ z25jDu*O~`|Hlxl~ms@(<#SH)bgL>|r`L4#3!so&>6N&1V!J*W3pPJmQvuen{n;5(l zr2elzaZ@R% zALJnT3k1$Hg5L!Ga(*!7A_?LJ9PD$?@tduXvj0MYS=pIq#C#HJ8itpJ@eMbt{2y8~ zenL%uubl%)s&yx!)FK}VTFKSDk*8SOiF2E@syX{OzWdC}9b0-kQANE5uUw+5xinVJ zJ8ohP*+<%rt)csfw-i3Ky8z$Zm5ha!!V3A zX2o$`cOpVh+rqUQw)7rw{cuSx!gr?x92xO+Vh~-XC!}G$5yRKN2AQI4m_O@pZ#3AjcxJe2z{gs)SR}mVJ;o|@C{J_O!=xu| zt+j5)7$J<;i2}y8(Hyo2V4Sk1DS7I=o8(hPNiewMZ_Mu6%RCNQBAzAu@Xc|g8>KcL zY?+JTpUDOD7A&*DhAka8v8Z(Q9lrlepVgs!q-sQ~5bI27hO zYfc!n+=M);A6GBkD2sQpH@YvlcZmjY1XZ0%(44v>v1I-OlH*O|K4)e4DaRJ9el!aW z`5y}Q&R$u0PEvY8Uw}&;e^}#aXidbPO$4o}fO9u#kDxAwa~uOyO|WuHYYV0Dc8yq~ zGlUVa|3}nW|Ks_;eHtyZ4HjKHX+9?TY<`W-o*GR5j=nFIxDTDs&&^$n`pB_OM|S__0_si zbXFo}Gj$THt;XU=H~dQ{ajg~_zJ`MMrJy-ED}yLlJ1F?YaK0>%DMaTdP*#!}gSGKK z{G@_o+;_5CBxpJ9ZQ%)+DKa^Q>kpf;aBqJyUBQg;#uj-q!g;5R&M>#*o%c56tkng? 
zSAz;X)%I0v%epHyuK&!B0)07QSjB6EoW+1&3`tbX5)Jgs3bQAUdOlwcN{os zPbv%-H&zH{oVYACwN9?=^x%hDt2s7V??fT%Wr*W!!qNwvWLg-7+AbT+nu0&~i}Z`p zoGz&gjtS7EPM!0L27_~1nV>th(bUXM#cQ9Mo}k=gG_ZL3Pr8xhB1DFz63Abp@D|7GVzj>vvgyR?5j}OGO05G44@+(k7#lHY3RJke9}n(aGczcvo>yd^X|K41 zooT%3j~<<2^s`XK>ejx>feg?TDTeApxMDK?69*&+>yH|<6=!9X(MhOUvIc)dlrxQ- zCHKXkB4WTw`ohSMDewlX7nzcr`!R-7nbj|s^n|gK!1$1Fvuc9)@vJ|e`?ox8jbfPriGylyQJ~np86cI z?RHGhy-c~@Mxj`>q~K{|#Jo``8t_#Ois==a!VO3p=DJT^EBEl&17&DumD;9o?;=_v zy<7k#1tK}j(Ca*$X2Uv(1Cz`xzUwdN$4>l5LM=s9&?oZGf|M9eI6^0+Cmw)|MJHR}_qPo_mtWe_wp)S3 z>S?;Y-F1@)gbo5+sU*DwHLndBRq=${7Z!FMMn!{u;oTUA0c4dP_Tcj*zS>9Et3#ex z{e`Y+Hco69e<_=PukPKAmFF5=E{;?*JA2x{Ioea%s10*yRHZKQcl0UPNy8*-OTfx= zwS~`P*48@q3%ac}J-|#qd&=*-DS_JP@oh-+QLy8QLC|5qeYw|^sbh$SDBL!Y@hB&7 z)#Lfe}pM)`uA@g)Plpu<2X6Oru>cEeYN(e`E+&uaHQ}7Q;Y}H?kO)M#lcDq znhFaCC87hUPNg|ZxU;cKK@jmdY4^|e2q8NeO$Me}JF!UwZ1dL9xn|{Q*@7HW zV;7PO9UMBvwgOd0R3ZgC9L}a^S<$&e5Q)&jphoJW8|{wYzGTm3ay`2$WTtTzxF=*L zMi=1zy4o0+ugCXHqT%aMs!TLX7{0Qyi*AgCm`@sVdIj#)-7aN`-)pJS*~R$Tl+JHR zHsG9?(n)%{|GoBKHv{3Nb?0oLUR&w3O+!g20+W2$FXuM9?#xl^`S@x{=_K_NIgm5^ zrTnKf(Dxh1g1=N}92K0OOvkMEFGvx($y}EDP{&aM2nl~sP;>LqkrdAk73mJdK69*h zEvkLTw@{lbmR)Chzu@%pF?{L=m&!&vr#;h~#%!S-dS2pOu7AJNM8DPE%L_^uk+_CX zMC4sxp2AYNrjlrzanP9@FLKcGrquM1z#zW9%@PabnveTR;)^XQ5WC*T zBdpGRv*PwO&!VR=SUHO+GsNn4vG>&SMxXxN?`K8SZdu;ecjmo}9 zhD-P`>^I~q(nmbRJ@ih&92+78(Ih^OXahxT_>%yrD%QFZ{+W&FPLZcb1>7?S1g4Ha z0XoxWAyF;A*1~iRrzC+}*cQ5gZR=0XQF7UEK2L&xnt@2R2bN%5oB=w3;%Yt6*?q-E z<~L}=Ur_un((~s?3(?HKc9)Qy>Se6E*OO^z^eQ>#&3Iz6g9Dkl_v%n;NuQvwu(#|= z>31y250Ri{x5x>e?l%`zRG{_;R7=A z4Lc}kMuaK(4{=FiJx<(|32Rq6AV0NsKK@i;qgD@(?<}kJ_B2)$URligFafSG@znL^ zh_CuN1Ce*R{y_E`FH07Ph70q8C2K3&gMPM_gtkC$`X;i!(D7j!%gx8>YfP{CnT$r9 zviegG`H~@(PK=Yovt{+FFoX!z68PY`mg3Br*%*ps!)|`-l2EwWyqq6+_)9y(@I~_u zPcyb<@)IadQyrv-A<$iz=`{Y>{B)e#q80CC8Y+d5cPLF>rl2UVQzI}CYV8<}r@lGJ z)6u3cuNg99yWGijqmHu{{n&hM*2akg`zIQBzn|r~1MwiRE9`f8s-5`|mdIZ?a&Qtm z?}(%0^2>{Jp6Rczi16NkxCQub%%FUzNClCrYAp%pK*!ab4$3xuw-<*592=_tDQl!yR^wYs ziMR{^SLILuS*+QX2_r5%yjt=>>Me$o=kiU|CSCNjx9i`}cuxEUWO|Da+lYi=nM!(O zz`t-Kh0^mqhJkG((><%(anT?%D6)Ples$81B4gHp9BtAA6nN23K>RJ4LVct1750FBY0Y{RTip0{G3>}X?QLaJ>d9dG zHo-`b<^4$#hlt^X81eY}q|drCtH%_Qk0bIsK$NKE?2j->SjRi!WZZOQi6e~r%L}p* zUpo2CIykJ>nw9c!G(EYccB}ZsIG}Get|R%mB7lSnLqTy95A^}bhpII}FHnf+AMAsM z{z)AsY}1!X^P+-pmx*t4YZ6jV=y}veA;Fw7U;Y3;-=o&KYog+lA@JL>B2mrW6~G}0 z3bu`s=(Yo18rB~QROjgw8CLMTcwqHmWdb+EbaR`1>;%?}I0M^_ummxiUZAIsNZPssD{ zH$Fv$5|4(u&JIA=_2x1e6}(PDE?>sFu4f-g#ZnRszIx0#>Oc_f5rWjG;T`T+q{@Mi zju%u_vbe5;fV`h+2R$U$)Q!nY06587zBe*|IH;2S!t6XD)&7QaqK|GVV;;kS;dahU z*MU_ZGU)llIRf-%L+5e;NgXi zx4c}u>;C{s%~lm?A;}0wmWn5on#7eD=y`NKhIG*??bLH^%hhB4IjY+`*8HsE81dBH z-teD29d}x&HDAt5$?>NN$(qKry1o;Y}1TzK9wFC#yh2)!i+yP@Kos{sJCcY&@B0-4U5HMKDB z!+^o;>UOyHUaZYsm7~a{k_^zd%y>Sv9>STayjX)B&FU9JuHs^y@uK1<+pma5rtywQ zaitSW6iQQ;UHlYzfxc0Ii-kEt)=X|Jy|P$veXS!)yM4^QTqeG+k|H3IKE%z zu|!iL%WH)`vM5d6F-!YaqZfZAu=`Vv=VQ=;3+t3oI-dMpp^l8ubX$2;KTA!-`qh$| z7$SG42F%UQ@Igt?tj)I{`uCBq7@t9j;ew2+)kCqS28AWYNnjy0Q7z8Lr&;!PQ}v5T zk`m!91vaWfyskoOKE#(x43vd>oX*>j8523s(ww{9pFg(_Cy{DHI1sRmR7AcCJLv z=CG^wlb75}*P!_V)YQXxu)isSAx(vl%;DxX{Gfk0PvHieS&{%TA5||Sms6#7<@xtG zCWI6GlsdSmXOC`HfDlp(a!`b}D(`gC&77)ShC)$Vnvd~!;?mBozsf(IouKU`_EOus z&pn2jHFWJZFdv}@ToQy<0128iLGK|E0^9L|mmCw+hy}deD*_M!=k3~+7G=h-E6INZ zn$UR4KTC5#4BW3*z!62W<_u`5;&RI_8{ojp{JvYDGCn=6`j#w)97%oiOwjE>2aBJB zPr23YM5<8-v-;H-S$;&F!-lD$YXNU%rChxAY4li);439EQ}z@x=^rio&aW-Ma$`yD zz*&w`yT5->kF4u{rf6qzA*C0|5^OH9+C^!gMW|Oq5$_}|kau^3`K|&G5PQb6LW{Co z&MO+bD1z}1D2*H5d^xzkaR1c~n|Pb2$$*!|OH+ zstoDPs1Jwg^wULKcWXbhA-l2r>GEs(x{k53E%k~fV2m`~9Uh$g9E6}At?~~k 
z%klDG896Z~d0rk}K>?}j$F-vl9Ni~KraL@_02(*-cg;gUm672feW>E87O?e991V>5 z^`0KLD~$(Lh}#mX?ejWjzo#^EM1=uHOu%~MFpcX;gzPzL7_+dqs-s7zjKJ zYhtP_S;hjZgiMTmdJ%VIWwfM6X%?jnV!M8&^MBUs0nlb_!q{i5N4-ZyT(LW#xw|!Z z+m#6hYRoH#`|2;fQD%yBA~0Vh_Bu7}{!SbV|D4$rw85+*X>ZcxnCv~1crnFlrym9k zGYE5jN%V+^U!>CBL6V<+y*bb~^GIXFHK}3vTsdiSIv-`}<;qi>Z+8%>)TDm?r&>#u z>Z~ zHsSOK{^chW1nstM>wlrz`_CtzSMEOz`B!rP@GCmm*2eouj=N-gkFU~wUpDcW448-E z{kLh)YiS_b>$LBPHI6OO-Hcu%MT6xLmr^lWSf0;irvApWwR`NELstUEw z>nkHou`68SHeRu#Hj(59PWzV*9d>~?F}wOttEq)i+EEXGF!MHUA26F4)_%WiwsGBw zenNSB4h}l%hWZTJL%(md6rhK(*jq5@bC%;ScpyeejR;n0X}p^(n%brI(cB9S778kf zdudOiBA-}Rm;J|U?KE(&cQnACwyTI%&vQ)1t47EifeNo7E%{?bl z$R4_*rn?8e;3xg#l;DiaMz^69DM;qPRDIT5`9Qri^%)qP^B3Wno(5ZPt?r`Ua;Si- zPdYL8yN6R^?mj#wo-m^QU`iFwAcdtm>mL+;n5pX&0}gyX5Jx}OTn{@FxETYlB-_7( z!h8AgySoy*0)+H!_2W;M^^PwWf)DGWh;h2%e!oSRNW{{RUafl&`X@i+MF`H9NUPrM z$tygELUe8ZDga@WmeGx}=qN{SeQ18dI#&Oq4{5Y!Y;{PwuHz;arF&leK{-MMEzcz4 z+JV1%T<>1|tlB2^tLS(9T(|FBEi1qi!n5qx?nD^o7m4GQvsvm68T9kH9yC0dJm@)! znJaLT z>QfFWHZ|OT1gro?M1xfME#caQ=%Ls-mxG=DO^eussn@P|@%6jHxM>0&Iqp)W0D9%M z@gl|PE28@|{+}p@Je{ZOUO)l|Tv0;TV(O3Vb)m^mBL9A8!U)uEECyT0C55|uNl~tI zrr0YsZ=(yH{ZG=H1&!xNN$8#pPuEp6XHG`RWa@l%IphTjSxHg#P%}IZ536WRU9U&J zZumoLONntJh`8A2aXBT?5mcn8+)KjQ{)Avh*zfWfC^igy-dC>uP=v*gMF|^o(v!iO zo@!m!Lv9^Fe8sZe#`$U3n?%U!qLFF>g$^2!D8xmN?Z1;olx0$yx)RjkD<4d zeDu{K6(G&Gs}T~_e{WmEW1RnOo-Bl1E7{xM>OJ^UlPnHBtsw`&Jgi^KLtFZJDsPWO z+JE?u${U~AA$8!J&#V0nS9jl8Wpq*xg!qe^^`P3&^8~vfgrifNoW|*2h zDz0Fs%Ey}cJKGE&O>UiGP~eQU7VvA6$jb*uePi3y)NkW?gQ08hIqpoq2zXNaOmOM@ z&6HF=$FWSP?3{L6w|$Bj`<{TO(@syjb8YGzW3Z?$_hvI#dO;8t9fbAT(M(UiqhhB# zJ!?SSO#Fg>)edJcZ`psl$tM7Oq!guMyqJ94M$o`{>=Pq+xG29^*OXPpzcNEgt3xAcq9d0AxR;eDl4{K$>wLq zQP*@xMkqi9Bz=O@8tSt zY%9>Ya%%p(;2eV_-T;6ocxZuWC>Cnsq_|*bXRC+=q|726;67pZ!7=L7hQo(C8lV6@ zBL)&87fksz3W5%Yy>Za}jR73v-R~F2(cT(4xLpSL?fJ#nZX;PJ590=q7Y2nB>CqxOU#<3@tl&;FP0=? zj<;ak}&nUk;d zi<0B-7?Z`PHf+_!Prbxobd5!)Qzha0gC{;0;$mQa0vkOos*5m%#jCLzm>K8)O3~v9 z4+OT3v#awc)3K2x)tJI6-zws;TPzq7C@U`26inOZxNpm^c;Z7si$lU?xqP?Yh0DPm zv+1X&6J!ei`d7H)qRpYFfG zVA@{)9RaZj9rXC~OvrH4o3|*eT}p|FR%XRD7`KcY+*4ul@pXv-z;Igz2d%!|ckRYj zY7>HfdM-FXB9*1uXlG{k<5Xhpy9abgtinr`QZkz&U7kR!0(!YQzdQ=iu-Z_h@8r4u5S7@VJs8-^xZY@e!kkpQ>`}gpc!w=HHd`;x z*?d53y^4DE2;zC(e^&r`u0y_1FQzHFa8oZTzXM{3R#}dJUB#{_a_(lqp11QzvVi~I zBa$&D7GlYxIUY)ydW{tq`JNo`E$1BUJltN{mxSs!Gt$-ZxSKH*Zfmt#nOsD& z%gC;ny{+-t)FX+DQW-3|rWFie937hxpeP;So2r+^wantkj@1b>|GIPIPclT+;oLCg)>$?ejjwWk$@dVQk0EvWHPhh7-8) zHb1)%a3H_I!@uqwfG-reap9*Hcy^!Eo%xm zOxiHf29QEuB)+uKa8EY5k`!R>_6@&W#*A0tx>cB_C&C21S2ka^gJy&%x*jPO+8d9D zQwcTSP)ibHe-q`pVKItXx^=!H2Z%37bYfWOQhTJCk~lhM!2u`bUyj%LDq)1E+E_b? 
z#G>8k(7}Vo|!RH{Xbl;6vDr@ zH{+`>^5b~`A@S=(20w>t7R)xO*m7erXtgXUq>4ZxD1?}Z#?y?&ie4KZVsF)9#;BfSYU;TbBSzh|VZ4-(=LMGq0~bOSwmEi#4yvS+PpNtKJg4dJ4O1Lx(MxH5#^j_& z&+HXMW4>FN6^5VpKv+#Cy`!X;g2*g;$7ZLrH2^TsVE&sxQ%Yb?ZHZ!w-7~W`!T9a1 zWqbDN+Zytgl2gKNkMvU`pK z;t7|$XilSFoM(F?Ras7Y?zO__9`rHlxq2Fd%W2?Bm(R>rFapjO%m}|OGn9g{*cdaIa5*)_3X{KA$F5dXcyF<@{dDl z$a)}Q5Su>{u#j+VAO-#FX(6}7uJEWiA`hA;52Csx;D_V>Vcd5=-eU1WD2*JlE?L=0 zG27AX+#O2U*kREmIOgg#AFd74V_+f9NeT};rYTb^Wj@Z~kUFZh;E-{Bi&Pf*{Z?ys z>5yrm(WLh6a|p&V5YWHH+12LDbLHLW)QbM}^9F6xAmQ)Xtsvcg>mKCky|W$?MN)e;bH~L4*|Dq`ISx<9F~&8<{^EOR&$Gj=bA|e!`-uCxAsLPrLk0nsw+7RF)t4-8dhYZr zK4QHCwtc!5bI3C=PqYX|R|tM|LiP`_UQ+}jGt=FWZ>`Q|=8YbTUR83|80V%Ra6kzx zZNL8GC{l_)YcQLEb_`pcEb1~cTrq+eQU@UL^xD?o0Bcb?7lJM5fllka6L0`ROLLFt zvw`MP?Od-ZarVR;r0!1Ctzuy8_=t&MH@d?rW1Zq)H;j>M-bO<(P5G^5yqT5|eB}ek z+$fe6EV6!>-h}tKo{b)3riA_*}9F z3=}*@I-w57sRt#~b987xM9-~2Yn%vU zhx&5Nj2st8HJ0}0XdbtIGVd*GFTm|L2^glYe;Rr~+AEitTpUvTz=?*J%7gJhE||C5 zTz!UlWyBl5EujRoR3;J6b-c=I7Go)hW3VeOZQ3@s4x7))yXucA0Il#-3U)**yPMt3 zj__4Hv{m0E0!?!-w|@meW7H3UzU}J6e#cp)#g0HyS}XA>3PANYFQmG zRwu&4=fZE^sG8EsweD#XaM=KIQuxJyI;LA0 z{b)ESN~$;{j}HxW;EMduY^DKDoAoi>A5Zl^p84i`!mTAnpG}agElFf>>rJ;D7;b-PE&iG+O7{VxEwEk+Gsh1PD0_1RR%Z?AseL^%4^1df*;S)7{|$n1Dd+oNr4 zQ@<|OFqR1v=vuM~@=)j_xyq7Uf6_U%M3PypTD#MOu#y8~waVdeYy8&0_AKTlqpu3B zKb5s_vT2eNE-fE@+mhTXXqR@w+xIL+)a@>tII5a8!A4`o#J<`XI?Beh2=t5nl%sHk z8CsVpnjJ1u4rNJ>qQi$4)XOhUw|S~rY&w*Cj9+zDd)ezyDfC350`JKLIr`M8BqjpK z&llDdY$+n~k=X;)*J~=#ov9q36fcU#6`xQo(NohZ51XSmaE;WiF z1@O#HF5ypEpXx8KD~&W1;M5O*G<))8kFabqzfF4hEJ<_S(yx~XCQmIEGr)5Oc(g_c)I2b+ zh|_aIAAz#n1y)O&%-P$uAxjgJ-`<9fy=?#Gy9eQ9#{xrnm=H4<3Edgf0tG-$PM2%h zA2ri^a6sH*NHUyi>@Gn#$DSvm-CBK`UYMZ0Pxi&kX^Nrv{@TdUT|P2m-T;aDw1S=G z5Gd$$7)qPj3g7psNE2L?_itqER&)2qpD=g2s4i-N!5w#W+IgOdfP)TGTj zIw>a~Xq&u#cxBZL#-1VTN6q6$RmLk3VixsM9|2vQqcWU-Zh-&iT%ItiHWLt98Hi~l zYo`A5oTM8-t|RT9GGs_|vU-e&7i6N-63wWe00VU3be)XbP*ju^e~WR@9h2u@n_oy+ zMeugJ(>HH>WP5{Jo%1*&>%&D!>p}<*I?*HK%%`-2Xxi!kk#{?gkWnNO|KOzkp$rqC z(}!|(IuXLxIDeA~_>ya_SG0E;YTq9^*cbextZs1qw0K%Z`c2LL#pY;bfuq%427K~> z`DhvcTzCdpQYuDMH|Fb)gGCq*xvak)v6g0#ExxQ87-EzgO-P-eJbiG+2t$Vj zW32iAzlS#LbeD60tn2sH4C&Q6_l~s+Bkr|9I;tWJ5>S`zIL&)ANgSK*MRraY>N7DWp6Scx29>Y&1y;)eMDO8a#$GaL1 zjl7{8$`dl*$;6G0G$Z0ZrrvJju_ujHq#wMU+=RnMy-pD@98cJa5bvr;V>l~twcjwH zr@vixE(}!(fh{MrASaDY+>JMH6YZj(Y0GsfRArI*kMMTo;V~GCV0-#xbfNh{K)HD3 z(R5qxww`YRPy_3^nRr9hQ4r~Mol`P=5}QcL-lZv#ER4OANtp56DyMvl2i-WC$_#~6 z6*|B;aq~q^O!pVjaR0;JDNmQ{pf7PRD{3LDe^C=16_N7GC{%V8EIpx+2<+XGCB|s+ zf02+xK{{_kw@)rf7B|KuQpG%&$Pm%uPC7N4qnQ-bONj-~x6t(f=_A38b z?g{9hGrTprL;wPCJxWVU)=9fX($JMQW!Y*iF6?x)I)BSPiVIxTT)f8{Wg{&I9^4w= zsui&|RQbuBHbz6=t9N4u@Q%KyO;-9GqLTMyioj#&O`^Cqp};uw z0*Jb8x@2OUjI+o5jk9)KPV9K?PZ;s?TzErS1o7j)xz~SJc^*t(}K3kqO8zP774M0 zikodW3We#~PUdE|TN5S(RgC&&T5C?5095lbnpKhLiR@M$|(ATR>_zs~F z7kfylm!+4*5VJq(f4*52znxHf$a*L~gYeU!K*iW9BrS$=0*)E&X)g*-M{q^<)S^-N zY4{VU4A6JPn|{71XT`i72Qj&Rwd(~`9iEokSZ5|4=P4iqb*p#CAfENeb;UR^U8^JS zvz8QmTy>5_Z#Nl}U9V@*m9G1*ruF zEym;A$6D_L3)O4wV=xtuC6@1BmvB_dzz}a9cmf1;xk70zTiMs`_cTVN1mio7+y>HX5i5b&)d>exAqst16nxg{@f-<2vMyH^p@% z9ufsWNbOn5c#<(E(-?nZY?1w?494OV+Ve55Y%vw#H?QwSq6nZGZjO1z{#zIv(mR`J z0mN^GyP39lzW91)9xW$|Ln1JEUn4wN;}FqPMt~E?nbaCWP&+wu^UkX+(F-bdN=D)l2L`Od3dh4TRB4_adbjH!aox>aN z4Z89w6@ttrZuWZumAdLv1;AXFSMHkT%LhR_;-_Fl?1UTl?K{j4??kXO1}2$=G?8P^ z*dHXH6K80aAeFpHKXxTr+=<*tX z(LeR$QP$#|A{^_iKW?RBJIQ9cT0P?b9K5rdITWf5lr9I_MdwBm%ZY3tpjm`8=A^0p zhkckdIL?_j4{}0mr?^Ef7$QsutXHV**zg`h8Opf@AKg4;nh?4%4Gc2=2W++U@IKS? 
z+IrelK>s|_Q;N|e`turXXuX=rwusM}Q6M@CR9Th!#yw?~DmZ*rDb~Yyt~BdckS5J1 zK24r=2&&-8_tbbMKmF#WPQ*_e=$lc%RYEY>7ehHD3pD-&82RY# z8-TqjyFfmLj->y2@ausNTSz(Cq?5%FpRh2Pai{ImAf11Q9AJB5MNti-lYMRSlw0w? ze~1J3Dao9%F<9t;9)o&ZmfL>#^vb~8WY;Lih`mfZr`WxNgy4>x^S*UlkPmWLZ@cBR zrC11TYig4I;Gr98wsrC0*yE`CeLv-0}MP@v||N-Q%EwBXUQ-R44M&Oi^T zIfLnu4`i=3;q^196dmI~$$0I4CcjX0V|O}!eWq&$oIrQ4h0djf8Sbu!B0CD*D4KZu zs9#;6_@t%&BRsFb;jDJFyia*Q8Uwp??K0FT$x$&#Ys*o71^hDLbdCs1Gy|-vX71|U zYhx(a`$ipEz>F)^?)$R;+P&n#H?B3rL$T^;!-MIhFtT zeU`9#n(y8-3hFOezw-oXued_!KCw9c2gChnLwus-PaxT5(da8sKWA_MT(}NyvOeHy zZqjsq5KOBHuKK#ZGGxy8^XxLpu~_Ds1t~>*&Bc7_Q(}CPl>%^s_@31GxHlpBmP%(* zsCZAUxtixT4V0j*TuL21?$2WwNHmRIPHcse!^aPjjHlX%%NzFoYSHcN*U1?Rz=U?* zw5?6mRzCN2BX9Q=s%9*H!B2)54n5-pAVu;P^B3lLl%F&0-EtflHbmww8qG4Lx1Nwa%?kPYWNlDP>!n)TH^5`cI}L+d1J2#Katqw<|!kZ zX7~<1UQNg99ez~*41l7GP#U;+Sn8p`;3yx{=D|c$UXsn+(liqmR^M2z2vm~q2wHVD zheOT}E?O)|B(u85LYe>z_{6}iK07Qt?@T30^%&Rzxm8TXaf?Dha9x2@p2&nP<6r*h zOV01=7+PXWE(zc%L$i`Fn3MVUY=jJ`U*BUgyR=e}cF=z33mI2C&|!61&eO^~TbsG( zL2HiHrM0opf$Jn-^AAZ{gw|TO{OC5I8&s11`mcL*i?0h{cy_<(W1)jW`u-!*GRHrMQyv zX@kl`}*+u}5owVF#|<%>z-XMs3p^KupWTIYYYPaRr{XO0_3~sd(Ex>m^bi*W z)%q|a9>&r9c#6u=uSxZxSMyy#G_t?HwMjV4K;<90j3`oz^} z%e3I0+ExYSXzjGZ&NUkM5R4t^E((OE-Q{(=hpQ0@!2B!7?Az@ulJ=f->u^l}gdgT} zH$)ffdZH{wVGh`{X*t*DVn9V5b&Lfd8 zLi98BZQCTR{nQBl&OAJMTw}=d0D5hGvQom6TPE$Mz0emwFN6<817?JFow_=U)+WuG zd@!)nNsJCoFQ)`kN+SyRES!c(mH$xm%tE&-A8$xYM+deS+|Tm;R=%&VzR2=|JS&cU z2t%?Ow0!#d$b7ZG>Gc)T9^4mXAen##mOg|&lJcJ`WsQ#j=%Wk-u2Rhvu{v#%xPP-u z8>q<8Eyj5Nc3vYCoUG5YMoqtMY|x}PqnRme@wXbeB-He>boa08a`IAiWxgD4$Fk!& z-lvNg&}{DXWH;?g0kqd!oS3oP=lYcRMZaNQtK`4Rm%RY*<1Sp8c$vnF?1N6UZ_K5k z6g$j8KFq5-$r2r42=MQwC(Lyw)OFM6zu48$Eol`Bk2Qp~#z4z9T>D{KS>YeoUgoc& zn-V!2t-=ZNMj5Qyne9e5gT9=H>iz!D(N3x}xGn-o z1%`lLl1iaLwrjOsR*%8!?!(k9M5{JB)h;h8?Ot_m6w8W?Rmt(gR~knHX7x{NO?kC5JI` z=&n#2SaENESS@kDQZ!0e&5+9jLnL_b{b)e$5?oGhza)$v`Wy`v7{LJR12x0`gpiPa zDhjk05{G z-zS?0-dnH_E&JRZ+2fut&#jBYfib!F9@ZKXO=2~5VDnCsJ%l-PYGf84%?@Y*SBT~@ z@KGDRHOK1qV%x9VU%j;x_M8W5oXAc(WN^|r%>S&WLEjSD=*$IJ@gRa}@7N2uAo$k) zfTz>9aTy_ zhOIE{uF-Vg(Jm9xG~N1JN#jX-0RE%(qNbv#2Dv8$=&mptfZ71QtIpLM{I~dyR5P-SdxD{O9>d+!`qAnvq#G1Zif4;JLlalW@m-?ABDo0O3qonD6voMYc4q9 zpEt(CE-%lk$hI|(X$b?T*vu6`Yy4*w&M_)dAm>18WecGbR)Khn7ZsHXT9aMv?5Xy! 
zMeM%(O<-g&fg3MLl!$}Shef2RjnZ42Yn^q~8o}8Ud8Hx_5Wt*T+85$|FJ>?edgeyL3!4^Ry#mr#=L)Xuq&u)7cr-7XL6SI9@@qz zCwIkgL%3a?%u$vAw*M5*HqlR4Od3|qS<}T1oq?b1xE(0ot|uwSEAC>$V}1lSbmBm3 zHm3Z%m4Z}6(6&qz!1vn5*ntsnh`wVo0o)a|Nb`B<+9U6f8ERH8)1JJ{bIZY0`r=_u zn^YVQc+>Ryzte$Spn&EA%oZy1+o<4FvS~knzxEs|Y?*@Zc_%Z>GD!xNH|RumZ3Ci` z7>C?^NL*dVx|7(IfGtp6&M=t7oUKm4;wmbWZ7?V%pX*R3AN1FqkY75`2SB*sbD}cLb39zO(r;PMjoTc2>8iq?>CoLd2uASe^wel#*qu zPqPEL$ctmt22`K>SeeCtK&Wp4rc7XUVBy~C4(C3+15Ysu@cJAun7m5Bs&PmUFr``- zUx;OC(&q!Y0jBQicb+osg8yjWot1%-zIQ4qBZ;FgzB{RB)gFqLh30er^m+e_+k{pR zaT|yufa=%i?k;}EvSBO=poWq@8_S5Pp^{bcH3C*`i^BsqWiGqbAS7o;^;^O*Mzo)e z-34gUz_*)|-(JugV}LdR9Qz&AeA1Sais5(nd@#t}FcmHwB4(0@WWKce z(I9@tn`r;^hslze_;eg>V5_?z6oTT%t8$=F@%jVeaSWDiPZZTr4%9b?g3hDK0tmVX zf206mp)UM4f6-8&r=#Wh<8`(_$fz?WInN4NY}MpH*xZyZ-JqIcTbKQS_d%280lVSl z9e*Eb$R18qd5uuA=z@E{fwhG51=(JS7OmIsi@%RCkBo4vsaCSW1KIk%P-3|Ya|{;k zf9{*;IX*YyT-VG}P!#fQeMHbhw^hMv#oxo{#am}D5!4oVedv)uL>_ulOej5x&?foH z1^-vPORz?AWx+YC_PA`X6Q%y^=n{y4$C6h+s7@H@3X|Q8P3LC81}6O#J^RVj4`1w=S_Q7C|& zgHI-NuziDFf(sA;D)82?YvFHdKx8SZOaS|-GHnU)ml%C*0#Y|~&BeQP%L5ieQIYtA zr8KeduRXAff^(=FgH8n0v3lqCnHpTqEzV@bD*of`Af%l*;{O7q}M7 zHz2Rf(@9nv_>Gb)13%!RM`JnQXEH&aaIvRj zDV0oBEFdO3TU$OPOu~yYeQ zBtk>hj~06*+t7rxSX#*vB^2M|%k=yK&#%wC&b;P_xzByhT-%&;?(2HLVS(Bp<$Esy zi!p{SA~>8cek>8mw3vFHnfFs~^3hy_uS;==aDLI}B#OBMzd61IlvFi%e%0Kj#o8yF zy^&aLm*a4*ywGGuh;{%d1_KmxFXuFyy9qN|AVDqA9hA0VKw7tcR6J3@mT2dd@~rRq3^1hnq$j{{U+Cdmaaf61lx4X zycacb@EKS59L)JXiV26X5VP_4*fH`?EUfRk=3@;v(;&t4YhEuQd7(Bgre7+-Zm{;z za?yPGCR);htd4tFL+D~+t~LgSe%eK3`mHKcebEUGZhr#-RQ}iqtwZp;;hGd+sciro zuaM#r5!S`9&<7CLGK05ukGG=xJ^nnEc$zW|m|326R>LOQo^ppko}iyl^qX9_k%ncJ zV!R;Uv{oDwrP|6JG@KOS#EM(b0C=aKkeg?%y)!0go2_Nzx@l+-Ui`|!h{(8ICGL?+!~A0FQOlqA2A~V&_OnnJVD}vWa|`G{=n@`k z2#M^Dol3;%^lJJfywIVio*f&uc-oVdQG-=5dmQKgyVlq#8avBr5|%pyz=~WynY5C! zy@RZ7`CtE(zUkOwtoZf!y%4Go!<`R39K?@8c6tv`5jq@j0P?Pj2m zp*T2ySu~rw%Q-Jp> zDdC&XJTXssf+ILoP|aqWNcV$#^F2Q+-&%di-d~8xddFVNe|OX*k-IsDCY3N#+WOrD zPduV7EDGk0aw;0^`;2~HU$nS2X6TH4RzSE_RSWozuZZ)wA3YK9TNczVRhBu&9Ga2J zR9J^bgy4^+YCd@is@Fi$%B{8*GZELLF9hf00Z+K?h=%4RayQ{3XTzy&!DtU||E~)x z2yV;2QC3Pw_2zZs8HbdUqLWElP4f1~EyAuzv)nRuvL7>Avb;96%?N<+FkgzOx;%JS zfn&5rGLN8cqb*K>XKI1Uv?4Oy?6P%k*$Stc2RCyb9=cEvrr*I;jy(Vg-!2o!o-vmO zFwZR>Cj{!sv*d} ze1<@Xx~=4w}$BFvqYbCba?ioP0+WU)D4@7}VLOEy~{q?Kj4xF|SKOJB$TWNRKuaen#Qvd3Jd zU(Ml<*CCCGe9qnj709rBnGq)S&BrNSiUqtXdHPzvRdI(7LT!74KwTkUuwP z6jIubN$(Q)vNm7bD0I|T(H+eice8_dCP%ryoOr4>l_NfL>&dgm$W{Ep2jpXUyq3^X z*LCuIvLm52M1qx|AzM9TF~KKbxOPP?Qn@%xY|asl5}5I6x(EN`2P30aX{-$&&qcQK zzM-Boa}|Pw6G$gB2scLemv1k_Rl&hylb!yY?4HqqUcHsTsMnhMGBwc%gJx5D+>7&U z_9Lk2&c=zNDJz`a^`%$&n8er9?cY{84X=n-r=8>i5hW}G;07RkEfbk;w1_|q@U4qU z=YKm7gsgcmX+xO6_ef5!*O$Bj@9=I5g@bh))Sh?n00>7rO%>GDoPmPv+fEAy(H)gh zrfmqeTRRPD;>G^=4$w$&oJG3JEDUH?ckU1>->Smsj70Kp+e79-0IAF94;C!{8J@H~ z`~|5&&cDNh!T0`eaOVG7>{E@6*xRIQHxmDVVg_W}J6|&+TWlj#SZqwVzfA?C5QU<0+nF=? 
z{FrtP#36AH8BRwP8>x4e`|0}z#ZuCJLFLlCaZ4!JNNpPtcw66~@`{?$4xh$I|9k49 zN=z!Wfc%-6Vbt0coFw+P?Lj>F($x{)Pc}zJzQ2GV^D%d^?oL7%*&-NG2PGp`*G+$d zHPkmboX1G(ypX5~e4Syu)w-c|2gq<5syE=pC3}j3*43MM5tmpVD8B?_q);y?b-$?W z_vwK)e4g0CHq#@TvwZ-M9gfJ`mAf{m6#LdHhzG}$%MayKj4}Oll5TNg4sI)4EzsRx z1P(Ivw#P~yg8g}9hz<1l>X6b_)>&`g`hrKDd3Va@rY)6fguq6X?(U_B@H$iZ%#MHN^1uJz<|QEw(pQ4!9+C1s~+v)b9};TZncd`U?4c!Y{lGO<~-QpQTY{zZ-kQ z$6qg(vR?#`Sz7Rk^OPMWq~IjtEhF#l$^v1BS zYhG|7bA^I6nfj66GGmeD=5mdean*?jlvwAEG@T33o^3>btvw-D` zog!4;dU_;?j`2~SBqN-yAd!1_WowUW$qKLAM`oi59!d0QQ zecL|8XQdTY8ug0;5oN2F1y8^j5wXQ%5lLntAEfhcpli(HdRz+u3xj4+&uz~zw2Yh# z!%LaYL#dG@S)Ht@q3Y*Jk|rl5mo$fu@LyR@9(C5gw_fcvwffip0uf$Oc@pe8wo~FX z?l+D@0+tzG`_>*RJ{7nU)S61eezE>B`Jbzii%Lv%ScNFzc+e~hZ&Cz{mGXxb+J=R= zYkId>_nm1L>l}uls)?a0d&viA@YR6lZI1RRt6nx=xqm_uG)~dPbQ-Ujt$~BG9T7wu zF<06i(ptCvs>gb zivPXu7go-(a8+SF4Z2p9{HZf~t!w*as*bVvoeHMoA!IZca&>yqpDGE7qXme#7kJOtIg z9xmd%`~QTz2Bq?nOk2i7Rl*;jmu)hBJlCa=`A)fm6hb-Uiu zjfn<1l6@M-D=_|e()M|gXF>IE$k71v1WZ2M$ z?B|vTMRNRXzvvaS%WV27k;c~igDkc}UT?kjO6rxgP(KR`S&yjst5;6#(B*f@g&m4( za}=C-uI<<5?|pfC>>2Wt#njIapNKa zGnP7>zR$;riBEi@sa4K#sDAb9JlS^avd9;N!fzt8n)5x@-HvT~EsuvV#2G1d?CjZb zM39|B`oEHcnk=#Mlf?0FM7JMSrms#1PXzQP>*pMtH7Y2D{b0lhSHMYUG?iAh8B@x0 zOn&}XQK_?Hj3$5O0n?0l|N2q`)j)uh%Hi)hEQc)N`6T4C-%;ou7sHW|Pzd&sMr9D2 zGttOK`OojzY$Ykms{@6Z`S;oWmG`zYd*Sa4puMI2gXzCHn)_|u7gXZpplPIj>935i z@e~?uluU5zO^~snDQ?cR89x8?=`kwq_F5g!Xe#U`9?Sx7_Cx}p)iu>GJxgTG6*vF4 z-7U(uLgHbBFT;ZHkbp4*sgxKoDRa-INj1+?0C4`KEhy~%25eY59t!AACeiOUz~ad)rkf4T~|>y(NeGBfVvQ}4{dbtj@`$F-Yx ztjL^zOsxN1qbJR}C#rYeSk`pnWB}o3v)b_R6C+vU34wW0R^5zvT~J#QyJ_$yXT2ns z&z-!VW3X;kxs=_Z#NYUdu)zsl&ZusTPgAjRjA*5X8#p|f zan78kSe2xST{0pW)z zxb5W~R8w=dFvac5U8*Tsq diff --git a/static/img/header/icon-rancher-desktop.png b/static/img/header/icon-rancher-desktop.png deleted file mode 100644 index 2a204e899c94615e003c06290d8e708e6af040ea..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 8307 zcmeHLcT`i^*1rK1L{Y4O6eFSt8WKV%i9k>g2~|T!1wwLxP?7+F1dyVrC`#{09Vt>B zsVX8m3IZ~KB1n-URRKYzNq-kCw_6BWP(50RR>TeYAPjLp^f4H(Y|gs=SQMm_!l_&TysMQk_6J(~|+hK{nME0N5SQ zC!IGxf%7a)NO^EQ%wdR>d(_f??ccI4(xY9ccdr3VYIt}_Iuv$Hs!%@P{^Pr!=d5KzSe+SRs)lV77YL~L25C-D$>N`TYmUWS ziaYk`a8v*3lX{)g8y7aP?9GJq*Uhw7WX!n540Rq+n0TTgxi7%m_9~1mC@`+hnBcCw za>lOOw%T=@RZXP|A_{4L(fl`G?;>d^x=U1b^^*j3_3r^eVI=y5t7tt@6TIA0s-0oP zg*ez3mKGGS?$p*3`Ny_OU$jUw=8n`^NhPR9^z|LQyGvxzJP&tQX>_Dln&$!@ULd0% zBtz_D5z>xdXNk;y?4Eu2#)tdMD43Zb+V7uZt|`4$0Nrl9_e&7}Jaxv#$R<-&GKjzq zx$XVsi?2YxdAGY?2yMR|`jI4mBf9I?D~eaL6cI_2sxB9I?!WXUkzJpG@w(vqSIRCI%?=jdc~hNdf9t8?(^JP=4)&zDn7@5cWsWd4pG;o7^fs}yu-@H5 zUCVl_4*y;?&7J$BNP9KQHG{71NP4ES7!YD55vyCUO%S{5cV_}A_SLBBe zx$9cEd0C+KRW`M|G@o|_V?J=lp!rWJ)8!e?7r$AK|@7+b$%R$ zPNq_DYfZ8&C=1$vSOgY>#v(8zk_|!$OOZp^$SEj-ib@zz7OV6V6oKZ(B+P)f&kqnyM z+Q2H|xT7ZtD&lAv)E_-3ok&bu$UsG0mr8SI|1n@hbp{QYq*XR$6|hROC^_g0gO-(( zL;s;<0=l|EmAHy2i;|JUtc|Ru1qTHK5ldRFQwU(KABqL1?h2BabXOxf-AP4!)g}0< z<&R-KXgN_xOp*qP2|}PKv>Z+rg+t33$tvJvu{bCZC`BCVCw)4FYU}xbX|FCGxboMO zYf;^x`90UVzOE=k(B-T2Rdk}REhRX7ZBgJz@+t?~ekJJ#x*Oe=$s)OeN9`aU zA+DhET;mFUa4k_%fA)oC53XhbA`BsmLi{^n$Zr87SNDwX5i2AAjT7ZHg`YMV$nI+& zw0S{$A@av&_>D8j-~Z+7TQ2^WQ^4W>O!Bw*{fDl9==xg>{4M2ws_P%R{uTp&OZlJb z`mfQ&`^R+(q(P@37IaxEB>EqME?VntbWUghE31F06_tD8+;W@gcu29tBB^vS7ZBt?Ju=H2ShobjLE2?K<>#;*DiM+f}8TmXzz6sMFgUU$Ot&^7;rD-kog%9!77zYfe5A2Cpyx_dP$>>nT z3B%8KUd6s?Jp;?W9hVZ-m{~W`rUm@|Cj3p5QEX;+nb!y>nsLI4ST0*jDBYx{!w0>33mt-+uHjDi9ZfW<=+Uj~)DufZY|0p0EO~`w&(y3-_g!!#x_2cH?3i3{e<(El zdQ!^#oPp@_DwK}#c!=UUj^f;BFT0cunI@JES*w2OOEn?5S_* zx#7v?AX>dh>X5aXP2^TPVV3wd+^_a`MV6?ST8^g5{t1+!|K?MlL-G>K#{50oCVIxb 
zB6~j&?(6PIy@?^#9hqf~M4E2a@~K)h8cdUrjN7Mn!b>5;Yw}Ri3klKp;+DF_a8E>i+B~gNsPUYSb0w})yhkpxTS?x#^ ztzj2v`*kFuSr3w=gJXb!f3b4VJ-!`2ac1H4PIqboR05d&duCXyzM?<<+~O~9&mIbM7~l2@=!MreT<%^Ts8To6sps+!Fr5A% z6z0nb>xEBC5~Anx-H6l5u>lphy+BeyINJ3b;=PKxG9Vaek&|5Zis*Q^0SojSf=3a- z9UPH>^_kt~Cv|qhVAkgxuq823k-8lIE>iJUI_PEzKrkzwgX7UfPV^HIAWloos~+a> zGBUmX7;r#C4dCYoxVQiqOic}d-v33(zw|{{kk!-_^=v%;rrk?AtFz%wUN_F>_IU+) zKTU&7gVBVk&a)Nn5}h6)8)_&@69et{27R)O4PE;=1FE{5Qj%_&3{Dq?)Lrmwp8P$7 zUZ9(PZ>ieU@9y&3l1^LvLtk$Xg|P;2l`t!>)~K<{iRHR2&k?z;ygVN!+PcRRCp*l| zA2Lm!o`_VwzkpBZQSA2aNc345Q?XjQoi?$|?*C=FleL?lZqbqsP9 zB+Y3O&V!>9OBq$(%gb?%=gO3OE(99!^C#52xtRra4ceI+(1p!0bg#yx122V=R|c@g z5%nu$vq8g4{QT8m!;9v^<cUZusH zWaWc44Eu4y^M)Kq9?5y2$pL!{KA^Rpl6pSRT6&7T`8p^dR&c?{s)W(a3{X>JzbF@| zf4<~*w=E}t(_-$0T<+NN(SU{_Y}jVg{g6%W>40*=b);-7e`eFEt`bO;6cZ_QPEpRn6}*1D36{2xX>9Cq#-)JGXmgq` zvdRw`nPG^-i%X=-EeYgpBD5m~N@XJj?Im0nO9!!ALL>r@ue+!)U1}=o$Rpx1VTo$w z=ep>Do_$B13&op_WvPeU74CK)=}S3M`WlY&kJV>hagR+wi0KCPhrYjuCk%FAP2k1n ztvsh)E>v+U#che={#Rq54Bpfw} z4HSZ<4KCg%H(}0)?`~&d9pg&%yEAmqdnc=-wubzWY{bSem=Cce}aC9!zi` z-YawI)M|Zq#B5w_x<1s%Ff&%O_p+;~KGH~4%|*JeiDf>$OqS8wBCzIFSy-$QH|%A& z;$}S-uBPU^jO)8i8kc9qx_7cV1oejaUU%Kl{Uk&S`S77(=- z1XFl!+2)kRr4(8z%;FJww9KGui&8~M!bTpw(KspInnj@sgE&?0H~5NL$al+b;F#e= zipNP93|hOXZXD>2gQX%(3Kr9{`TIEwLAFyJZ9zl8;X)}U+HPZV-LvA5jWw;|+1`?) zt2LJxv%BLtLC!04%IR_b{P<$bxl=QlQU%kbo)_!ai$(@qHEZ=V8^6}5J4d!~T%W$* z6mH<$@iv6BI5>qSwkxc*kHAb#Vs8`m4-tn0OcS++2c#O8!>NOa~=JOvuaY zn#vY?s&MIq^4w#)Qj7Dt3I^$%`LvlA&B=m?@*$;{jprXsRtFk!*_#jh&2b}^1Y2Li z?tS<)J)@Qi@-5q|>-a4pM8ixh=V8V3WV7d$*I5nntX!TZS6eF|)8}}Pl5=XU^d71K zmqo*6(VK`BmEtPl4R+9&d67Fi{OmS+k5x*QbNNj6IByT*JEG~ifJ-g~E3wehU-M(;Y8Laq)cy z3R{L*m3Lfg3V1O$wsd{|*c941W})fKQt0;cW+tiZw8xd@S#)~z7LBs7RX8JWM_`eEwk*o5W2ta^iGF+7u+QA% zLa)(}wKERATAt%`;l~BKrcS<(+H7J?)6>fjrmv}oN@kA@Jql;cPtWo#^~Agl zne%l_Dt7TJ^i&cOJY2a{vZC7Pz@?O!bN=y5!l!q)cGe_6oT+T=_FdRMN|xu)eRcW6 z5rbr(sba~Hh1~MG%(pL?eUbSk`dwdyS=SG624tLV2-=`q@Or7MBGk_MSEt=#AKHYN zta<&9$Fv-bqnBSd)`{*pr&B`ftoh|$vv*>bfL26amqMX^_f+!2Lgke8TtlRA^TGW| zA|WAG=f`qSX^*pQ)X^xjkjk0lG`WY2l;{m%WoK4W1}{2J2-Q8Z7Fv998FNOvlXpN( z*0VPr;aw4!Zer}&QlBi+*_kAyayGub)7^~bI9g-&aHaY8%bT||ss+XmxrL}s1>QkC zA-ChuF?vsKdU`#tu?%N3rcDBNg{gia*E82tBx{h!#H`++>x>o|VEtzrFtrJp8-nm2t&rTsc-F=V`8u%svi&13I^ z@4maQ^MvqvSy-1QN)~E)vOg#U0Jxa{PI|si{TE5zR}DN;;(Hz8B+Cs{Q?L00s2Bb_ zphMFGh1W#m#yC!(p9tz9f=9Nlov1wT##oS}2>?o_Brhl9ly@6+l!}coE}?umFTvLg z9pL1Nnqqkw(B&dsnIZ;=q;nfqGL|-Gp@lG8fXy$hKlk1Hb$qp{ojt>&WzMpTaL?p; z1j}no{BX!})9OtiYQ%AfQZ(HMkIrv>;gj+`95gDnNVd&RuDB?v=%W(^3w6fIz^Kf6A~v)K~c zK6)Du-9?Cb$AGpI)A476*%{DgXUMl&eIvcdoef;LlOjMg&6B2SOuv$Z^6m z&JA3O9UsTT2}CPbQ|;yUR{*X%I2-BN-h8&ja<0cH_JC*W7=UXnl4=2OPsQFA3G?b1 zN&oavf^5sv7qC)}sh*r5S~IVo4I9`hx8D8;bXP%O+t}BUkni}F?5OavBZ=#p zOfC~Wl9Qk-Oz`lYmTDil>5Z?m21Jw1xa~Lg=x5IpE3My5Ead9F8yfLB?KaYJEdm{u zPFe;KS?L4!>+v&ZvuB-?Y9w!Mp3@P$dlQ-66*nN!$(g!eIhJ@f!mB4g@H4?7a1lIw zgnnGxT~nJ@s0SqYcDWd8yLY5j_kYY&n_;%}Vx{I5B(yqIBBAV+49#U~&oZ5X)!Z_vtw)^rrC2@rV=E+1Kspkz9a>ni&$W3N?BV`_4 zWnM*TW%!KNl$ Date: Fri, 3 Nov 2023 14:49:51 -0400 Subject: [PATCH 18/65] canonical links for single-user-nodes and user-settings --- .../single-node-rancher-in-docker/advanced-options.md | 4 ++++ .../single-node-rancher-in-docker/http-proxy-configuration.md | 4 ++++ docs/reference-guides/user-settings/api-keys.md | 4 ++++ .../user-settings/manage-cloud-credentials.md | 4 ++++ docs/reference-guides/user-settings/manage-node-templates.md | 4 ++++ docs/reference-guides/user-settings/user-preferences.md | 4 ++++ .../single-node-rancher-in-docker/advanced-options.md | 4 ++++ 
 .../single-node-rancher-in-docker/http-proxy-configuration.md | 4 ++++
 .../reference-guides/user-settings/api-keys.md                | 4 ++++
 .../user-settings/manage-cloud-credentials.md                 | 4 ++++
 .../reference-guides/user-settings/manage-node-templates.md   | 4 ++++
 .../reference-guides/user-settings/user-preferences.md        | 4 ++++
 .../single-node-rancher-in-docker/advanced-options.md         | 4 ++++
 .../single-node-rancher-in-docker/http-proxy-configuration.md | 4 ++++
 .../version-2.5/reference-guides/user-settings/api-keys.md    | 4 ++++
 .../user-settings/manage-cloud-credentials.md                 | 4 ++++
 .../reference-guides/user-settings/manage-node-templates.md   | 4 ++++
 .../reference-guides/user-settings/user-preferences.md        | 4 ++++
 .../single-node-rancher-in-docker/advanced-options.md         | 4 ++++
 .../single-node-rancher-in-docker/http-proxy-configuration.md | 4 ++++
 .../version-2.6/reference-guides/user-settings/api-keys.md    | 4 ++++
 .../user-settings/manage-cloud-credentials.md                 | 4 ++++
 .../reference-guides/user-settings/manage-node-templates.md   | 4 ++++
 .../reference-guides/user-settings/user-preferences.md        | 4 ++++
 .../single-node-rancher-in-docker/advanced-options.md         | 4 ++++
 .../single-node-rancher-in-docker/http-proxy-configuration.md | 4 ++++
 .../version-2.7/reference-guides/user-settings/api-keys.md    | 4 ++++
 .../user-settings/manage-cloud-credentials.md                 | 4 ++++
 .../reference-guides/user-settings/manage-node-templates.md   | 4 ++++
 .../reference-guides/user-settings/user-preferences.md        | 4 ++++
 .../single-node-rancher-in-docker/advanced-options.md         | 4 ++++
 .../single-node-rancher-in-docker/http-proxy-configuration.md | 4 ++++
 .../version-2.8/reference-guides/user-settings/api-keys.md    | 4 ++++
 .../user-settings/manage-cloud-credentials.md                 | 4 ++++
 .../reference-guides/user-settings/manage-node-templates.md   | 4 ++++
 .../reference-guides/user-settings/user-preferences.md        | 4 ++++
 36 files changed, 144 insertions(+)

diff --git a/docs/reference-guides/single-node-rancher-in-docker/advanced-options.md b/docs/reference-guides/single-node-rancher-in-docker/advanced-options.md
index d020910622bb..081d79e4b04b 100644
--- a/docs/reference-guides/single-node-rancher-in-docker/advanced-options.md
+++ b/docs/reference-guides/single-node-rancher-in-docker/advanced-options.md
@@ -2,6 +2,10 @@
 title: Advanced Options for Docker Installs
 ---

+<head>
+  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/single-node-rancher-in-docker/advanced-options"/>
+</head>
+
 ### Custom CA Certificate

 If you want to configure Rancher to use a CA root certificate to be used when validating services, you would start the Rancher container sharing the directory that contains the CA root certificate.
diff --git a/docs/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md b/docs/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md
index 22210ccd8ff0..3bdfbf449d89 100644
--- a/docs/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md
+++ b/docs/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md
@@ -2,6 +2,10 @@
 title: HTTP Proxy Configuration
 ---

+<head>
+  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/single-node-rancher-in-docker/http-proxy-configuration"/>
+</head>
+
 If you operate Rancher behind a proxy and you want to access services through the proxy (such as retrieving catalogs), you must provide Rancher information about your proxy. As Rancher is written in Go, it uses the common proxy environment variables as shown below. Make sure `NO_PROXY` contains the network addresses, network address ranges and domains that should be excluded from using the proxy.
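In the page above, "as shown below" refers to the proxy environment variables passed when the Rancher container starts. The following is a minimal sketch only; the proxy address and the `NO_PROXY` ranges are illustrative placeholders, not values taken from this patch:

```bash
# Minimal sketch of a proxied single-node install. The proxy address and
# NO_PROXY ranges are placeholders to adapt, not values from this patch.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -e HTTP_PROXY="http://your_proxy_address:port" \
  -e HTTPS_PROXY="http://your_proxy_address:port" \
  -e NO_PROXY="localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16" \
  --privileged \
  rancher/rancher:latest
```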
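The custom CA certificate note in `advanced-options.md` works the same way: the directory holding the certificate is bind-mounted into the container. A sketch under assumed paths; `/host/certs` and the `SSL_CERT_DIR` hint are assumptions for illustration, not text from this patch:

```bash
# Assumed layout: the CA root certificate is placed in /host/certs on the
# host; SSL_CERT_DIR points Rancher at the mounted copy in the container.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /host/certs:/container/certs \
  -e SSL_CERT_DIR="/container/certs" \
  --privileged \
  rancher/rancher:latest
```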
diff --git a/docs/reference-guides/user-settings/api-keys.md b/docs/reference-guides/user-settings/api-keys.md
index ade1aee3d9bc..95a26a81a458 100644
--- a/docs/reference-guides/user-settings/api-keys.md
+++ b/docs/reference-guides/user-settings/api-keys.md
@@ -2,6 +2,10 @@
 title: API Keys
 ---

+<head>
+  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/user-settings/api-keys"/>
+</head>
+
 ## API Keys and User Authentication

 If you want to access your Rancher clusters, projects, or other objects using external applications, you can do so using the Rancher API. However, before your application can access the API, you must provide the app with a key used to authenticate with Rancher. You can obtain a key using the Rancher UI.
diff --git a/docs/reference-guides/user-settings/manage-cloud-credentials.md b/docs/reference-guides/user-settings/manage-cloud-credentials.md
index e387346c02c4..fa6e4868820b 100644
--- a/docs/reference-guides/user-settings/manage-cloud-credentials.md
+++ b/docs/reference-guides/user-settings/manage-cloud-credentials.md
@@ -2,6 +2,10 @@
 title: Managing Cloud Credentials
 ---

+<head>
+  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/user-settings/manage-cloud-credentials"/>
+</head>
+
 When you create a cluster [hosted by an infrastructure provider](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md), [node templates](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md#node-templates) are used to provision the cluster nodes. These templates use Docker Machine configuration options to define an operating system image and settings/parameters for the node.

 Node templates can use cloud credentials to access the credential information required to provision nodes in the infrastructure providers. The same cloud credential can be used by multiple node templates. By using a cloud credential, you do not have to re-enter access keys for the same cloud provider. Cloud credentials are stored as Kubernetes secrets.
diff --git a/docs/reference-guides/user-settings/manage-node-templates.md b/docs/reference-guides/user-settings/manage-node-templates.md
index 22c29e0ae54d..fab13f80ffc2 100644
--- a/docs/reference-guides/user-settings/manage-node-templates.md
+++ b/docs/reference-guides/user-settings/manage-node-templates.md
@@ -2,6 +2,10 @@
 title: Managing Node Templates
 ---

+<head>
+  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/user-settings/manage-node-templates"/>
+</head>
+
 When you provision a cluster [hosted by an infrastructure provider](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md), [node templates](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md#node-templates) are used to provision the cluster nodes. These templates use Docker Machine configuration options to define an operating system image and settings/parameters for the node. You can create node templates in two contexts:

 - While [provisioning a node pool cluster](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md).
diff --git a/docs/reference-guides/user-settings/user-preferences.md b/docs/reference-guides/user-settings/user-preferences.md
index 3a36abd6e18f..b784e3bb168a 100644
--- a/docs/reference-guides/user-settings/user-preferences.md
+++ b/docs/reference-guides/user-settings/user-preferences.md
@@ -2,6 +2,10 @@
 title: User Preferences
 ---

+<head>
+  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/user-settings/user-preferences"/>
+</head>
+
 You can set preferences to personalize your Rancher experience. To change preference settings:

 1. Click on your user avatar in the upper right corner.
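For the `api-keys.md` page above: a Rancher API key pairs an access key with a secret key, and an external application presents the pair as HTTP basic-auth credentials when calling the Rancher API. A hedged sketch, where the hostname and token values are invented placeholders:

```bash
# Placeholder host and key: substitute your Rancher URL and an API key
# created in the Rancher UI. Access key and secret key form the auth pair.
curl -s -u "token-abc12:examplesecret" \
  "https://rancher.example.com/v3/clusters"
```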
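Likewise, since `manage-cloud-credentials.md` notes that cloud credentials are stored as Kubernetes secrets, they can be listed on the cluster where Rancher itself runs. The namespace below is an assumption based on Rancher's usual default, not something this patch touches:

```bash
# Assumed namespace: Rancher conventionally keeps cloud-credential
# secrets in cattle-global-data on the local (Rancher) cluster.
kubectl get secrets -n cattle-global-data
```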
diff --git a/versioned_docs/version-2.0-2.4/reference-guides/single-node-rancher-in-docker/advanced-options.md b/versioned_docs/version-2.0-2.4/reference-guides/single-node-rancher-in-docker/advanced-options.md index 3e516a2fc853..b0f0994ffc5c 100644 --- a/versioned_docs/version-2.0-2.4/reference-guides/single-node-rancher-in-docker/advanced-options.md +++ b/versioned_docs/version-2.0-2.4/reference-guides/single-node-rancher-in-docker/advanced-options.md @@ -2,6 +2,10 @@ title: Advanced Options for Docker Installs --- + + + + When installing Rancher, there are several [advanced options](../../pages-for-subheaders/resources.md) that can be enabled. ### Custom CA Certificate diff --git a/versioned_docs/version-2.0-2.4/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md b/versioned_docs/version-2.0-2.4/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md index cadf7494611b..d90445818461 100644 --- a/versioned_docs/version-2.0-2.4/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md +++ b/versioned_docs/version-2.0-2.4/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md @@ -2,6 +2,10 @@ title: HTTP Proxy Configuration --- + + + + If you operate Rancher behind a proxy and you want to access services through the proxy (such as retrieving catalogs), you must provide Rancher information about your proxy. As Rancher is written in Go, it uses the common proxy environment variables as shown below. Make sure `NO_PROXY` contains the network addresses, network address ranges and domains that should be excluded from using the proxy. diff --git a/versioned_docs/version-2.0-2.4/reference-guides/user-settings/api-keys.md b/versioned_docs/version-2.0-2.4/reference-guides/user-settings/api-keys.md index db7d4882d9bb..526fbe9cc21a 100644 --- a/versioned_docs/version-2.0-2.4/reference-guides/user-settings/api-keys.md +++ b/versioned_docs/version-2.0-2.4/reference-guides/user-settings/api-keys.md @@ -2,6 +2,10 @@ title: API Keys --- + + + + ## API Keys and User Authentication If you want to access your Rancher clusters, projects, or other objects using external applications, you can do so using the Rancher API. However, before your application can access the API, you must provide the app with a key used to authenticate with Rancher. You can obtain a key using the Rancher UI. diff --git a/versioned_docs/version-2.0-2.4/reference-guides/user-settings/manage-cloud-credentials.md b/versioned_docs/version-2.0-2.4/reference-guides/user-settings/manage-cloud-credentials.md index eef384143261..410414bf57a2 100644 --- a/versioned_docs/version-2.0-2.4/reference-guides/user-settings/manage-cloud-credentials.md +++ b/versioned_docs/version-2.0-2.4/reference-guides/user-settings/manage-cloud-credentials.md @@ -2,6 +2,10 @@ title: Managing Cloud Credentials --- + + + + _Available as of v2.2.0_ When you create a cluster [hosted by an infrastructure provider](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md), [node templates](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md#node-templates) are used to provision the cluster nodes. These templates use Docker Machine configuration options to define an operating system image and settings/parameters for the node. 
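The api-keys.md hunks above state that an external application must present a Rancher-issued key before it can call the API. A hedged sketch of what that looks like in practice; the server URL and the token below are hypothetical placeholders, not values from this patch:

```bash
# Rancher API keys take the form token-<id>:<secret>; they can be sent as a bearer token.
curl -s -H "Authorization: Bearer token-abc12:examplesecret" \
  "https://rancher.example.com/v3/clusters"
```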
diff --git a/versioned_docs/version-2.0-2.4/reference-guides/user-settings/manage-node-templates.md b/versioned_docs/version-2.0-2.4/reference-guides/user-settings/manage-node-templates.md index d9da39eea9f3..956935ca47b8 100644 --- a/versioned_docs/version-2.0-2.4/reference-guides/user-settings/manage-node-templates.md +++ b/versioned_docs/version-2.0-2.4/reference-guides/user-settings/manage-node-templates.md @@ -2,6 +2,10 @@ title: Managing Node Templates --- + + + + When you provision a cluster [hosted by an infrastructure provider](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md), [node templates](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md#node-templates) are used to provision the cluster nodes. These templates use Docker Machine configuration options to define an operating system image and settings/parameters for the node. You can create node templates in two contexts: - While [provisioning a node pool cluster](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md). diff --git a/versioned_docs/version-2.0-2.4/reference-guides/user-settings/user-preferences.md b/versioned_docs/version-2.0-2.4/reference-guides/user-settings/user-preferences.md index 9521d13557ce..3c86900b745b 100644 --- a/versioned_docs/version-2.0-2.4/reference-guides/user-settings/user-preferences.md +++ b/versioned_docs/version-2.0-2.4/reference-guides/user-settings/user-preferences.md @@ -2,6 +2,10 @@ title: User Preferences --- + + + + Each user can choose preferences to personalize their Rancher experience. To change preference settings, open the **User Settings** menu and then select **Preferences**. ## Theme diff --git a/versioned_docs/version-2.5/reference-guides/single-node-rancher-in-docker/advanced-options.md b/versioned_docs/version-2.5/reference-guides/single-node-rancher-in-docker/advanced-options.md index c06f5017abd4..c21344f39b23 100644 --- a/versioned_docs/version-2.5/reference-guides/single-node-rancher-in-docker/advanced-options.md +++ b/versioned_docs/version-2.5/reference-guides/single-node-rancher-in-docker/advanced-options.md @@ -2,6 +2,10 @@ title: Advanced Options for Docker Installs --- + + + + When installing Rancher, there are several [advanced options](../../pages-for-subheaders/resources.md) that can be enabled: - [Custom CA Certificate](#custom-ca-certificate) diff --git a/versioned_docs/version-2.5/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md b/versioned_docs/version-2.5/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md index 8ce07d55c983..2d4429894488 100644 --- a/versioned_docs/version-2.5/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md +++ b/versioned_docs/version-2.5/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md @@ -2,6 +2,10 @@ title: HTTP Proxy Configuration --- + + + + If you operate Rancher behind a proxy and you want to access services through the proxy (such as retrieving catalogs), you must provide Rancher information about your proxy. As Rancher is written in Go, it uses the common proxy environment variables as shown below. Make sure `NO_PROXY` contains the network addresses, network address ranges and domains that should be excluded from using the proxy. 
diff --git a/versioned_docs/version-2.5/reference-guides/user-settings/api-keys.md b/versioned_docs/version-2.5/reference-guides/user-settings/api-keys.md index 08e734d5f206..1987b880b2f1 100644 --- a/versioned_docs/version-2.5/reference-guides/user-settings/api-keys.md +++ b/versioned_docs/version-2.5/reference-guides/user-settings/api-keys.md @@ -2,6 +2,10 @@ title: API Keys --- + + + + ## API Keys and User Authentication If you want to access your Rancher clusters, projects, or other objects using external applications, you can do so using the Rancher API. However, before your application can access the API, you must provide the app with a key used to authenticate with Rancher. You can obtain a key using the Rancher UI. diff --git a/versioned_docs/version-2.5/reference-guides/user-settings/manage-cloud-credentials.md b/versioned_docs/version-2.5/reference-guides/user-settings/manage-cloud-credentials.md index 0932d726a766..f7aa44f1042d 100644 --- a/versioned_docs/version-2.5/reference-guides/user-settings/manage-cloud-credentials.md +++ b/versioned_docs/version-2.5/reference-guides/user-settings/manage-cloud-credentials.md @@ -2,6 +2,10 @@ title: Managing Cloud Credentials --- + + + + When you create a cluster [hosted by an infrastructure provider](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md), [node templates](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md#node-templates) are used to provision the cluster nodes. These templates use Docker Machine configuration options to define an operating system image and settings/parameters for the node. Node templates can use cloud credentials to access the credential information required to provision nodes in the infrastructure providers. The same cloud credential can be used by multiple node templates. By using a cloud credential, you do not have to re-enter access keys for the same cloud provider. Cloud credentials are stored as Kubernetes secrets. diff --git a/versioned_docs/version-2.5/reference-guides/user-settings/manage-node-templates.md b/versioned_docs/version-2.5/reference-guides/user-settings/manage-node-templates.md index be9e6b321337..3e9251ea42bb 100644 --- a/versioned_docs/version-2.5/reference-guides/user-settings/manage-node-templates.md +++ b/versioned_docs/version-2.5/reference-guides/user-settings/manage-node-templates.md @@ -2,6 +2,10 @@ title: Managing Node Templates --- + + + + When you provision a cluster [hosted by an infrastructure provider](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md), [node templates](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md#node-templates) are used to provision the cluster nodes. These templates use Docker Machine configuration options to define an operating system image and settings/parameters for the node. You can create node templates in two contexts: - While [provisioning a node pool cluster](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md). diff --git a/versioned_docs/version-2.5/reference-guides/user-settings/user-preferences.md b/versioned_docs/version-2.5/reference-guides/user-settings/user-preferences.md index b61f8f5d91cd..4af4b26da518 100644 --- a/versioned_docs/version-2.5/reference-guides/user-settings/user-preferences.md +++ b/versioned_docs/version-2.5/reference-guides/user-settings/user-preferences.md @@ -2,6 +2,10 @@ title: User Preferences --- + + + + Each user can choose preferences to personalize their Rancher experience. 
To change preference settings, open the **User Settings** menu and then select **Preferences**. The preferences available will differ depending on whether the **User Settings** menu was accessed while on the Cluster Manager UI or the Cluster Explorer UI. diff --git a/versioned_docs/version-2.6/reference-guides/single-node-rancher-in-docker/advanced-options.md b/versioned_docs/version-2.6/reference-guides/single-node-rancher-in-docker/advanced-options.md index d020910622bb..081d79e4b04b 100644 --- a/versioned_docs/version-2.6/reference-guides/single-node-rancher-in-docker/advanced-options.md +++ b/versioned_docs/version-2.6/reference-guides/single-node-rancher-in-docker/advanced-options.md @@ -2,6 +2,10 @@ title: Advanced Options for Docker Installs --- + + + + ### Custom CA Certificate If you want to configure Rancher to use a CA root certificate to be used when validating services, you would start the Rancher container sharing the directory that contains the CA root certificate. diff --git a/versioned_docs/version-2.6/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md b/versioned_docs/version-2.6/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md index 7e5228757217..cba7f410591b 100644 --- a/versioned_docs/version-2.6/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md +++ b/versioned_docs/version-2.6/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md @@ -2,6 +2,10 @@ title: HTTP Proxy Configuration --- + + + + If you operate Rancher behind a proxy and you want to access services through the proxy (such as retrieving catalogs), you must provide Rancher information about your proxy. As Rancher is written in Go, it uses the common proxy environment variables as shown below. Make sure `NO_PROXY` contains the network addresses, network address ranges and domains that should be excluded from using the proxy. diff --git a/versioned_docs/version-2.6/reference-guides/user-settings/api-keys.md b/versioned_docs/version-2.6/reference-guides/user-settings/api-keys.md index ade1aee3d9bc..95a26a81a458 100644 --- a/versioned_docs/version-2.6/reference-guides/user-settings/api-keys.md +++ b/versioned_docs/version-2.6/reference-guides/user-settings/api-keys.md @@ -2,6 +2,10 @@ title: API Keys --- + + + + ## API Keys and User Authentication If you want to access your Rancher clusters, projects, or other objects using external applications, you can do so using the Rancher API. However, before your application can access the API, you must provide the app with a key used to authenticate with Rancher. You can obtain a key using the Rancher UI. diff --git a/versioned_docs/version-2.6/reference-guides/user-settings/manage-cloud-credentials.md b/versioned_docs/version-2.6/reference-guides/user-settings/manage-cloud-credentials.md index e387346c02c4..fa6e4868820b 100644 --- a/versioned_docs/version-2.6/reference-guides/user-settings/manage-cloud-credentials.md +++ b/versioned_docs/version-2.6/reference-guides/user-settings/manage-cloud-credentials.md @@ -2,6 +2,10 @@ title: Managing Cloud Credentials --- + + + + When you create a cluster [hosted by an infrastructure provider](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md), [node templates](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md#node-templates) are used to provision the cluster nodes. These templates use Docker Machine configuration options to define an operating system image and settings/parameters for the node. 
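The Custom CA Certificate sections touched in these hunks describe starting the Rancher container with the directory holding the CA root certificate shared into it. A minimal sketch of that bind mount; the host path is a placeholder, and the in-container path follows the convention Rancher's Docker install documentation uses for a private CA:

```bash
# Mount a private CA bundle so Rancher can validate services signed by it.
docker run -d --restart=unless-stopped -p 80:80 -p 443:443 \
  -v /opt/rancher/cacerts.pem:/etc/rancher/ssl/cacerts.pem \
  --privileged \
  rancher/rancher:latest
```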
Node templates can use cloud credentials to access the credential information required to provision nodes in the infrastructure providers. The same cloud credential can be used by multiple node templates. By using a cloud credential, you do not have to re-enter access keys for the same cloud provider. Cloud credentials are stored as Kubernetes secrets. diff --git a/versioned_docs/version-2.6/reference-guides/user-settings/manage-node-templates.md b/versioned_docs/version-2.6/reference-guides/user-settings/manage-node-templates.md index 22c29e0ae54d..fab13f80ffc2 100644 --- a/versioned_docs/version-2.6/reference-guides/user-settings/manage-node-templates.md +++ b/versioned_docs/version-2.6/reference-guides/user-settings/manage-node-templates.md @@ -2,6 +2,10 @@ title: Managing Node Templates --- + + + + When you provision a cluster [hosted by an infrastructure provider](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md), [node templates](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md#node-templates) are used to provision the cluster nodes. These templates use Docker Machine configuration options to define an operating system image and settings/parameters for the node. You can create node templates in two contexts: - While [provisioning a node pool cluster](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md). diff --git a/versioned_docs/version-2.6/reference-guides/user-settings/user-preferences.md b/versioned_docs/version-2.6/reference-guides/user-settings/user-preferences.md index 9521d13557ce..3c86900b745b 100644 --- a/versioned_docs/version-2.6/reference-guides/user-settings/user-preferences.md +++ b/versioned_docs/version-2.6/reference-guides/user-settings/user-preferences.md @@ -2,6 +2,10 @@ title: User Preferences --- + + + + Each user can choose preferences to personalize their Rancher experience. To change preference settings, open the **User Settings** menu and then select **Preferences**. ## Theme diff --git a/versioned_docs/version-2.7/reference-guides/single-node-rancher-in-docker/advanced-options.md b/versioned_docs/version-2.7/reference-guides/single-node-rancher-in-docker/advanced-options.md index d020910622bb..081d79e4b04b 100644 --- a/versioned_docs/version-2.7/reference-guides/single-node-rancher-in-docker/advanced-options.md +++ b/versioned_docs/version-2.7/reference-guides/single-node-rancher-in-docker/advanced-options.md @@ -2,6 +2,10 @@ title: Advanced Options for Docker Installs --- + + + + ### Custom CA Certificate If you want to configure Rancher to use a CA root certificate to be used when validating services, you would start the Rancher container sharing the directory that contains the CA root certificate. diff --git a/versioned_docs/version-2.7/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md b/versioned_docs/version-2.7/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md index 22210ccd8ff0..3bdfbf449d89 100644 --- a/versioned_docs/version-2.7/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md +++ b/versioned_docs/version-2.7/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md @@ -2,6 +2,10 @@ title: HTTP Proxy Configuration --- + + + + If you operate Rancher behind a proxy and you want to access services through the proxy (such as retrieving catalogs), you must provide Rancher information about your proxy. As Rancher is written in Go, it uses the common proxy environment variables as shown below. 
Make sure `NO_PROXY` contains the network addresses, network address ranges and domains that should be excluded from using the proxy. diff --git a/versioned_docs/version-2.7/reference-guides/user-settings/api-keys.md b/versioned_docs/version-2.7/reference-guides/user-settings/api-keys.md index ade1aee3d9bc..95a26a81a458 100644 --- a/versioned_docs/version-2.7/reference-guides/user-settings/api-keys.md +++ b/versioned_docs/version-2.7/reference-guides/user-settings/api-keys.md @@ -2,6 +2,10 @@ title: API Keys --- + + + + ## API Keys and User Authentication If you want to access your Rancher clusters, projects, or other objects using external applications, you can do so using the Rancher API. However, before your application can access the API, you must provide the app with a key used to authenticate with Rancher. You can obtain a key using the Rancher UI. diff --git a/versioned_docs/version-2.7/reference-guides/user-settings/manage-cloud-credentials.md b/versioned_docs/version-2.7/reference-guides/user-settings/manage-cloud-credentials.md index e387346c02c4..fa6e4868820b 100644 --- a/versioned_docs/version-2.7/reference-guides/user-settings/manage-cloud-credentials.md +++ b/versioned_docs/version-2.7/reference-guides/user-settings/manage-cloud-credentials.md @@ -2,6 +2,10 @@ title: Managing Cloud Credentials --- + + + + When you create a cluster [hosted by an infrastructure provider](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md), [node templates](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md#node-templates) are used to provision the cluster nodes. These templates use Docker Machine configuration options to define an operating system image and settings/parameters for the node. Node templates can use cloud credentials to access the credential information required to provision nodes in the infrastructure providers. The same cloud credential can be used by multiple node templates. By using a cloud credential, you do not have to re-enter access keys for the same cloud provider. Cloud credentials are stored as Kubernetes secrets. diff --git a/versioned_docs/version-2.7/reference-guides/user-settings/manage-node-templates.md b/versioned_docs/version-2.7/reference-guides/user-settings/manage-node-templates.md index 22c29e0ae54d..fab13f80ffc2 100644 --- a/versioned_docs/version-2.7/reference-guides/user-settings/manage-node-templates.md +++ b/versioned_docs/version-2.7/reference-guides/user-settings/manage-node-templates.md @@ -2,6 +2,10 @@ title: Managing Node Templates --- + + + + When you provision a cluster [hosted by an infrastructure provider](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md), [node templates](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md#node-templates) are used to provision the cluster nodes. These templates use Docker Machine configuration options to define an operating system image and settings/parameters for the node. You can create node templates in two contexts: - While [provisioning a node pool cluster](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md). 
diff --git a/versioned_docs/version-2.7/reference-guides/user-settings/user-preferences.md b/versioned_docs/version-2.7/reference-guides/user-settings/user-preferences.md index 3a36abd6e18f..b784e3bb168a 100644 --- a/versioned_docs/version-2.7/reference-guides/user-settings/user-preferences.md +++ b/versioned_docs/version-2.7/reference-guides/user-settings/user-preferences.md @@ -2,6 +2,10 @@ title: User Preferences --- + + + + You can set preferences to personalize your Rancher experience. To change preference settings: 1. Click on your user avatar in the upper right corner. diff --git a/versioned_docs/version-2.8/reference-guides/single-node-rancher-in-docker/advanced-options.md b/versioned_docs/version-2.8/reference-guides/single-node-rancher-in-docker/advanced-options.md index d020910622bb..081d79e4b04b 100644 --- a/versioned_docs/version-2.8/reference-guides/single-node-rancher-in-docker/advanced-options.md +++ b/versioned_docs/version-2.8/reference-guides/single-node-rancher-in-docker/advanced-options.md @@ -2,6 +2,10 @@ title: Advanced Options for Docker Installs --- + + + + ### Custom CA Certificate If you want to configure Rancher to use a CA root certificate to be used when validating services, you would start the Rancher container sharing the directory that contains the CA root certificate. diff --git a/versioned_docs/version-2.8/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md b/versioned_docs/version-2.8/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md index 22210ccd8ff0..3bdfbf449d89 100644 --- a/versioned_docs/version-2.8/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md +++ b/versioned_docs/version-2.8/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md @@ -2,6 +2,10 @@ title: HTTP Proxy Configuration --- + + + + If you operate Rancher behind a proxy and you want to access services through the proxy (such as retrieving catalogs), you must provide Rancher information about your proxy. As Rancher is written in Go, it uses the common proxy environment variables as shown below. Make sure `NO_PROXY` contains the network addresses, network address ranges and domains that should be excluded from using the proxy. diff --git a/versioned_docs/version-2.8/reference-guides/user-settings/api-keys.md b/versioned_docs/version-2.8/reference-guides/user-settings/api-keys.md index ade1aee3d9bc..95a26a81a458 100644 --- a/versioned_docs/version-2.8/reference-guides/user-settings/api-keys.md +++ b/versioned_docs/version-2.8/reference-guides/user-settings/api-keys.md @@ -2,6 +2,10 @@ title: API Keys --- + + + + ## API Keys and User Authentication If you want to access your Rancher clusters, projects, or other objects using external applications, you can do so using the Rancher API. However, before your application can access the API, you must provide the app with a key used to authenticate with Rancher. You can obtain a key using the Rancher UI. 
diff --git a/versioned_docs/version-2.8/reference-guides/user-settings/manage-cloud-credentials.md b/versioned_docs/version-2.8/reference-guides/user-settings/manage-cloud-credentials.md index e387346c02c4..fa6e4868820b 100644 --- a/versioned_docs/version-2.8/reference-guides/user-settings/manage-cloud-credentials.md +++ b/versioned_docs/version-2.8/reference-guides/user-settings/manage-cloud-credentials.md @@ -2,6 +2,10 @@ title: Managing Cloud Credentials --- + + + + When you create a cluster [hosted by an infrastructure provider](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md), [node templates](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md#node-templates) are used to provision the cluster nodes. These templates use Docker Machine configuration options to define an operating system image and settings/parameters for the node. Node templates can use cloud credentials to access the credential information required to provision nodes in the infrastructure providers. The same cloud credential can be used by multiple node templates. By using a cloud credential, you do not have to re-enter access keys for the same cloud provider. Cloud credentials are stored as Kubernetes secrets. diff --git a/versioned_docs/version-2.8/reference-guides/user-settings/manage-node-templates.md b/versioned_docs/version-2.8/reference-guides/user-settings/manage-node-templates.md index 22c29e0ae54d..fab13f80ffc2 100644 --- a/versioned_docs/version-2.8/reference-guides/user-settings/manage-node-templates.md +++ b/versioned_docs/version-2.8/reference-guides/user-settings/manage-node-templates.md @@ -2,6 +2,10 @@ title: Managing Node Templates --- + + + + When you provision a cluster [hosted by an infrastructure provider](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md), [node templates](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md#node-templates) are used to provision the cluster nodes. These templates use Docker Machine configuration options to define an operating system image and settings/parameters for the node. You can create node templates in two contexts: - While [provisioning a node pool cluster](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md). diff --git a/versioned_docs/version-2.8/reference-guides/user-settings/user-preferences.md b/versioned_docs/version-2.8/reference-guides/user-settings/user-preferences.md index 3a36abd6e18f..b784e3bb168a 100644 --- a/versioned_docs/version-2.8/reference-guides/user-settings/user-preferences.md +++ b/versioned_docs/version-2.8/reference-guides/user-settings/user-preferences.md @@ -2,6 +2,10 @@ title: User Preferences --- + + + + You can set preferences to personalize your Rancher experience. To change preference settings: 1. Click on your user avatar in the upper right corner. 
From 4efd181f29870dd905ba85a5e41df8991b5e015a Mon Sep 17 00:00:00 2001 From: Jonathan Crowther Date: Fri, 3 Nov 2023 15:20:37 -0400 Subject: [PATCH 19/65] Update token setting page (#906) * Update token setting page * Remove unnecessary step --- .../reference-guides/about-the-api/api-tokens.md | 12 ++---------- 1 file changed, 2 insertions(+), 10 deletions(-) diff --git a/versioned_docs/version-2.8/reference-guides/about-the-api/api-tokens.md b/versioned_docs/version-2.8/reference-guides/about-the-api/api-tokens.md index 5fe8a0eb5a42..ea8d8f402799 100644 --- a/versioned_docs/version-2.8/reference-guides/about-the-api/api-tokens.md +++ b/versioned_docs/version-2.8/reference-guides/about-the-api/api-tokens.md @@ -23,7 +23,6 @@ Here is the complete list of tokens that are generated with `ttl=0`: | Token | Description | | ----------------- | -------------------------------------------------------------------------------------- | -| `kubeconfig-*` | Kubeconfig token | | `kubectl-shell-*` | Access to `kubectl` shell in the browser | | `agent-*` | Token for agent deployment | | `compose-token-*` | Token for compose | @@ -34,7 +33,7 @@ Here is the complete list of tokens that are generated with `ttl=0`: ### Setting TTL on Kubeconfig Tokens -Admins can set a global time-to-live (TTL) on Kubeconfig tokens. Changing the default kubeconfig TTL can be done by navigating to global settings and setting [`kubeconfig-default-token-ttl-minutes`](#kubeconfig-default-token-ttl-minutes) to the desired duration in minutes. The default value of [`kubeconfig-default-token-ttl-minutes`](#kubeconfig-default-token-ttl-minutes) is 0, which means tokens never expire. +Admins can set a global time-to-live (TTL) on Kubeconfig tokens. Changing the default kubeconfig TTL can be done by navigating to global settings and setting [`kubeconfig-default-token-ttl-minutes`](#kubeconfig-default-token-ttl-minutes) to the desired duration in minutes. The default value of [`kubeconfig-default-token-ttl-minutes`](#kubeconfig-default-token-ttl-minutes) is 43200, which is 30 days. :::note @@ -44,9 +43,7 @@ This setting is used by all kubeconfig tokens except those created by the CLI to ### Disable Tokens in Generated Kubeconfigs -1. Set the `kubeconfig-generate-token` setting to `false`. This setting instructs Rancher to no longer automatically generate a token when a user clicks on download a kubeconfig file. Once this setting is deactivated, a generated kubeconfig will reference the [Rancher CLI](../cli-with-rancher/kubectl-utility.md#authentication-with-kubectl-and-kubeconfig-tokens-with-ttl) to retrieve a short-lived token for the cluster. When this kubeconfig is used in a client, such as `kubectl`, the Rancher CLI needs to be installed to complete the log in request. - -2. Set the `kubeconfig-token-ttl-minutes` setting to the desired duration in minutes. By default, `kubeconfig-token-ttl-minutes` is 960 (16 hours). +Set the `kubeconfig-generate-token` setting to `false`. This setting instructs Rancher to no longer automatically generate a token when a user downloads a kubeconfig file. Once this setting is deactivated, a generated kubeconfig will reference the [Rancher CLI](../cli-with-rancher/kubectl-utility.md#authentication-with-kubectl-and-kubeconfig-tokens-with-ttl) to retrieve a short-lived token for the cluster. When this kubeconfig is used in a client, such as `kubectl`, the Rancher CLI needs to be installed to complete the login request.
### Token Hashing @@ -67,7 +64,6 @@ These global settings affect Rancher token behavior. | ------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | [`auth-user-session-ttl-minutes`](#auth-user-session-ttl-minutes) | TTL in minutes on a user auth session token. | | [`kubeconfig-default-token-TTL-minutes`](#kubeconfig-default-token-ttl-minutes) | Default TTL applied to all kubeconfig tokens except those [generated by Rancher CLI](#disable-tokens-in-generated-kubeconfigs). **Introduced in version 2.6.6.** | -| [`kubeconfig-token-ttl-minutes`](#kubeconfig-token-ttl-minutes) | TTL used for tokens generated via the CLI. **Deprecated since version 2.6.6, and will be removed in 2.8.0.** This setting will be removed, and `kubeconfig-default-token-TTL-minutes` will be used for all kubeconfig tokens. | | [`auth-token-max-ttl-minutes`](#auth-token-max-ttl-minutes) | Max TTL for all tokens except those controlled by [`auth-user-session-ttl-minutes`](#auth-user-session-ttl-minutes). | | [`kubeconfig-generate-token`](#kubeconfig-generate-token) | If true, automatically generate tokens when a user downloads a kubeconfig. | @@ -78,10 +74,6 @@ Time to live (TTL) duration in minutes used to determine when a user auth sessio Time to live (TTL) duration in minutes used to determine when a kubeconfig token expires. When the token is expired, the API will reject the token. This setting can not be larger than [`auth-token-max-ttl-minutes`](#auth-token-max-ttl-minutes). This setting applies to a token generated in a requested kubeconfig file. Except those [generated by Rancher CLI](#disable-tokens-in-generated-kubeconfigs). **Introduced in version 2.6.6**. -#### kubeconfig-token-ttl-minutes -Time to live (TTL) duration in minutes used to determine when a kubeconfig token that was generated by the CLI expires. Tokens are generated by the CLI when [`kubeconfig-generate-token`](#kubeconfig-generate-token) is false. When the token is expired, the API will reject the token. This setting can not be larger than [`auth-token-max-ttl-minutes`](#auth-token-max-ttl-minutes). -**Deprecated since version 2.6.6, and will be removed in 2.8.0: This setting will be replaced with the value of [`kubeconfig-default-token-TTL-minutes`](#kubeconfig-default-token-ttl-minutes).** - #### auth-token-max-ttl-minutes Maximum Time to Live (TTL) in minutes allowed for auth tokens. If a user attempts to create a token with a TTL greater than `auth-token-max-ttl-minutes`, Rancher will set the token TTL to the value of `auth-token-max-ttl-minutes`. Auth tokens are tokens created for authenticating API requests. 
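The token settings this patch rewrites (`kubeconfig-default-token-ttl-minutes`, `auth-token-max-ttl-minutes`, `kubeconfig-generate-token`) are ordinary Rancher global settings. Besides the UI path the page describes, one hedged way to inspect and change them, assuming `kubectl` access to the cluster where Rancher runs and that global settings are exposed as `settings.management.cattle.io` resources:

```bash
# Read the current default kubeconfig token TTL, in minutes.
kubectl get settings.management.cattle.io kubeconfig-default-token-ttl-minutes \
  -o jsonpath='{.value}{"\n"}'

# Lower the default from 30 days (43200 minutes) to 7 days (10080 minutes).
kubectl patch settings.management.cattle.io kubeconfig-default-token-ttl-minutes \
  --type merge -p '{"value":"10080"}'
```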
**Changed in version 2.6.6: Applies to all kubeconfig tokens and api tokens.** From 19f5c680cba7d8bc03278a9c0c1ddf33d7818b60 Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Fri, 3 Nov 2023 13:29:16 -0700 Subject: [PATCH 20/65] Harvester landing page: apply feedback from #940 --- .../version-2.8/integrations-in-rancher/harvester/harvester.md | 2 +- .../version-2.8/integrations-in-rancher/harvester/overview.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/versioned_docs/version-2.8/integrations-in-rancher/harvester/harvester.md b/versioned_docs/version-2.8/integrations-in-rancher/harvester/harvester.md index 61c46290a50f..c54b817839bd 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/harvester/harvester.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/harvester/harvester.md @@ -4,7 +4,7 @@ title: Virtualization on Kubernetes with Harvester ## Harvester -Introduced in Rancher v2.6.1, Harvester is an open-source hyper-converged infrastructure (HCI) software built on Kubernetes. Harvester installs on bare metal servers and provides integrated virtualization and distributed storage capabilities. Although Harvester operates using Kubernetes, it does not require users to know Kubernetes concepts, making it a more user-friendly application. +Introduced in Rancher v2.6.1, Harvester is an open-source hyper-converged infrastructure (HCI) software built on Kubernetes. Harvester installs on bare metal servers and provides integrated virtualization and distributed storage capabilities. Although Harvester operates using Kubernetes, it does not require knowledge of Kubernetes concepts, making it more user-friendly. ## Harvester with Rancher diff --git a/versioned_docs/version-2.8/integrations-in-rancher/harvester/overview.md b/versioned_docs/version-2.8/integrations-in-rancher/harvester/overview.md index 7146cc06f83c..55a9f5b16ac4 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/harvester/overview.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/harvester/overview.md @@ -24,7 +24,7 @@ The [Harvester node driver](https://docs.harvesterhci.io/v1.1/rancher/node/node- Harvester allows `.ISO` images to be uploaded and displayed through the Harvester UI, but this is not supported in the Rancher UI. This is because `.ISO` images usually require additional setup that interferes with a clean deployment (without requiring user intervention), and they are not typically used in cloud environments. -Click [here](../../pages-for-subheaders/about-provisioning-drivers.md#node-drivers) for more information on node drivers in Rancher. +See [Provisioning Drivers](../../pages-for-subheaders/about-provisioning-drivers.md#node-drivers) for more information on node drivers in Rancher. 
### Port Requirements From 57be582c1544f022db832582f5cef50e99480212 Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Fri, 3 Nov 2023 13:38:34 -0700 Subject: [PATCH 21/65] Rancher Desktop landing page: apply feedback from #944 --- .../rancher-desktop.md | 26 ++++++++++++------- 1 file changed, 16 insertions(+), 10 deletions(-) diff --git a/versioned_docs/version-2.8/integrations-in-rancher/rancher-desktop.md b/versioned_docs/version-2.8/integrations-in-rancher/rancher-desktop.md index 9ab9e27d4081..a790e814c0e2 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/rancher-desktop.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/rancher-desktop.md @@ -6,23 +6,29 @@ title: Kubernetes on the Desktop with Rancher Desktop -Developing and testing cloud-native applications on your desktop requires a set of foundational blocks such as a virtual machine (if on macOS or Windows), a container run time, Kubernetes, popular utilities, etc. Installing these components individually and getting them to work with each other can get pesky at times. Rancher Desktop nicely bundles all these essential building blocks into an easily installable and manageable desktop application that offers the key features below. + +Rancher Desktop bundles together essential tools for developing and testing cloud-native applications from your desktop. + +If you're working from your local machine on apps intended for cloud environments, you normally need a lot of preparation. You need to select a container runtime, install Kubernetes and popular utilities, and possibly set up a virtual machine. Installing components individually and getting them to work together can be a time-consuming process. + +To reduce the complexity, Rancher Desktop offers teams the following key features: - Simple and easy installation on macOS, Linux and Windows operating systems. -- A ready to use light weight Kubernetes distribution (K3s) and the ability to pick Kubernetes versions. -- GUI-based cluster dashboard powered by Rancher to explore your local cluster. -- Freedom to choose between multiple container engines (dockerd(moby) vs. containerd). -- Preferences settings to configure the application to fit your needs. -- Bundled tools required for your container, Kubernetes-based development, operations workflows. -- Periodic updates to maintain the bundled tools’ versions up to date. -- Integrates with proven tools/IDEs (VS Code extensions, Skaffold, etc). +- K3s, a ready-to-use, lightweight Kubernetes distribution. +- The ability to easily switch between Kubernetes versions. +- A GUI-based cluster dashboard powered by Rancher to explore your local cluster. +- Freedom to choose your container engine: dockerd (moby) or containerd. +- Preference settings to configure the application to suit your needs. +- Bundled tools required for your container, for Kubernetes-based development, and for operation workflows. +- Periodic updates to keep bundled tools up to date. +- Integration with popular tools/IDEs, including VS Code and Skaffold. - Image & Registry access control. - Support for Docker extensions. -To learn more about Rancher Desktop, visit https://rancherdesktop.io and read the docs at https://docs.rancherdesktop.io/ +Visit the [Rancher Desktop](https://rancherdesktop.io) website and read the [docs](https://docs.rancherdesktop.io/) to learn more. To install Rancher Desktop on your machine, refer to the [installation guide](https://docs.rancherdesktop.io/getting-started/installation).
## Trying Rancher on Rancher Desktop -Rancher Desktop offers all the necessary setup and tools to make it easy for you to try out containerized and Helm-based applications. For example, you can try out Rancher Kubernetes Management platform right on your desktop using Rancher Desktop by following this [How-to guide](https://docs.rancherdesktop.io/how-to-guides/rancher-on-rancher-desktop). +Rancher Desktop offers the setup and tools you need to easily try out containerized, Helm-based applications. You can get started with the Rancher Kubernetes Management platform using Rancher Desktop, by following this [how-to guide](https://docs.rancherdesktop.io/how-to-guides/rancher-on-rancher-desktop). From 1cf050879d12db8675389510f67057f86e1b4a0b Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Fri, 3 Nov 2023 15:55:29 -0700 Subject: [PATCH 22/65] Epinio landing page: apply feedback from #945 --- .../integrations-in-rancher/elemental/elemental.md | 2 +- .../integrations-in-rancher/epinio/epinio.md | 12 ++++++------ 2 files changed, 7 insertions(+), 7 deletions(-) diff --git a/versioned_docs/version-2.8/integrations-in-rancher/elemental/elemental.md b/versioned_docs/version-2.8/integrations-in-rancher/elemental/elemental.md index d2c25a323f63..5e93a4b3538b 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/elemental/elemental.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/elemental/elemental.md @@ -23,5 +23,5 @@ Elemental in Rancher: ## Elemental with Rancher Prime - Deeply integrated already as GUI Extension in Rancher. -- Extends the Rancher story to OS. Working perfectly with SLE Micro for Rrancher today, in future with SLE Micro. Selling the full stack. +- Extends the Rancher story to the OS. Working perfectly with SLE Micro for Rancher today. \ No newline at end of file diff --git a/versioned_docs/version-2.8/integrations-in-rancher/epinio/epinio.md b/versioned_docs/version-2.8/integrations-in-rancher/epinio/epinio.md index 7384592fa488..fe8e4197f906 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/epinio/epinio.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/epinio/epinio.md @@ -6,17 +6,17 @@ title: Application Development Engine with Epinio -Epinio lets developers go from application sources to URL in a single step. Epinio is an Application Development Platform. It deploys on Kubernetes and lets application developers and operators work together without conflict. +Epinio is a Kubernetes-based Application Development Platform. It helps operators and developers collaborate without conflict, and accelerates the development process. With Epinio, teams can move from application sources to a live URL in a single step. ## Epinio with Rancher -Epinio's integration with Rancher lets developers quickly start using it without having to deal with the installation process or the configuration. You can install Epinio from the Apps. Currently the team is working to have Epinio available as a Rancher extension. +Epinio's integration with Rancher gives developers a jump start, without having to deal with the installation process or configuration. You can install Epinio directly from the Rancher UI's Apps page. ## Epinio with Rancher Prime -On top of the specific support service, Rancher Prime customers of Epinio should expect better integration of Epinio with other Rancher projects such as: +Rancher Prime customers can expect better integration of Epinio with other areas in the Rancher ecosystem such as: - Better integration with Rancher authentication. 
-- Integrating Neuvector/Kubewarden with Epinio. -- Using a custom Chart template with the right annotations to integrate with monitoring for example. -- A better service marketplace. +- Integration with NeuVector and Kubewarden. +- Custom Helm chart templates with preset annotations to seamlessly integrate with monitoring and other key tools. +- Improved service marketplace. From f3dfd92555b2f8da9ce16141c424bd565e7a89da Mon Sep 17 00:00:00 2001 From: Marty Hernandez Avedon Date: Mon, 6 Nov 2023 17:36:38 -0500 Subject: [PATCH 23/65] fixed spacing problem that ruined callout (#978) --- docs/pages-for-subheaders/quick-start-guides.md | 1 + .../version-2.5/pages-for-subheaders/quick-start-guides.md | 1 + .../version-2.6/pages-for-subheaders/quick-start-guides.md | 1 + .../version-2.7/pages-for-subheaders/quick-start-guides.md | 1 + .../version-2.8/pages-for-subheaders/quick-start-guides.md | 1 + 5 files changed, 5 insertions(+) diff --git a/docs/pages-for-subheaders/quick-start-guides.md b/docs/pages-for-subheaders/quick-start-guides.md index 8a7f8028dfbc..d4f0f9e26b93 100644 --- a/docs/pages-for-subheaders/quick-start-guides.md +++ b/docs/pages-for-subheaders/quick-start-guides.md @@ -5,6 +5,7 @@ title: Rancher Deployment Quick Start Guides + :::caution The intent of these guides is to quickly launch a sandbox that you can use to evaluate Rancher. These guides are not intended for production environments. For comprehensive setup instructions, see [Installation](installation-and-upgrade.md). diff --git a/versioned_docs/version-2.5/pages-for-subheaders/quick-start-guides.md b/versioned_docs/version-2.5/pages-for-subheaders/quick-start-guides.md index 723c870846a3..35b31740bbec 100644 --- a/versioned_docs/version-2.5/pages-for-subheaders/quick-start-guides.md +++ b/versioned_docs/version-2.5/pages-for-subheaders/quick-start-guides.md @@ -5,6 +5,7 @@ title: Rancher Deployment Quick Start Guides + >**Note:** The intent of these guides is to quickly launch a sandbox that you can use to evaluate Rancher. These guides are not intended for production environments. For comprehensive setup instructions, see [Installation](installation-and-upgrade.md). Howdy buckaroos! Use this section of the docs to jump start your deployment and testing of Rancher 2.x! It contains instructions for a simple Rancher setup and some common use cases. We plan on adding more content to this section in the future. diff --git a/versioned_docs/version-2.6/pages-for-subheaders/quick-start-guides.md b/versioned_docs/version-2.6/pages-for-subheaders/quick-start-guides.md index 8a7f8028dfbc..d4f0f9e26b93 100644 --- a/versioned_docs/version-2.6/pages-for-subheaders/quick-start-guides.md +++ b/versioned_docs/version-2.6/pages-for-subheaders/quick-start-guides.md @@ -5,6 +5,7 @@ title: Rancher Deployment Quick Start Guides + :::caution The intent of these guides is to quickly launch a sandbox that you can use to evaluate Rancher. These guides are not intended for production environments. For comprehensive setup instructions, see [Installation](installation-and-upgrade.md).
diff --git a/versioned_docs/version-2.7/pages-for-subheaders/quick-start-guides.md b/versioned_docs/version-2.7/pages-for-subheaders/quick-start-guides.md index 8a7f8028dfbc..d4f0f9e26b93 100644 --- a/versioned_docs/version-2.7/pages-for-subheaders/quick-start-guides.md +++ b/versioned_docs/version-2.7/pages-for-subheaders/quick-start-guides.md @@ -5,6 +5,7 @@ title: Rancher Deployment Quick Start Guides + :::caution The intent of these guides is to quickly launch a sandbox that you can use to evaluate Rancher. These guides are not intended for production environments. For comprehensive setup instructions, see [Installation](installation-and-upgrade.md). diff --git a/versioned_docs/version-2.8/pages-for-subheaders/quick-start-guides.md b/versioned_docs/version-2.8/pages-for-subheaders/quick-start-guides.md index 8a7f8028dfbc..d4f0f9e26b93 100644 --- a/versioned_docs/version-2.8/pages-for-subheaders/quick-start-guides.md +++ b/versioned_docs/version-2.8/pages-for-subheaders/quick-start-guides.md @@ -5,6 +5,7 @@ title: Rancher Deployment Quick Start Guides + :::caution The intent of these guides is to quickly launch a sandbox that you can use to evaluate Rancher. These guides are not intended for production environments. For comprehensive setup instructions, see [Installation](installation-and-upgrade.md). From 2dd001f6326453268b9b3ac279e23e9db0aae047 Mon Sep 17 00:00:00 2001 From: Marty Hernandez Avedon Date: Mon, 13 Nov 2023 13:16:43 -0500 Subject: [PATCH 24/65] rm mention of Rio from 2.7 & 2.8 (#980) --- docs/faq/general-faq.md | 8 +++----- versioned_docs/version-2.7/faq/general-faq.md | 8 +++----- versioned_docs/version-2.8/faq/general-faq.md | 8 +++----- 3 files changed, 9 insertions(+), 15 deletions(-) diff --git a/docs/faq/general-faq.md b/docs/faq/general-faq.md index 417875a6c6c1..8924e761ac93 100644 --- a/docs/faq/general-faq.md +++ b/docs/faq/general-faq.md @@ -6,9 +6,9 @@ title: General FAQ -This FAQ is a work in progress designed to answers the questions our users most frequently ask about Rancher v2.x. +This FAQ is a work in progress designed to answer the questions most frequently asked about Rancher v2.x. -See [Technical FAQ](technical-items.md), for frequently asked technical questions. +See the [Technical FAQ](technical-items.md) for frequently asked technical questions.
@@ -32,9 +32,7 @@ Rancher supports Windows Server 1809 containers. For details on how to set up a **Does Rancher support Istio?** -Rancher supports [Istio.](../pages-for-subheaders/istio.md) - -Furthermore, Istio is implemented in our micro-PaaS "Rio", which works on Rancher 2.x along with any CNCF compliant Kubernetes cluster. You can read more about it [here](https://rio.io/) +Rancher supports [Istio](../pages-for-subheaders/istio.md).
diff --git a/versioned_docs/version-2.7/faq/general-faq.md b/versioned_docs/version-2.7/faq/general-faq.md index 417875a6c6c1..8924e761ac93 100644 --- a/versioned_docs/version-2.7/faq/general-faq.md +++ b/versioned_docs/version-2.7/faq/general-faq.md @@ -6,9 +6,9 @@ title: General FAQ -This FAQ is a work in progress designed to answers the questions our users most frequently ask about Rancher v2.x. +This FAQ is a work in progress designed to answer the questions most frequently asked about Rancher v2.x. -See [Technical FAQ](technical-items.md), for frequently asked technical questions. +See the [Technical FAQ](technical-items.md) for frequently asked technical questions.
@@ -32,9 +32,7 @@ Rancher supports Windows Server 1809 containers. For details on how to set up a **Does Rancher support Istio?** -Rancher supports [Istio.](../pages-for-subheaders/istio.md) - -Furthermore, Istio is implemented in our micro-PaaS "Rio", which works on Rancher 2.x along with any CNCF compliant Kubernetes cluster. You can read more about it [here](https://rio.io/) +Rancher supports [Istio](../pages-for-subheaders/istio.md).
diff --git a/versioned_docs/version-2.8/faq/general-faq.md b/versioned_docs/version-2.8/faq/general-faq.md index 417875a6c6c1..8924e761ac93 100644 --- a/versioned_docs/version-2.8/faq/general-faq.md +++ b/versioned_docs/version-2.8/faq/general-faq.md @@ -6,9 +6,9 @@ title: General FAQ -This FAQ is a work in progress designed to answers the questions our users most frequently ask about Rancher v2.x. +This FAQ is a work in progress designed to answer the questions most frequently asked about Rancher v2.x. -See [Technical FAQ](technical-items.md), for frequently asked technical questions. +See the [Technical FAQ](technical-items.md) for frequently asked technical questions.
@@ -32,9 +32,7 @@ Rancher supports Windows Server 1809 containers. For details on how to set up a **Does Rancher support Istio?** -Rancher supports [Istio.](../pages-for-subheaders/istio.md) - -Furthermore, Istio is implemented in our micro-PaaS "Rio", which works on Rancher 2.x along with any CNCF compliant Kubernetes cluster. You can read more about it [here](https://rio.io/) +Rancher supports [Istio](../pages-for-subheaders/istio.md).
From 0719859803b6cb57310b645bb31ae561b5a8568d Mon Sep 17 00:00:00 2001 From: Marty Hernandez Avedon Date: Tue, 14 Nov 2023 16:14:35 -0500 Subject: [PATCH 25/65] General FAQ copyedit (#983) * General FAQ copyedit * restored intro --- docs/faq/general-faq.md | 52 +++++------------ versioned_docs/version-2.6/faq/general-faq.md | 58 ++++++------------- versioned_docs/version-2.7/faq/general-faq.md | 52 +++++------------ versioned_docs/version-2.8/faq/general-faq.md | 52 +++++------------ 4 files changed, 66 insertions(+), 148 deletions(-) diff --git a/docs/faq/general-faq.md b/docs/faq/general-faq.md index 8924e761ac93..93c58e2ab936 100644 --- a/docs/faq/general-faq.md +++ b/docs/faq/general-faq.md @@ -10,62 +10,42 @@ This FAQ is a work in progress designed to answer the questions most frequently See the [Technical FAQ](technical-items.md) for frequently asked technical questions. -
+## Does Rancher v2.x support Docker Swarm and Mesos as environment types? -**Does Rancher v2.x support Docker Swarm and Mesos as environment types?** +Swarm and Mesos are no longer selectable options when you create a new environment in Rancher v2.x. However, both Swarm and Mesos will continue to be available as Catalog applications you can deploy. It was a tough decision to make but, in the end, it came down to adoption. For example, out of more than 15,000 clusters, only about 200 were running Swarm. -When creating an environment in Rancher v2.x, Swarm and Mesos will no longer be standard options you can select. However, both Swarm and Mesos will continue to be available as Catalog applications you can deploy. It was a tough decision to make but, in the end, it came down to adoption. For example, out of more than 15,000 clusters, only about 200 or so are running Swarm. +## Is it possible to manage Azure Kubernetes Services with Rancher v2.x? -
+Yes. See our [Cluster Administration](../pages-for-subheaders/manage-clusters.md) guide for what Rancher features are available on AKS, as well as our [documentation on AKS](../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/rancher-on-aks.md). -**Is it possible to manage Azure Kubernetes Services with Rancher v2.x?** +## Does Rancher support Windows? -Yes. +Yes. Rancher supports Windows Server 1809 containers. For details on how to set up a cluster with Windows worker nodes, refer to the section on [configuring custom clusters for Windows.](../pages-for-subheaders/use-windows-clusters.md) -
+## Does Rancher support Istio? -**Does Rancher support Windows?** +Yes. Rancher supports [Istio](../pages-for-subheaders/istio.md). -Rancher supports Windows Server 1809 containers. For details on how to set up a cluster with Windows worker nodes, refer to the section on [configuring custom clusters for Windows.](../pages-for-subheaders/use-windows-clusters.md) - -
- -**Does Rancher support Istio?** - -Rancher supports [Istio](../pages-for-subheaders/istio.md). - -
- -**Will Rancher v2.x support Hashicorp's Vault for storing secrets?** +## Will Rancher v2.x support Hashicorp's Vault for storing secrets? Secrets management is on our roadmap but we haven't assigned it to a specific release yet. -
- -**Does Rancher v2.x support RKT containers as well?** +## Does Rancher v2.x support RKT containers as well? At this time, we only support Docker. -
- -**Does Rancher v2.x support Calico, Contiv, Contrail, Flannel, Weave net, etc., for embedded and registered Kubernetes?** +## Does Rancher v2.x support Calico, Contiv, Contrail, Flannel, Weave net, etc., for embedded and registered Kubernetes? Out-of-the-box, Rancher provides the following CNI network providers for Kubernetes clusters: Canal, Flannel, Calico and Weave. Always refer to the [Rancher Support Matrix](https://rancher.com/support-maintenance-terms/) for details about what is officially supported. -
- -**Are you planning on supporting Traefik for existing setups?** +## Are you planning on supporting Traefik for existing setups? We don't currently plan on providing embedded Traefik support, but we're still exploring load-balancing approaches. -
- -**Can I import OpenShift Kubernetes clusters into v2.x?** - -Our goal is to run any upstream Kubernetes clusters. Therefore, Rancher v2.x should work with OpenShift, but we haven't tested it yet. +## Can I import OpenShift Kubernetes clusters into v2.x? -
+Our goal is to run any Kubernetes clusters. Therefore, Rancher v2.x should work with OpenShift, but we haven't tested it yet. -**Are you going to integrate Longhorn?** +## Is Longhorn integrated with Rancher? -Yes. Longhorn was integrated into Rancher v2.5+. +Yes. Longhorn is integrated with Rancher v2.5 and later. diff --git a/versioned_docs/version-2.6/faq/general-faq.md b/versioned_docs/version-2.6/faq/general-faq.md index c41c70a5bc25..93c58e2ab936 100644 --- a/versioned_docs/version-2.6/faq/general-faq.md +++ b/versioned_docs/version-2.6/faq/general-faq.md @@ -6,68 +6,46 @@ title: General FAQ -This FAQ is a work in progress designed to answers the questions our users most frequently ask about Rancher v2.x. +This FAQ is a work in progress designed to answer the questions most frequently asked about Rancher v2.x. -See [Technical FAQ](technical-items.md), for frequently asked technical questions. +See the [Technical FAQ](technical-items.md) for frequently asked technical questions. -
+## Does Rancher v2.x support Docker Swarm and Mesos as environment types? -**Does Rancher v2.x support Docker Swarm and Mesos as environment types?** +Swarm and Mesos are no longer selectable options when you create a new environment in Rancher v2.x. However, both Swarm and Mesos will continue to be available as Catalog applications you can deploy. It was a tough decision to make but, in the end, it came down to adoption. For example, out of more than 15,000 clusters, only about 200 were running Swarm. -When creating an environment in Rancher v2.x, Swarm and Mesos will no longer be standard options you can select. However, both Swarm and Mesos will continue to be available as Catalog applications you can deploy. It was a tough decision to make but, in the end, it came down to adoption. For example, out of more than 15,000 clusters, only about 200 or so are running Swarm. +## Is it possible to manage Azure Kubernetes Services with Rancher v2.x? -
+Yes. See our [Cluster Administration](../pages-for-subheaders/manage-clusters.md) guide for what Rancher features are available on AKS, as well as our [documentation on AKS](../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/rancher-on-aks.md). -**Is it possible to manage Azure Kubernetes Services with Rancher v2.x?** +## Does Rancher support Windows? -Yes. +Yes. Rancher supports Windows Server 1809 containers. For details on how to set up a cluster with Windows worker nodes, refer to the section on [configuring custom clusters for Windows.](../pages-for-subheaders/use-windows-clusters.md) -
+## Does Rancher support Istio? -**Does Rancher support Windows?** +Yes. Rancher supports [Istio](../pages-for-subheaders/istio.md). -As of Rancher 2.3.0, we support Windows Server 1809 containers. For details on how to set up a cluster with Windows worker nodes, refer to the section on [configuring custom clusters for Windows.](../pages-for-subheaders/use-windows-clusters.md) - -
- -**Does Rancher support Istio?** - -As of Rancher 2.3.0, we support [Istio.](../pages-for-subheaders/istio.md) - -Furthermore, Istio is implemented in our micro-PaaS "Rio", which works on Rancher 2.x along with any CNCF compliant Kubernetes cluster. You can read more about it [here](https://rio.io/) - -
-
-**Will Rancher v2.x support Hashicorp's Vault for storing secrets?**
+## Will Rancher v2.x support HashiCorp's Vault for storing secrets?

Secrets management is on our roadmap, but we haven't assigned it to a specific release yet.

-<br/>
- -**Does Rancher v2.x support RKT containers as well?** +## Does Rancher v2.x support RKT containers as well? At this time, we only support Docker. -
-
-**Does Rancher v2.x support Calico, Contiv, Contrail, Flannel, Weave net, etc., for embedded and registered Kubernetes?**
+## Does Rancher v2.x support Calico, Contiv, Contrail, Flannel, Weave net, etc., for embedded and registered Kubernetes?

Out of the box, Rancher provides the following CNI network providers for Kubernetes clusters: Canal, Flannel, Calico and Weave. Always refer to the [Rancher Support Matrix](https://rancher.com/support-maintenance-terms/) for details about what is officially supported.

-<br/>
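+As a rough illustration of where this choice is made, below is a minimal sketch of the relevant RKE `cluster.yml` fragment (a hypothetical example; `canal` is the default, and `calico`, `flannel`, or `weave` can be substituted):
+
+```
+# Hypothetical RKE cluster.yml fragment: selects the CNI network provider
+network:
+  plugin: canal
+```
+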
- -**Are you planning on supporting Traefik for existing setups?** +## Are you planning on supporting Traefik for existing setups? We don't currently plan on providing embedded Traefik support, but we're still exploring load-balancing approaches. -
- -**Can I import OpenShift Kubernetes clusters into v2.x?** - -Our goal is to run any upstream Kubernetes clusters. Therefore, Rancher v2.x should work with OpenShift, but we haven't tested it yet. +## Can I import OpenShift Kubernetes clusters into v2.x? -
+Our goal is to run any Kubernetes cluster. Therefore, Rancher v2.x should work with OpenShift, but we haven't tested it yet.

-**Are you going to integrate Longhorn?**
+## Is Longhorn integrated with Rancher?

-Yes. Longhorn was integrated into Rancher v2.5+.
+Yes. Longhorn is integrated with Rancher v2.5 and later.

diff --git a/versioned_docs/version-2.7/faq/general-faq.md b/versioned_docs/version-2.7/faq/general-faq.md
index 8924e761ac93..93c58e2ab936 100644
--- a/versioned_docs/version-2.7/faq/general-faq.md
+++ b/versioned_docs/version-2.7/faq/general-faq.md
@@ -10,62 +10,42 @@ This FAQ is a work in progress designed to answer the questions most frequently

See the [Technical FAQ](technical-items.md) for frequently asked technical questions.

-<br/>
+## Does Rancher v2.x support Docker Swarm and Mesos as environment types?

-**Does Rancher v2.x support Docker Swarm and Mesos as environment types?**
+Swarm and Mesos are no longer selectable options when you create a new environment in Rancher v2.x. However, both Swarm and Mesos will continue to be available as Catalog applications you can deploy. It was a tough decision to make, but in the end, it came down to adoption. For example, out of more than 15,000 clusters, only about 200 were running Swarm.

-When creating an environment in Rancher v2.x, Swarm and Mesos will no longer be standard options you can select. However, both Swarm and Mesos will continue to be available as Catalog applications you can deploy. It was a tough decision to make but, in the end, it came down to adoption. For example, out of more than 15,000 clusters, only about 200 or so are running Swarm.
+## Is it possible to manage Azure Kubernetes Services with Rancher v2.x?

-<br/>
+Yes. See our [Cluster Administration](../pages-for-subheaders/manage-clusters.md) guide to learn which Rancher features are available on AKS, as well as our [documentation on AKS](../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/rancher-on-aks.md).

-**Is it possible to manage Azure Kubernetes Services with Rancher v2.x?**
+## Does Rancher support Windows?

-Yes.
+Yes. Rancher supports Windows Server 1809 containers. For details on how to set up a cluster with Windows worker nodes, refer to the section on [configuring custom clusters for Windows.](../pages-for-subheaders/use-windows-clusters.md)

-<br/>
+## Does Rancher support Istio? -**Does Rancher support Windows?** +Yes. Rancher supports [Istio](../pages-for-subheaders/istio.md). -Rancher supports Windows Server 1809 containers. For details on how to set up a cluster with Windows worker nodes, refer to the section on [configuring custom clusters for Windows.](../pages-for-subheaders/use-windows-clusters.md) - -
- -**Does Rancher support Istio?** - -Rancher supports [Istio](../pages-for-subheaders/istio.md). - -
-
-**Will Rancher v2.x support Hashicorp's Vault for storing secrets?**
+## Will Rancher v2.x support HashiCorp's Vault for storing secrets?

Secrets management is on our roadmap, but we haven't assigned it to a specific release yet.

-<br/>
- -**Does Rancher v2.x support RKT containers as well?** +## Does Rancher v2.x support RKT containers as well? At this time, we only support Docker. -
-
-**Does Rancher v2.x support Calico, Contiv, Contrail, Flannel, Weave net, etc., for embedded and registered Kubernetes?**
+## Does Rancher v2.x support Calico, Contiv, Contrail, Flannel, Weave net, etc., for embedded and registered Kubernetes?

Out of the box, Rancher provides the following CNI network providers for Kubernetes clusters: Canal, Flannel, Calico and Weave. Always refer to the [Rancher Support Matrix](https://rancher.com/support-maintenance-terms/) for details about what is officially supported.

-<br/>
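+As a rough illustration of where this choice is made, below is a minimal sketch of the relevant RKE `cluster.yml` fragment (a hypothetical example; `canal` is the default, and `calico`, `flannel`, or `weave` can be substituted):
+
+```
+# Hypothetical RKE cluster.yml fragment: selects the CNI network provider
+network:
+  plugin: canal
+```
+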
- -**Are you planning on supporting Traefik for existing setups?** +## Are you planning on supporting Traefik for existing setups? We don't currently plan on providing embedded Traefik support, but we're still exploring load-balancing approaches. -
- -**Can I import OpenShift Kubernetes clusters into v2.x?** - -Our goal is to run any upstream Kubernetes clusters. Therefore, Rancher v2.x should work with OpenShift, but we haven't tested it yet. +## Can I import OpenShift Kubernetes clusters into v2.x? -
+Our goal is to run any Kubernetes cluster. Therefore, Rancher v2.x should work with OpenShift, but we haven't tested it yet.

-**Are you going to integrate Longhorn?**
+## Is Longhorn integrated with Rancher?

-Yes. Longhorn was integrated into Rancher v2.5+.
+Yes. Longhorn is integrated with Rancher v2.5 and later.

diff --git a/versioned_docs/version-2.8/faq/general-faq.md b/versioned_docs/version-2.8/faq/general-faq.md
index 8924e761ac93..93c58e2ab936 100644
--- a/versioned_docs/version-2.8/faq/general-faq.md
+++ b/versioned_docs/version-2.8/faq/general-faq.md
@@ -10,62 +10,42 @@ This FAQ is a work in progress designed to answer the questions most frequently

See the [Technical FAQ](technical-items.md) for frequently asked technical questions.

-<br/>
+## Does Rancher v2.x support Docker Swarm and Mesos as environment types?

-**Does Rancher v2.x support Docker Swarm and Mesos as environment types?**
+Swarm and Mesos are no longer selectable options when you create a new environment in Rancher v2.x. However, both Swarm and Mesos will continue to be available as Catalog applications you can deploy. It was a tough decision to make, but in the end, it came down to adoption. For example, out of more than 15,000 clusters, only about 200 were running Swarm.

-When creating an environment in Rancher v2.x, Swarm and Mesos will no longer be standard options you can select. However, both Swarm and Mesos will continue to be available as Catalog applications you can deploy. It was a tough decision to make but, in the end, it came down to adoption. For example, out of more than 15,000 clusters, only about 200 or so are running Swarm.
+## Is it possible to manage Azure Kubernetes Services with Rancher v2.x?

-<br/>
+Yes. See our [Cluster Administration](../pages-for-subheaders/manage-clusters.md) guide to learn which Rancher features are available on AKS, as well as our [documentation on AKS](../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/rancher-on-aks.md).

-**Is it possible to manage Azure Kubernetes Services with Rancher v2.x?**
+## Does Rancher support Windows?

-Yes.
+Yes. Rancher supports Windows Server 1809 containers. For details on how to set up a cluster with Windows worker nodes, refer to the section on [configuring custom clusters for Windows.](../pages-for-subheaders/use-windows-clusters.md)

-<br/>
+## Does Rancher support Istio? -**Does Rancher support Windows?** +Yes. Rancher supports [Istio](../pages-for-subheaders/istio.md). -Rancher supports Windows Server 1809 containers. For details on how to set up a cluster with Windows worker nodes, refer to the section on [configuring custom clusters for Windows.](../pages-for-subheaders/use-windows-clusters.md) - -
- -**Does Rancher support Istio?** - -Rancher supports [Istio](../pages-for-subheaders/istio.md). - -
-
-**Will Rancher v2.x support Hashicorp's Vault for storing secrets?**
+## Will Rancher v2.x support HashiCorp's Vault for storing secrets?

Secrets management is on our roadmap, but we haven't assigned it to a specific release yet.

-<br/>
- -**Does Rancher v2.x support RKT containers as well?** +## Does Rancher v2.x support RKT containers as well? At this time, we only support Docker. -
-
-**Does Rancher v2.x support Calico, Contiv, Contrail, Flannel, Weave net, etc., for embedded and registered Kubernetes?**
+## Does Rancher v2.x support Calico, Contiv, Contrail, Flannel, Weave net, etc., for embedded and registered Kubernetes?

Out of the box, Rancher provides the following CNI network providers for Kubernetes clusters: Canal, Flannel, Calico and Weave. Always refer to the [Rancher Support Matrix](https://rancher.com/support-maintenance-terms/) for details about what is officially supported.

-<br/>
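+As a rough illustration of where this choice is made, below is a minimal sketch of the relevant RKE `cluster.yml` fragment (a hypothetical example; `canal` is the default, and `calico`, `flannel`, or `weave` can be substituted):
+
+```
+# Hypothetical RKE cluster.yml fragment: selects the CNI network provider
+network:
+  plugin: canal
+```
+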
- -**Are you planning on supporting Traefik for existing setups?** +## Are you planning on supporting Traefik for existing setups? We don't currently plan on providing embedded Traefik support, but we're still exploring load-balancing approaches. -
- -**Can I import OpenShift Kubernetes clusters into v2.x?** - -Our goal is to run any upstream Kubernetes clusters. Therefore, Rancher v2.x should work with OpenShift, but we haven't tested it yet. +## Can I import OpenShift Kubernetes clusters into v2.x? -
+Our goal is to run any Kubernetes cluster. Therefore, Rancher v2.x should work with OpenShift, but we haven't tested it yet.

-**Are you going to integrate Longhorn?**
+## Is Longhorn integrated with Rancher?

-Yes. Longhorn was integrated into Rancher v2.5+.
+Yes. Longhorn is integrated with Rancher v2.5 and later.

From b0435c7827d73154e61ac243e1b93bd8b77cd5c4 Mon Sep 17 00:00:00 2001
From: Sebastiaan van Steenis
Date: Thu, 16 Nov 2023 17:09:43 +0100
Subject: [PATCH 26/65] Update etcd troubleshoot for etcd 3.5.7 and higher (#985)

---
 .../troubleshooting-etcd-nodes.md             | 124 +++---------------
 .../troubleshooting-etcd-nodes.md             | 117 +++--------------
 .../troubleshooting-etcd-nodes.md             | 117 +++--------------
 .../troubleshooting-etcd-nodes.md             | 124 +++---------------
 4 files changed, 76 insertions(+), 406 deletions(-)

diff --git a/docs/troubleshooting/kubernetes-components/troubleshooting-etcd-nodes.md b/docs/troubleshooting/kubernetes-components/troubleshooting-etcd-nodes.md
index 9785d8e4f680..4e3faa090fd8 100644
--- a/docs/troubleshooting/kubernetes-components/troubleshooting-etcd-nodes.md
+++ b/docs/troubleshooting/kubernetes-components/troubleshooting-etcd-nodes.md
@@ -19,8 +19,8 @@ docker ps -a -f=name=etcd$
Example output:
```
-CONTAINER ID        IMAGE                         COMMAND                  CREATED             STATUS              PORTS               NAMES
-605a124503b9        rancher/coreos-etcd:v3.2.18   "/usr/local/bin/et..."   2 hours ago         Up 2 hours                              etcd
+CONTAINER ID   IMAGE                                 COMMAND                  CREATED          STATUS          PORTS     NAMES
+d26adbd23643   rancher/mirrored-coreos-etcd:v3.5.7   "/usr/local/bin/etcd…"   30 minutes ago   Up 30 minutes             etcd
```

## etcd Container Logging
@@ -51,30 +51,13 @@ Command:
docker exec etcd etcdctl member list
```

-Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node:
-```
-docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list"
-```
-
-Example output:
-```
-xxx, started, etcd-xxx, https://IP:2380, https://IP:2379,https://IP:4001
-xxx, started, etcd-xxx, https://IP:2380, https://IP:2379,https://IP:4001
-xxx, started, etcd-xxx, https://IP:2380, https://IP:2379,https://IP:4001
-```
-
### Check Endpoint Status

The values for `RAFT TERM` should be equal and `RAFT INDEX` should not be too far apart from each other.
Command: ``` -docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") etcd etcdctl endpoint status --write-out table -``` - -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -docker exec etcd etcdctl endpoint status --endpoints=$(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") --write-out table +docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',') etcd etcdctl endpoint status --write-out table ``` Example output: @@ -82,9 +65,9 @@ Example output: +-----------------+------------------+---------+---------+-----------+-----------+------------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX | +-----------------+------------------+---------+---------+-----------+-----------+------------+ -| https://IP:2379 | 333ef673fc4add56 | 3.2.18 | 24 MB | false | 72 | 66887 | -| https://IP:2379 | 5feed52d940ce4cf | 3.2.18 | 24 MB | true | 72 | 66887 | -| https://IP:2379 | db6b3bdb559a848d | 3.2.18 | 25 MB | false | 72 | 66887 | +| https://IP:2379 | 333ef673fc4add56 | 3.5.7 | 24 MB | false | 72 | 66887 | +| https://IP:2379 | 5feed52d940ce4cf | 3.5.7 | 24 MB | true | 72 | 66887 | +| https://IP:2379 | db6b3bdb559a848d | 3.5.7 | 25 MB | false | 72 | 66887 | +-----------------+------------------+---------+---------+-----------+-----------+------------+ ``` @@ -92,12 +75,7 @@ Example output: Command: ``` -docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") etcd etcdctl endpoint health -``` - -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -docker exec etcd etcdctl endpoint health --endpoints=$(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") +docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',') etcd etcdctl endpoint health ``` Example output: @@ -111,17 +89,9 @@ https://IP:2379 is healthy: successfully committed proposal: took = 2.451201ms Command: ``` -for endpoint in $(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5"); do +for endpoint in $(docker exec etcd etcdctl member list | cut -d, -f5); do echo "Validating connection to ${endpoint}/health" - docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -w "\n" --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) "${endpoint}/health" -done -``` - -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -for endpoint in $(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5"); do - echo "Validating connection to ${endpoint}/health"; - docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro 
appropriate/curl -s -w "\n" --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) "${endpoint}/health" + docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -w "\n" --cacert $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_CACERT" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) --cert $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_CERT" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) --key $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_KEY" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) "${endpoint}/health" done ``` @@ -139,28 +109,20 @@ Validating connection to https://IP:2379/health Command: ``` -for endpoint in $(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f4"); do - echo "Validating connection to ${endpoint}/version"; - docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl --http1.1 -s -w "\n" --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) "${endpoint}/version" -done -``` - -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -for endpoint in $(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f4"); do +for endpoint in $(docker exec etcd etcdctl member list | cut -d, -f4); do echo "Validating connection to ${endpoint}/version"; - docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl --http1.1 -s -w "\n" --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) "${endpoint}/version" + docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl --http1.1 -s -w "\n" --cacert $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_CACERT" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) --cert $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_CERT" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) --key $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_KEY" }}{{range $i, $part := (split $value 
"=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) "${endpoint}/version" done ``` Example output: ``` Validating connection to https://IP:2380/version -{"etcdserver":"3.2.18","etcdcluster":"3.2.0"} +{"etcdserver":"3.5.7","etcdcluster":"3.5.0"} Validating connection to https://IP:2380/version -{"etcdserver":"3.2.18","etcdcluster":"3.2.0"} +{"etcdserver":"3.5.7","etcdcluster":"3.5.0"} Validating connection to https://IP:2380/version -{"etcdserver":"3.2.18","etcdcluster":"3.2.0"} +{"etcdserver":"3.5.7","etcdcluster":"3.5.0"} ``` ## etcd Alarms @@ -172,11 +134,6 @@ Command: docker exec etcd etcdctl alarm list ``` -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT alarm list" -``` - Example output when NOSPACE alarm is triggered: ``` memberID:x alarm:NOSPACE @@ -203,12 +160,6 @@ rev=$(docker exec etcd etcdctl endpoint status --write-out json | egrep -o '"rev docker exec etcd etcdctl compact "$rev" ``` -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -rev=$(docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT endpoint status --write-out json | egrep -o '\"revision\":[0-9]*' | egrep -o '[0-9]*'") -docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT compact \"$rev\"" -``` - Example output: ``` compacted revision xxx @@ -218,12 +169,7 @@ compacted revision xxx Command: ``` -docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") etcd etcdctl defrag -``` - -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -docker exec etcd sh -c "etcdctl defrag --endpoints=$(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','")" +docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',') etcd etcdctl defrag ``` Example output: @@ -237,12 +183,7 @@ Finished defragmenting etcd member[https://IP:2379] Command: ``` -docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") etcd etcdctl endpoint status --write-out table -``` - -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -docker exec etcd sh -c "etcdctl endpoint status --endpoints=$(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") --write-out table" +docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',') etcd etcdctl endpoint status --write-out table ``` Example output: @@ -250,9 +191,9 @@ Example output: +-----------------+------------------+---------+---------+-----------+-----------+------------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX | +-----------------+------------------+---------+---------+-----------+-----------+------------+ -| https://IP:2379 | e973e4419737125 | 3.2.18 | 553 kB | false | 32 | 2449410 | -| https://IP:2379 | 4a509c997b26c206 | 3.2.18 | 553 kB | false | 32 | 2449410 
| -| https://IP:2379 | b217e736575e9dd3 | 3.2.18 | 553 kB | true | 32 | 2449410 | +| https://IP:2379 | e973e4419737125 | 3.5.7 | 553 kB | false | 32 | 2449410 | +| https://IP:2379 | 4a509c997b26c206 | 3.5.7 | 553 kB | false | 32 | 2449410 | +| https://IP:2379 | b217e736575e9dd3 | 3.5.7 | 553 kB | true | 32 | 2449410 | +-----------------+------------------+---------+---------+-----------+-----------+------------+ ``` @@ -267,13 +208,6 @@ docker exec etcd etcdctl alarm disarm docker exec etcd etcdctl alarm list ``` -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT alarm list" -docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT alarm disarm" -docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT alarm list" -``` - Example output: ``` docker exec etcd etcdctl alarm list @@ -311,11 +245,6 @@ In earlier etcd versions, you can use the API to dynamically change the log leve docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -XPUT -d '{"Level":"DEBUG"}' --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) $(docker exec etcd printenv ETCDCTL_ENDPOINTS)/config/local/log ``` -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -XPUT -d '{"Level":"DEBUG"}' --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) $(docker exec etcd printenv ETCDCTL_ENDPOINT)/config/local/log -``` - To reset the log level back to the default (`INFO`), you can use the following command. Command: @@ -323,11 +252,6 @@ Command: docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -XPUT -d '{"Level":"INFO"}' --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) $(docker exec etcd printenv ETCDCTL_ENDPOINTS)/config/local/log ``` -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -XPUT -d '{"Level":"INFO"}' --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) $(docker exec etcd printenv ETCDCTL_ENDPOINT)/config/local/log -``` - ## etcd Content If you want to investigate the contents of your etcd, you can either watch streaming events or you can query etcd directly, see below for examples. 
@@ -339,11 +263,6 @@ Command: docker exec etcd etcdctl watch --prefix /registry ``` -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -docker exec etcd etcdctl --endpoints=\$ETCDCTL_ENDPOINT watch --prefix /registry -``` - If you only want to see the affected keys (and not the binary data), you can append `| grep -a ^/registry` to the command to filter for keys only. ### Query etcd Directly @@ -353,11 +272,6 @@ Command: docker exec etcd etcdctl get /registry --prefix=true --keys-only ``` -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -docker exec etcd etcdctl --endpoints=\$ETCDCTL_ENDPOINT get /registry --prefix=true --keys-only -``` - You can process the data to get a summary of count per key, using the command below: ``` diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/troubleshooting/kubernetes-components/troubleshooting-etcd-nodes.md b/i18n/zh/docusaurus-plugin-content-docs/current/troubleshooting/kubernetes-components/troubleshooting-etcd-nodes.md index 72d6b470bfc8..9c3706850844 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/troubleshooting/kubernetes-components/troubleshooting-etcd-nodes.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/troubleshooting/kubernetes-components/troubleshooting-etcd-nodes.md @@ -15,8 +15,8 @@ docker ps -a -f=name=etcd$ 输出示例: ``` -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -605a124503b9 rancher/coreos-etcd:v3.2.18 "/usr/local/bin/et..." 2 hours ago Up 2 hours etcd +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +d26adbd23643 rancher/mirrored-coreos-etcd:v3.5.7 "/usr/local/bin/etcd…" 30 minutes ago Up 30 minutes etcd ``` ## etcd 容器日志记录 @@ -47,11 +47,6 @@ docker logs etcd docker exec etcd etcdctl member list ``` -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list" -``` - 输出示例: ``` xxx, started, etcd-xxx, https://IP:2380, https://IP:2379,https://IP:4001 @@ -65,12 +60,7 @@ xxx, started, etcd-xxx, https://IP:2380, https://IP:2379,https://IP:4001 命令: ``` -docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") etcd etcdctl endpoint status --write-out table -``` - -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -docker exec etcd etcdctl endpoint status --endpoints=$(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") --write-out table +docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',') etcd etcdctl endpoint status --write-out table ``` 输出示例: @@ -78,9 +68,9 @@ docker exec etcd etcdctl endpoint status --endpoints=$(docker exec etcd /bin/sh +-----------------+------------------+---------+---------+-----------+-----------+------------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX | +-----------------+------------------+---------+---------+-----------+-----------+------------+ -| https://IP:2379 | 333ef673fc4add56 | 3.2.18 | 24 MB | false | 72 | 66887 | -| https://IP:2379 | 5feed52d940ce4cf | 3.2.18 | 24 MB | true | 72 | 66887 | -| https://IP:2379 | db6b3bdb559a848d | 3.2.18 | 25 MB | false | 72 | 66887 | +| https://IP:2379 | 
333ef673fc4add56 | 3.5.7 | 24 MB | false | 72 | 66887 | +| https://IP:2379 | 5feed52d940ce4cf | 3.5.7 | 24 MB | true | 72 | 66887 | +| https://IP:2379 | db6b3bdb559a848d | 3.5.7 | 25 MB | false | 72 | 66887 | +-----------------+------------------+---------+---------+-----------+-----------+------------+ ``` @@ -88,12 +78,7 @@ docker exec etcd etcdctl endpoint status --endpoints=$(docker exec etcd /bin/sh 命令: ``` -docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") etcd etcdctl endpoint health -``` - -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -docker exec etcd etcdctl endpoint health --endpoints=$(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") +docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',') etcd etcdctl endpoint health ``` 输出示例: @@ -107,17 +92,9 @@ https://IP:2379 is healthy: successfully committed proposal: took = 2.451201ms 命令: ``` -for endpoint in $(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5"); do +for endpoint in $(docker exec etcd etcdctl member list | cut -d, -f5); do echo "Validating connection to ${endpoint}/health" - docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -w "\n" --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) "${endpoint}/health" -done -``` - -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -for endpoint in $(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5"); do - echo "Validating connection to ${endpoint}/health"; - docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -w "\n" --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) "${endpoint}/health" + docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -w "\n" --cacert $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_CACERT" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) --cert $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_CERT" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) --key $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_KEY" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) "${endpoint}/health" done ``` @@ -135,28 +112,20 @@ Validating connection to https://IP:2379/health 命令: ``` -for endpoint in $(docker exec etcd /bin/sh -c "etcdctl member 
list | cut -d, -f4"); do - echo "Validating connection to ${endpoint}/version"; - docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl --http1.1 -s -w "\n" --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) "${endpoint}/version" -done -``` - -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -for endpoint in $(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f4"); do +for endpoint in $(docker exec etcd etcdctl member list | cut -d, -f4); do echo "Validating connection to ${endpoint}/version"; - docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl --http1.1 -s -w "\n" --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) "${endpoint}/version" + docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl --http1.1 -s -w "\n" --cacert $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_CACERT" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) --cert $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_CERT" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) --key $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_KEY" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) "${endpoint}/version" done ``` 输出示例: ``` Validating connection to https://IP:2380/version -{"etcdserver":"3.2.18","etcdcluster":"3.2.0"} +{"etcdserver":"3.5.7","etcdcluster":"3.5.0"} Validating connection to https://IP:2380/version -{"etcdserver":"3.2.18","etcdcluster":"3.2.0"} +{"etcdserver":"3.5.7","etcdcluster":"3.5.0"} Validating connection to https://IP:2380/version -{"etcdserver":"3.2.18","etcdcluster":"3.2.0"} +{"etcdserver":"3.5.7","etcdcluster":"3.5.0"} ``` ## etcd 告警 @@ -168,11 +137,6 @@ etcd 会触发告警(例如空间不足时)。 docker exec etcd etcdctl alarm list ``` -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT alarm list" -``` - 触发 NOSPACE 告警的输出示例: ``` memberID:x alarm:NOSPACE @@ -199,12 +163,6 @@ rev=$(docker exec etcd etcdctl endpoint status --write-out json | egrep -o '"rev docker exec etcd etcdctl compact "$rev" ``` -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -rev=$(docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT endpoint status --write-out json | egrep -o '\"revision\":[0-9]*' | egrep -o '[0-9]*'") -docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT compact \"$rev\"" -``` - 输出示例: ``` compacted revision xxx @@ -214,12 +172,7 @@ compacted 
revision xxx 命令: ``` -docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") etcd etcdctl defrag -``` - -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -docker exec etcd sh -c "etcdctl defrag --endpoints=$(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','")" +docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',') etcd etcdctl defrag ``` 输出示例: @@ -233,12 +186,7 @@ Finished defragmenting etcd member[https://IP:2379] 命令: ``` -docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") etcd etcdctl endpoint status --write-out table -``` - -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -docker exec etcd sh -c "etcdctl endpoint status --endpoints=$(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") --write-out table" +docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',') etcd etcdctl endpoint status --write-out table ``` 输出示例: @@ -246,9 +194,9 @@ docker exec etcd sh -c "etcdctl endpoint status --endpoints=$(docker exec etcd / +-----------------+------------------+---------+---------+-----------+-----------+------------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX | +-----------------+------------------+---------+---------+-----------+-----------+------------+ -| https://IP:2379 | e973e4419737125 | 3.2.18 | 553 kB | false | 32 | 2449410 | -| https://IP:2379 | 4a509c997b26c206 | 3.2.18 | 553 kB | false | 32 | 2449410 | -| https://IP:2379 | b217e736575e9dd3 | 3.2.18 | 553 kB | true | 32 | 2449410 | +| https://IP:2379 | e973e4419737125 | 3.5.7 | 553 kB | false | 32 | 2449410 | +| https://IP:2379 | 4a509c997b26c206 | 3.5.7 | 553 kB | false | 32 | 2449410 | +| https://IP:2379 | b217e736575e9dd3 | 3.5.7 | 553 kB | true | 32 | 2449410 | +-----------------+------------------+---------+---------+-----------+-----------+------------+ ``` @@ -263,13 +211,6 @@ docker exec etcd etcdctl alarm disarm docker exec etcd etcdctl alarm list ``` -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT alarm list" -docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT alarm disarm" -docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT alarm list" -``` - 输出示例: ``` docker exec etcd etcdctl alarm list @@ -307,11 +248,6 @@ services: docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -XPUT -d '{"Level":"DEBUG"}' --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) $(docker exec etcd printenv ETCDCTL_ENDPOINTS)/config/local/log ``` -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -XPUT 
-d '{"Level":"DEBUG"}' --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) $(docker exec etcd printenv ETCDCTL_ENDPOINT)/config/local/log -``` - 要将日志级别重置回默认值 (`INFO`),你可以使用以下命令。 命令: @@ -319,11 +255,6 @@ docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{ docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -XPUT -d '{"Level":"INFO"}' --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) $(docker exec etcd printenv ETCDCTL_ENDPOINTS)/config/local/log ``` -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -XPUT -d '{"Level":"INFO"}' --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) $(docker exec etcd printenv ETCDCTL_ENDPOINT)/config/local/log -``` - ## etcd 内容 如果要查看 etcd 的内容,你可以查看流事件,也可以直接查询 etcd。详情请参阅以下示例。 @@ -335,11 +266,6 @@ docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{ docker exec etcd etcdctl watch --prefix /registry ``` -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -docker exec etcd etcdctl --endpoints=\$ETCDCTL_ENDPOINT watch --prefix /registry -``` - 如果你只想查看受影响的键(而不是二进制数据),你可以将 `| grep -a ^/registry` 尾附到该命令来过滤键。 ### 直接查询 etcd @@ -349,11 +275,6 @@ docker exec etcd etcdctl --endpoints=\$ETCDCTL_ENDPOINT watch --prefix /registry docker exec etcd etcdctl get /registry --prefix=true --keys-only ``` -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -docker exec etcd etcdctl --endpoints=\$ETCDCTL_ENDPOINT get /registry --prefix=true --keys-only -``` - 你可以使用以下命令来处理数据,从而获取每个键的计数摘要: ``` diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/troubleshooting/kubernetes-components/troubleshooting-etcd-nodes.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/troubleshooting/kubernetes-components/troubleshooting-etcd-nodes.md index 72d6b470bfc8..9c3706850844 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/troubleshooting/kubernetes-components/troubleshooting-etcd-nodes.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/troubleshooting/kubernetes-components/troubleshooting-etcd-nodes.md @@ -15,8 +15,8 @@ docker ps -a -f=name=etcd$ 输出示例: ``` -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -605a124503b9 rancher/coreos-etcd:v3.2.18 "/usr/local/bin/et..." 
2 hours ago Up 2 hours etcd +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +d26adbd23643 rancher/mirrored-coreos-etcd:v3.5.7 "/usr/local/bin/etcd…" 30 minutes ago Up 30 minutes etcd ``` ## etcd 容器日志记录 @@ -47,11 +47,6 @@ docker logs etcd docker exec etcd etcdctl member list ``` -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list" -``` - 输出示例: ``` xxx, started, etcd-xxx, https://IP:2380, https://IP:2379,https://IP:4001 @@ -65,12 +60,7 @@ xxx, started, etcd-xxx, https://IP:2380, https://IP:2379,https://IP:4001 命令: ``` -docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") etcd etcdctl endpoint status --write-out table -``` - -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -docker exec etcd etcdctl endpoint status --endpoints=$(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") --write-out table +docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',') etcd etcdctl endpoint status --write-out table ``` 输出示例: @@ -78,9 +68,9 @@ docker exec etcd etcdctl endpoint status --endpoints=$(docker exec etcd /bin/sh +-----------------+------------------+---------+---------+-----------+-----------+------------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX | +-----------------+------------------+---------+---------+-----------+-----------+------------+ -| https://IP:2379 | 333ef673fc4add56 | 3.2.18 | 24 MB | false | 72 | 66887 | -| https://IP:2379 | 5feed52d940ce4cf | 3.2.18 | 24 MB | true | 72 | 66887 | -| https://IP:2379 | db6b3bdb559a848d | 3.2.18 | 25 MB | false | 72 | 66887 | +| https://IP:2379 | 333ef673fc4add56 | 3.5.7 | 24 MB | false | 72 | 66887 | +| https://IP:2379 | 5feed52d940ce4cf | 3.5.7 | 24 MB | true | 72 | 66887 | +| https://IP:2379 | db6b3bdb559a848d | 3.5.7 | 25 MB | false | 72 | 66887 | +-----------------+------------------+---------+---------+-----------+-----------+------------+ ``` @@ -88,12 +78,7 @@ docker exec etcd etcdctl endpoint status --endpoints=$(docker exec etcd /bin/sh 命令: ``` -docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") etcd etcdctl endpoint health -``` - -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -docker exec etcd etcdctl endpoint health --endpoints=$(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") +docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',') etcd etcdctl endpoint health ``` 输出示例: @@ -107,17 +92,9 @@ https://IP:2379 is healthy: successfully committed proposal: took = 2.451201ms 命令: ``` -for endpoint in $(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5"); do +for endpoint in $(docker exec etcd etcdctl member list | cut -d, -f5); do echo "Validating connection to ${endpoint}/health" - docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -w "\n" --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec 
etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) "${endpoint}/health" -done -``` - -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -for endpoint in $(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5"); do - echo "Validating connection to ${endpoint}/health"; - docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -w "\n" --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) "${endpoint}/health" + docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -w "\n" --cacert $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_CACERT" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) --cert $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_CERT" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) --key $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_KEY" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) "${endpoint}/health" done ``` @@ -135,28 +112,20 @@ Validating connection to https://IP:2379/health 命令: ``` -for endpoint in $(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f4"); do - echo "Validating connection to ${endpoint}/version"; - docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl --http1.1 -s -w "\n" --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) "${endpoint}/version" -done -``` - -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -for endpoint in $(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f4"); do +for endpoint in $(docker exec etcd etcdctl member list | cut -d, -f4); do echo "Validating connection to ${endpoint}/version"; - docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl --http1.1 -s -w "\n" --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) "${endpoint}/version" + docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl --http1.1 -s -w "\n" --cacert $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_CACERT" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print 
"="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) --cert $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_CERT" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) --key $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_KEY" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) "${endpoint}/version" done ``` 输出示例: ``` Validating connection to https://IP:2380/version -{"etcdserver":"3.2.18","etcdcluster":"3.2.0"} +{"etcdserver":"3.5.7","etcdcluster":"3.5.0"} Validating connection to https://IP:2380/version -{"etcdserver":"3.2.18","etcdcluster":"3.2.0"} +{"etcdserver":"3.5.7","etcdcluster":"3.5.0"} Validating connection to https://IP:2380/version -{"etcdserver":"3.2.18","etcdcluster":"3.2.0"} +{"etcdserver":"3.5.7","etcdcluster":"3.5.0"} ``` ## etcd 告警 @@ -168,11 +137,6 @@ etcd 会触发告警(例如空间不足时)。 docker exec etcd etcdctl alarm list ``` -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT alarm list" -``` - 触发 NOSPACE 告警的输出示例: ``` memberID:x alarm:NOSPACE @@ -199,12 +163,6 @@ rev=$(docker exec etcd etcdctl endpoint status --write-out json | egrep -o '"rev docker exec etcd etcdctl compact "$rev" ``` -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -rev=$(docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT endpoint status --write-out json | egrep -o '\"revision\":[0-9]*' | egrep -o '[0-9]*'") -docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT compact \"$rev\"" -``` - 输出示例: ``` compacted revision xxx @@ -214,12 +172,7 @@ compacted revision xxx 命令: ``` -docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") etcd etcdctl defrag -``` - -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -docker exec etcd sh -c "etcdctl defrag --endpoints=$(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','")" +docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',') etcd etcdctl defrag ``` 输出示例: @@ -233,12 +186,7 @@ Finished defragmenting etcd member[https://IP:2379] 命令: ``` -docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") etcd etcdctl endpoint status --write-out table -``` - -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -docker exec etcd sh -c "etcdctl endpoint status --endpoints=$(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") --write-out table" +docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',') etcd etcdctl endpoint status --write-out table ``` 输出示例: @@ -246,9 +194,9 @@ docker exec etcd sh -c "etcdctl endpoint status --endpoints=$(docker exec etcd / +-----------------+------------------+---------+---------+-----------+-----------+------------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT 
INDEX | +-----------------+------------------+---------+---------+-----------+-----------+------------+ -| https://IP:2379 | e973e4419737125 | 3.2.18 | 553 kB | false | 32 | 2449410 | -| https://IP:2379 | 4a509c997b26c206 | 3.2.18 | 553 kB | false | 32 | 2449410 | -| https://IP:2379 | b217e736575e9dd3 | 3.2.18 | 553 kB | true | 32 | 2449410 | +| https://IP:2379 | e973e4419737125 | 3.5.7 | 553 kB | false | 32 | 2449410 | +| https://IP:2379 | 4a509c997b26c206 | 3.5.7 | 553 kB | false | 32 | 2449410 | +| https://IP:2379 | b217e736575e9dd3 | 3.5.7 | 553 kB | true | 32 | 2449410 | +-----------------+------------------+---------+---------+-----------+-----------+------------+ ``` @@ -263,13 +211,6 @@ docker exec etcd etcdctl alarm disarm docker exec etcd etcdctl alarm list ``` -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT alarm list" -docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT alarm disarm" -docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT alarm list" -``` - 输出示例: ``` docker exec etcd etcdctl alarm list @@ -307,11 +248,6 @@ services: docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -XPUT -d '{"Level":"DEBUG"}' --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) $(docker exec etcd printenv ETCDCTL_ENDPOINTS)/config/local/log ``` -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -XPUT -d '{"Level":"DEBUG"}' --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) $(docker exec etcd printenv ETCDCTL_ENDPOINT)/config/local/log -``` - 要将日志级别重置回默认值 (`INFO`),你可以使用以下命令。 命令: @@ -319,11 +255,6 @@ docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{ docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -XPUT -d '{"Level":"INFO"}' --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) $(docker exec etcd printenv ETCDCTL_ENDPOINTS)/config/local/log ``` -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -XPUT -d '{"Level":"INFO"}' --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) $(docker exec etcd printenv ETCDCTL_ENDPOINT)/config/local/log -``` - ## etcd 内容 如果要查看 etcd 的内容,你可以查看流事件,也可以直接查询 etcd。详情请参阅以下示例。 @@ -335,11 +266,6 @@ docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{ docker exec etcd etcdctl watch --prefix /registry ``` -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 
及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -docker exec etcd etcdctl --endpoints=\$ETCDCTL_ENDPOINT watch --prefix /registry -``` - 如果你只想查看受影响的键(而不是二进制数据),你可以将 `| grep -a ^/registry` 尾附到该命令来过滤键。 ### 直接查询 etcd @@ -349,11 +275,6 @@ docker exec etcd etcdctl --endpoints=\$ETCDCTL_ENDPOINT watch --prefix /registry docker exec etcd etcdctl get /registry --prefix=true --keys-only ``` -如果 etcd 版本低于 3.3.x(Kubernetes 1.13.x 及更低版本)且添加节点时指定了 `--internal-address`,则使用以下命令: -``` -docker exec etcd etcdctl --endpoints=\$ETCDCTL_ENDPOINT get /registry --prefix=true --keys-only -``` - 你可以使用以下命令来处理数据,从而获取每个键的计数摘要: ``` diff --git a/versioned_docs/version-2.8/troubleshooting/kubernetes-components/troubleshooting-etcd-nodes.md b/versioned_docs/version-2.8/troubleshooting/kubernetes-components/troubleshooting-etcd-nodes.md index 9785d8e4f680..4e3faa090fd8 100644 --- a/versioned_docs/version-2.8/troubleshooting/kubernetes-components/troubleshooting-etcd-nodes.md +++ b/versioned_docs/version-2.8/troubleshooting/kubernetes-components/troubleshooting-etcd-nodes.md @@ -19,8 +19,8 @@ docker ps -a -f=name=etcd$ Example output: ``` -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -605a124503b9 rancher/coreos-etcd:v3.2.18 "/usr/local/bin/et..." 2 hours ago Up 2 hours etcd +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +d26adbd23643 rancher/mirrored-coreos-etcd:v3.5.7 "/usr/local/bin/etcd…" 30 minutes ago Up 30 minutes etcd ``` ## etcd Container Logging @@ -51,30 +51,13 @@ Command: docker exec etcd etcdctl member list ``` -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list" -``` - -Example output: -``` -xxx, started, etcd-xxx, https://IP:2380, https://IP:2379,https://IP:4001 -xxx, started, etcd-xxx, https://IP:2380, https://IP:2379,https://IP:4001 -xxx, started, etcd-xxx, https://IP:2380, https://IP:2379,https://IP:4001 -``` - ### Check Endpoint Status The values for `RAFT TERM` should be equal and `RAFT INDEX` should be not be too far apart from each other. 
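For a scripted sanity check to complement the table that the command below prints, a minimal sketch along these lines can count the distinct `RAFT TERM` values across members (a healthy cluster reports exactly one). This is an illustrative addition, not part of this guide: it assumes `jq` is installed on the host, and it reuses the endpoint list construction shown below.

```
# Illustrative only: list the distinct raft terms reported by all members.
# More than one element in the output suggests the members disagree.
docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',') etcd etcdctl endpoint status --write-out json \
  | jq '[.[].Status.raftTerm] | unique'
```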
Command: ``` -docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") etcd etcdctl endpoint status --write-out table -``` - -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -docker exec etcd etcdctl endpoint status --endpoints=$(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") --write-out table +docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',') etcd etcdctl endpoint status --write-out table ``` Example output: @@ -82,9 +65,9 @@ Example output: +-----------------+------------------+---------+---------+-----------+-----------+------------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX | +-----------------+------------------+---------+---------+-----------+-----------+------------+ -| https://IP:2379 | 333ef673fc4add56 | 3.2.18 | 24 MB | false | 72 | 66887 | -| https://IP:2379 | 5feed52d940ce4cf | 3.2.18 | 24 MB | true | 72 | 66887 | -| https://IP:2379 | db6b3bdb559a848d | 3.2.18 | 25 MB | false | 72 | 66887 | +| https://IP:2379 | 333ef673fc4add56 | 3.5.7 | 24 MB | false | 72 | 66887 | +| https://IP:2379 | 5feed52d940ce4cf | 3.5.7 | 24 MB | true | 72 | 66887 | +| https://IP:2379 | db6b3bdb559a848d | 3.5.7 | 25 MB | false | 72 | 66887 | +-----------------+------------------+---------+---------+-----------+-----------+------------+ ``` @@ -92,12 +75,7 @@ Example output: Command: ``` -docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") etcd etcdctl endpoint health -``` - -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -docker exec etcd etcdctl endpoint health --endpoints=$(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") +docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',') etcd etcdctl endpoint health ``` Example output: @@ -111,17 +89,9 @@ https://IP:2379 is healthy: successfully committed proposal: took = 2.451201ms Command: ``` -for endpoint in $(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5"); do +for endpoint in $(docker exec etcd etcdctl member list | cut -d, -f5); do echo "Validating connection to ${endpoint}/health" - docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -w "\n" --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) "${endpoint}/health" -done -``` - -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -for endpoint in $(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5"); do - echo "Validating connection to ${endpoint}/health"; - docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro 
appropriate/curl -s -w "\n" --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) "${endpoint}/health" + docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -w "\n" --cacert $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_CACERT" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) --cert $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_CERT" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) --key $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_KEY" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) "${endpoint}/health" done ``` @@ -139,28 +109,20 @@ Validating connection to https://IP:2379/health Command: ``` -for endpoint in $(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f4"); do - echo "Validating connection to ${endpoint}/version"; - docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl --http1.1 -s -w "\n" --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) "${endpoint}/version" -done -``` - -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -for endpoint in $(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f4"); do +for endpoint in $(docker exec etcd etcdctl member list | cut -d, -f4); do echo "Validating connection to ${endpoint}/version"; - docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl --http1.1 -s -w "\n" --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) "${endpoint}/version" + docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl --http1.1 -s -w "\n" --cacert $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_CACERT" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) --cert $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_CERT" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) --key $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_KEY" }}{{range $i, $part := (split $value 
"=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) "${endpoint}/version" done ``` Example output: ``` Validating connection to https://IP:2380/version -{"etcdserver":"3.2.18","etcdcluster":"3.2.0"} +{"etcdserver":"3.5.7","etcdcluster":"3.5.0"} Validating connection to https://IP:2380/version -{"etcdserver":"3.2.18","etcdcluster":"3.2.0"} +{"etcdserver":"3.5.7","etcdcluster":"3.5.0"} Validating connection to https://IP:2380/version -{"etcdserver":"3.2.18","etcdcluster":"3.2.0"} +{"etcdserver":"3.5.7","etcdcluster":"3.5.0"} ``` ## etcd Alarms @@ -172,11 +134,6 @@ Command: docker exec etcd etcdctl alarm list ``` -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT alarm list" -``` - Example output when NOSPACE alarm is triggered: ``` memberID:x alarm:NOSPACE @@ -203,12 +160,6 @@ rev=$(docker exec etcd etcdctl endpoint status --write-out json | egrep -o '"rev docker exec etcd etcdctl compact "$rev" ``` -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -rev=$(docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT endpoint status --write-out json | egrep -o '\"revision\":[0-9]*' | egrep -o '[0-9]*'") -docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT compact \"$rev\"" -``` - Example output: ``` compacted revision xxx @@ -218,12 +169,7 @@ compacted revision xxx Command: ``` -docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") etcd etcdctl defrag -``` - -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -docker exec etcd sh -c "etcdctl defrag --endpoints=$(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','")" +docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',') etcd etcdctl defrag ``` Example output: @@ -237,12 +183,7 @@ Finished defragmenting etcd member[https://IP:2379] Command: ``` -docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") etcd etcdctl endpoint status --write-out table -``` - -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -docker exec etcd sh -c "etcdctl endpoint status --endpoints=$(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") --write-out table" +docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',') etcd etcdctl endpoint status --write-out table ``` Example output: @@ -250,9 +191,9 @@ Example output: +-----------------+------------------+---------+---------+-----------+-----------+------------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX | +-----------------+------------------+---------+---------+-----------+-----------+------------+ -| https://IP:2379 | e973e4419737125 | 3.2.18 | 553 kB | false | 32 | 2449410 | -| https://IP:2379 | 4a509c997b26c206 | 3.2.18 | 553 kB | false | 32 | 2449410 
| -| https://IP:2379 | b217e736575e9dd3 | 3.2.18 | 553 kB | true | 32 | 2449410 | +| https://IP:2379 | e973e4419737125 | 3.5.7 | 553 kB | false | 32 | 2449410 | +| https://IP:2379 | 4a509c997b26c206 | 3.5.7 | 553 kB | false | 32 | 2449410 | +| https://IP:2379 | b217e736575e9dd3 | 3.5.7 | 553 kB | true | 32 | 2449410 | +-----------------+------------------+---------+---------+-----------+-----------+------------+ ``` @@ -267,13 +208,6 @@ docker exec etcd etcdctl alarm disarm docker exec etcd etcdctl alarm list ``` -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT alarm list" -docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT alarm disarm" -docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT alarm list" -``` - Example output: ``` docker exec etcd etcdctl alarm list @@ -311,11 +245,6 @@ In earlier etcd versions, you can use the API to dynamically change the log leve docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -XPUT -d '{"Level":"DEBUG"}' --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) $(docker exec etcd printenv ETCDCTL_ENDPOINTS)/config/local/log ``` -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -XPUT -d '{"Level":"DEBUG"}' --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) $(docker exec etcd printenv ETCDCTL_ENDPOINT)/config/local/log -``` - To reset the log level back to the default (`INFO`), you can use the following command. Command: @@ -323,11 +252,6 @@ Command: docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -XPUT -d '{"Level":"INFO"}' --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) $(docker exec etcd printenv ETCDCTL_ENDPOINTS)/config/local/log ``` -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -XPUT -d '{"Level":"INFO"}' --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) $(docker exec etcd printenv ETCDCTL_ENDPOINT)/config/local/log -``` - ## etcd Content If you want to investigate the contents of your etcd, you can either watch streaming events or you can query etcd directly, see below for examples. 
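The inspection commands below return raw keys. If you first want a sense of which resource types dominate the keyspace, a short pipeline such as the following sketch can aggregate the key listing. It is illustrative rather than part of this guide, and it assumes the standard `/registry/<resource>/...` key layout that Kubernetes uses.

```
# Hypothetical helper: count keys per resource type under /registry.
# Field 3 of each key path (with "/" as the separator) is the resource type.
docker exec etcd etcdctl get /registry --prefix=true --keys-only \
  | grep -v '^$' \
  | awk -F'/' '{ count[$3]++ } END { for (k in count) print count[k], k }' \
  | sort -nr
```

High counts for a single type, such as events, often explain unexpected database growth.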
@@ -339,11 +263,6 @@ Command: docker exec etcd etcdctl watch --prefix /registry ``` -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -docker exec etcd etcdctl --endpoints=\$ETCDCTL_ENDPOINT watch --prefix /registry -``` - If you only want to see the affected keys (and not the binary data), you can append `| grep -a ^/registry` to the command to filter for keys only. ### Query etcd Directly @@ -353,11 +272,6 @@ Command: docker exec etcd etcdctl get /registry --prefix=true --keys-only ``` -Command when using etcd version lower than 3.3.x (Kubernetes 1.13.x and lower) and `--internal-address` was specified when adding the node: -``` -docker exec etcd etcdctl --endpoints=\$ETCDCTL_ENDPOINT get /registry --prefix=true --keys-only -``` - You can process the data to get a summary of count per key, using the command below: ``` From 25771e28439ca416b81be97bbe7be5d570541d5f Mon Sep 17 00:00:00 2001 From: pdellamore Date: Thu, 16 Nov 2023 13:35:02 -0300 Subject: [PATCH 27/65] Add session management section (#981) * Add note regarding rancher pentest reports public availability This PR will add a note regarding third-party penetration test reports public disclosure. * Add session management section to rancher security best practices This PR will create a new section inside Rancher Security Best Practices adding security recommendations for RM deployments that might need additional security controls. * Apply suggestions from code review Co-authored-by: Paulo Gomes * Update docs/reference-guides/rancher-security/rancher-security-best-practices.md * Update docs/reference-guides/rancher-security/rancher-security-best-practices.md Co-authored-by: Guilherme Macedo * versioned docs --------- Co-authored-by: Pietro Dell'Amore Co-authored-by: Marty Hernandez Avedon Co-authored-by: Paulo Gomes Co-authored-by: Guilherme Macedo --- .../rancher-security/rancher-security-best-practices.md | 8 +++++++- .../rancher-security/rancher-security-best-practices.md | 8 +++++++- .../rancher-security/rancher-security-best-practices.md | 8 +++++++- 3 files changed, 21 insertions(+), 3 deletions(-) diff --git a/docs/reference-guides/rancher-security/rancher-security-best-practices.md b/docs/reference-guides/rancher-security/rancher-security-best-practices.md index a5151379dcec..065ec7b6b7d3 100644 --- a/docs/reference-guides/rancher-security/rancher-security-best-practices.md +++ b/docs/reference-guides/rancher-security/rancher-security-best-practices.md @@ -12,4 +12,10 @@ The upstream (local) Rancher instance provides information about the Rancher ver Adversaries can misuse this information to identify the running Rancher version and cross-relate it with potential bugs to exploit. If your upstream Rancher instance is publicly available on the web, use a Layer 7 firewall to block `/version` and `/rancherversion`. -See [OWASP Web Application Security Testing - Enumerate Infrastructure and Application Admin Interfaces](https://owasp.org/www-project-web-security-testing-guide/stable/4-Web_Application_Security_Testing/02-Configuration_and_Deployment_Management_Testing/05-Enumerate_Infrastructure_and_Application_Admin_Interfaces.html) for more information on protecting your server. 
\ No newline at end of file +See [OWASP Web Application Security Testing - Enumerate Infrastructure and Application Admin Interfaces](https://owasp.org/www-project-web-security-testing-guide/stable/4-Web_Application_Security_Testing/02-Configuration_and_Deployment_Management_Testing/05-Enumerate_Infrastructure_and_Application_Admin_Interfaces.html) for more information on protecting your server. + +### Session Management + +Some environments may require additional security controls for session management. For example, you may want to limit users' concurrent active sessions or restrict which geolocations those sessions can be initiated from. Such features are not supported by Rancher out of the box. + +If you require such features, combine Layer 7 firewalls with [external authentication providers](../../pages-for-subheaders/authentication-config.md#external-vs-local-authentication). diff --git a/versioned_docs/version-2.7/reference-guides/rancher-security/rancher-security-best-practices.md b/versioned_docs/version-2.7/reference-guides/rancher-security/rancher-security-best-practices.md index a5151379dcec..1a5cb92faa78 100644 --- a/versioned_docs/version-2.7/reference-guides/rancher-security/rancher-security-best-practices.md +++ b/versioned_docs/version-2.7/reference-guides/rancher-security/rancher-security-best-practices.md @@ -12,4 +12,10 @@ The upstream (local) Rancher instance provides information about the Rancher ver Adversaries can misuse this information to identify the running Rancher version and cross-relate it with potential bugs to exploit. If your upstream Rancher instance is publicly available on the web, use a Layer 7 firewall to block `/version` and `/rancherversion`. -See [OWASP Web Application Security Testing - Enumerate Infrastructure and Application Admin Interfaces](https://owasp.org/www-project-web-security-testing-guide/stable/4-Web_Application_Security_Testing/02-Configuration_and_Deployment_Management_Testing/05-Enumerate_Infrastructure_and_Application_Admin_Interfaces.html) for more information on protecting your server. \ No newline at end of file +See [OWASP Web Application Security Testing - Enumerate Infrastructure and Application Admin Interfaces](https://owasp.org/www-project-web-security-testing-guide/stable/4-Web_Application_Security_Testing/02-Configuration_and_Deployment_Management_Testing/05-Enumerate_Infrastructure_and_Application_Admin_Interfaces.html) for more information on protecting your server. + +### Session Management + +Some environments may require additional security controls for session management. For example, you may want to limit users' concurrent active sessions or restrict which geolocations those sessions can be initiated from. Such features are not supported by Rancher out of the box. + +If you require such features, combine Layer 7 firewalls with [external authentication providers](../../pages-for-subheaders/authentication-config.md#external-vs-local-authentication). 
\ No newline at end of file diff --git a/versioned_docs/version-2.8/reference-guides/rancher-security/rancher-security-best-practices.md b/versioned_docs/version-2.8/reference-guides/rancher-security/rancher-security-best-practices.md index a5151379dcec..1a5cb92faa78 100644 --- a/versioned_docs/version-2.8/reference-guides/rancher-security/rancher-security-best-practices.md +++ b/versioned_docs/version-2.8/reference-guides/rancher-security/rancher-security-best-practices.md @@ -12,4 +12,10 @@ The upstream (local) Rancher instance provides information about the Rancher ver Adversaries can misuse this information to identify the running Rancher version and cross-relate it with potential bugs to exploit. If your upstream Rancher instance is publicly available on the web, use a Layer 7 firewall to block `/version` and `/rancherversion`. -See [OWASP Web Application Security Testing - Enumerate Infrastructure and Application Admin Interfaces](https://owasp.org/www-project-web-security-testing-guide/stable/4-Web_Application_Security_Testing/02-Configuration_and_Deployment_Management_Testing/05-Enumerate_Infrastructure_and_Application_Admin_Interfaces.html) for more information on protecting your server. \ No newline at end of file +See [OWASP Web Application Security Testing - Enumerate Infrastructure and Application Admin Interfaces](https://owasp.org/www-project-web-security-testing-guide/stable/4-Web_Application_Security_Testing/02-Configuration_and_Deployment_Management_Testing/05-Enumerate_Infrastructure_and_Application_Admin_Interfaces.html) for more information on protecting your server. + +### Session Management + +Some environments may require additional security controls for session management. For example, you may want to limit users' concurrent active sessions or restrict which geolocations those sessions can be initiated from. Such features are not supported by Rancher out of the box. + +If you require such features, combine Layer 7 firewalls with [external authentication providers](../../pages-for-subheaders/authentication-config.md#external-vs-local-authentication). 
\ No newline at end of file From ccd59cb482948fec660b5632d4479f47ed078a5e Mon Sep 17 00:00:00 2001 From: Marty Hernandez Avedon Date: Thu, 16 Nov 2023 14:51:28 -0500 Subject: [PATCH 28/65] #879 Add Project Owner to 'Project Member Can't Create Namespace' doc (#894) * 879 Add Project Owner to 'Project Member Can't Create Namespace' doc * versioned doc * Update docs/reference-guides/rancher-webhook.md Co-authored-by: Billy Tat * Update docs/reference-guides/rancher-webhook.md Co-authored-by: Michael Bolot * Apply suggestions from code review Co-authored-by: Lucas Saintarbor * Update docs/reference-guides/rancher-webhook.md * page sync * added v2.8 page * merge syntax left in file, rm'd backticks from version numbers --------- Co-authored-by: Billy Tat Co-authored-by: Michael Bolot Co-authored-by: Lucas Saintarbor --- docs/reference-guides/rancher-webhook.md | 12 ++++++------ .../version-2.7/reference-guides/rancher-webhook.md | 12 ++++++------ .../version-2.8/reference-guides/rancher-webhook.md | 12 ++++++------ 3 files changed, 18 insertions(+), 18 deletions(-) diff --git a/docs/reference-guides/rancher-webhook.md b/docs/reference-guides/rancher-webhook.md index aeb34d88b82a..800f3c92c9d6 100644 --- a/docs/reference-guides/rancher-webhook.md +++ b/docs/reference-guides/rancher-webhook.md @@ -34,11 +34,11 @@ It provides essential protection for Rancher-managed clusters, preventing securi ## What Resources Does the Webhook Validate? -An in-progress list of the resources that the webhook validates can be found in the [webhook's repo](https://github.com/rancher/webhook/blob/release/v0.4/docs.md). These docs are organized by group/version and resource (top-level header is group/version, next level header is resource). Checks specific to one version can be found by viewing the `docs.md` file associated with a particular tag (note that webhook versions prior to `v0.3.6` won't have this file). +You can find an in-progress list of the resources that the webhook validates in the [webhook's repo](https://github.com/rancher/webhook/blob/release/v0.4/docs.md). These docs are organized by group/version and resource (top-level header is group/version, next level header is resource). Checks specific to one version can be found by viewing the `docs.md` file associated with a particular tag (note that webhook versions prior to `v0.3.6` won't have this file). ## Bypassing the Webhook -Sometimes, it may be necessary to bypass Rancher's webhook validation to perform emergency restore operations, or fix other critical issues. The bypass operation is exhaustive, meaning that no webhook validations or mutations will apply when this is used. It is not possible to bypass some mutations or validations and have others still apply - they are either all bypassed, or all active. +Sometimes, you must bypass Rancher's webhook validation to perform emergency restore operations or fix other critical issues. The bypass operation is exhaustive, meaning no webhook validations or mutations apply when you use it. It is not possible to bypass some validations or mutations and have others still apply - they are either all bypassed or all active. :::danger @@ -65,7 +65,7 @@ helm upgrade --reuse-values rancher-webhook rancher-charts/rancher-webhook -n c ``` **Note:** This temporary workaround may violate an environment's security policy. This workaround also requires that port 9443 is unused on the host network. -**Note:** Helm, by default, uses a type that some webhook versions validate (secrets) to store information. 
In these cases, it's recommended to first directly update the deployment with the hostNetwork=true value using kubectl, and then perform the helm commands listed above to avoid drift between the helm configuration and the actual state in the cluster. +**Note:** Helm uses secrets by default. This is a datatype that some webhook versions validate to store information. In these cases, directly update the deployment with the hostNetwork=true value using kubectl, then run the helm commands listed above to avoid drift between the helm configuration and the actual state in the cluster. ### Private GKE Cluster @@ -99,10 +99,10 @@ If you roll back to Rancher v2.7.5 or earlier, you may see webhook versions that To help alleviate these issues, you can run the [adjust-downstream-webhook](https://github.com/rancherlabs/support-tools/tree/master/adjust-downstream-webhook) shell script after roll back. This script selects and installs the proper webhook version (or removes the webhook entirely) for the corresponding Rancher version. -### Project Members Can't Create Namespaces +### Project Users Can't Create Namespaces -**Note:** This affects Rancher versions `v2.7.2 - v2.7.4` +**Note:** The following affects Rancher v2.7.2 - v2.7.4. -Project users who aren't owners may not be able to create namespaces in projects. This issue is caused by Rancher automatically upgrading the webhook to a version compatible with a more recent version of Rancher than the one currently installed. +Project users may not be able to create namespaces in projects. This includes project owners. This issue is caused by Rancher automatically upgrading the webhook to a version compatible with a more recent version of Rancher than the one currently installed. To help alleviate these issues, you can run the [adjust-downstream-webhook](https://github.com/rancherlabs/support-tools/tree/master/adjust-downstream-webhook) shell script after roll back. This script selects and installs the proper webhook version (or removes the webhook entirely) for the corresponding Rancher version. diff --git a/versioned_docs/version-2.7/reference-guides/rancher-webhook.md b/versioned_docs/version-2.7/reference-guides/rancher-webhook.md index aeb34d88b82a..06b89cabdd87 100644 --- a/versioned_docs/version-2.7/reference-guides/rancher-webhook.md +++ b/versioned_docs/version-2.7/reference-guides/rancher-webhook.md @@ -34,11 +34,11 @@ It provides essential protection for Rancher-managed clusters, preventing securi ## What Resources Does the Webhook Validate? -An in-progress list of the resources that the webhook validates can be found in the [webhook's repo](https://github.com/rancher/webhook/blob/release/v0.4/docs.md). These docs are organized by group/version and resource (top-level header is group/version, next level header is resource). Checks specific to one version can be found by viewing the `docs.md` file associated with a particular tag (note that webhook versions prior to `v0.3.6` won't have this file). +You can find an in-progress list of the resources that the webhook validates in the [webhook's repo](https://github.com/rancher/webhook/blob/release/v0.4/docs.md). These docs are organized by group/version (top-level header) and resource (next level header). The checks specific to one version can be found by viewing the `docs.md` file associated with a particular tag. Note that webhook versions prior to `v0.3.6` lack this file. 
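To know which tag to inspect, check what is currently deployed. The sketch below is illustrative and not part of this page; it assumes the webhook runs in the `cattle-system` namespace, as in the Helm commands shown later, and that `jq` is available.

```
# Print the deployed rancher-webhook chart version and container image, so
# you can match them to a tag in the rancher/webhook repository.
helm list -n cattle-system --filter rancher-webhook -o json | jq -r '.[].chart'
kubectl -n cattle-system get deployment rancher-webhook \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
```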
## Bypassing the Webhook -Sometimes, it may be necessary to bypass Rancher's webhook validation to perform emergency restore operations, or fix other critical issues. The bypass operation is exhaustive, meaning that no webhook validations or mutations will apply when this is used. It is not possible to bypass some mutations or validations and have others still apply - they are either all bypassed, or all active. +Sometimes, you must bypass Rancher's webhook validation to perform emergency restore operations or fix other critical issues. The bypass operation is exhaustive, meaning that no webhook validations or mutations apply when you use it. It's not possible to bypass some validations or mutations and have others still apply. They are either all bypassed or all active. :::danger @@ -65,7 +65,7 @@ helm upgrade --reuse-values rancher-webhook rancher-charts/rancher-webhook -n c ``` **Note:** This temporary workaround may violate an environment's security policy. This workaround also requires that port 9443 is unused on the host network. -**Note:** Helm, by default, uses a type that some webhook versions validate (secrets) to store information. In these cases, it's recommended to first directly update the deployment with the hostNetwork=true value using kubectl, and then perform the helm commands listed above to avoid drift between the helm configuration and the actual state in the cluster. +**Note:** Helm uses secrets by default. This is a datatype that some webhook versions validate to store information. In these cases, directly update the deployment with the hostNetwork=true value using kubectl, then run the helm commands listed above to avoid drift between the helm configuration and the actual state in the cluster. ### Private GKE Cluster @@ -99,10 +99,10 @@ If you roll back to Rancher v2.7.5 or earlier, you may see webhook versions that To help alleviate these issues, you can run the [adjust-downstream-webhook](https://github.com/rancherlabs/support-tools/tree/master/adjust-downstream-webhook) shell script after roll back. This script selects and installs the proper webhook version (or removes the webhook entirely) for the corresponding Rancher version. -### Project Members Can't Create Namespaces +### Project Users Can't Create Namespaces -**Note:** This affects Rancher versions `v2.7.2 - v2.7.4` +**Note:** The following affects Rancher v2.7.2 - v2.7.4. -Project users who aren't owners may not be able to create namespaces in projects. This issue is caused by Rancher automatically upgrading the webhook to a version compatible with a more recent version of Rancher than the one currently installed. +Project users may not be able to create namespaces in projects. This includes project owners. This issue is caused by Rancher automatically upgrading the webhook to a version compatible with a more recent version of Rancher than the one currently installed. To help alleviate these issues, you can run the [adjust-downstream-webhook](https://github.com/rancherlabs/support-tools/tree/master/adjust-downstream-webhook) shell script after roll back. This script selects and installs the proper webhook version (or removes the webhook entirely) for the corresponding Rancher version. 
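To confirm whether a given user is actually blocked before and after running the script, `kubectl auth can-i` offers a quick check. This is an illustrative sketch: the user name below is a placeholder for a real Rancher principal, and impersonation requires admin access to the downstream cluster.

```
# Check namespace creation rights as the current user, then while
# impersonating the affected project user (u-xxxxx is a placeholder ID).
kubectl auth can-i create namespaces
kubectl auth can-i create namespaces --as u-xxxxx
```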
diff --git a/versioned_docs/version-2.8/reference-guides/rancher-webhook.md b/versioned_docs/version-2.8/reference-guides/rancher-webhook.md index aeb34d88b82a..800f3c92c9d6 100644 --- a/versioned_docs/version-2.8/reference-guides/rancher-webhook.md +++ b/versioned_docs/version-2.8/reference-guides/rancher-webhook.md @@ -34,11 +34,11 @@ It provides essential protection for Rancher-managed clusters, preventing securi ## What Resources Does the Webhook Validate? -An in-progress list of the resources that the webhook validates can be found in the [webhook's repo](https://github.com/rancher/webhook/blob/release/v0.4/docs.md). These docs are organized by group/version and resource (top-level header is group/version, next level header is resource). Checks specific to one version can be found by viewing the `docs.md` file associated with a particular tag (note that webhook versions prior to `v0.3.6` won't have this file). +You can find an in-progress list of the resources that the webhook validates in the [webhook's repo](https://github.com/rancher/webhook/blob/release/v0.4/docs.md). These docs are organized by group/version and resource (top-level header is group/version, next level header is resource). Checks specific to one version can be found by viewing the `docs.md` file associated with a particular tag (note that webhook versions prior to `v0.3.6` won't have this file). ## Bypassing the Webhook -Sometimes, it may be necessary to bypass Rancher's webhook validation to perform emergency restore operations, or fix other critical issues. The bypass operation is exhaustive, meaning that no webhook validations or mutations will apply when this is used. It is not possible to bypass some mutations or validations and have others still apply - they are either all bypassed, or all active. +Sometimes, you must bypass Rancher's webhook validation to perform emergency restore operations or fix other critical issues. The bypass operation is exhaustive, meaning no webhook validations or mutations apply when you use it. It is not possible to bypass some validations or mutations and have others still apply - they are either all bypassed or all active. :::danger @@ -65,7 +65,7 @@ helm upgrade --reuse-values rancher-webhook rancher-charts/rancher-webhook -n c ``` **Note:** This temporary workaround may violate an environment's security policy. This workaround also requires that port 9443 is unused on the host network. -**Note:** Helm, by default, uses a type that some webhook versions validate (secrets) to store information. In these cases, it's recommended to first directly update the deployment with the hostNetwork=true value using kubectl, and then perform the helm commands listed above to avoid drift between the helm configuration and the actual state in the cluster. +**Note:** Helm uses secrets by default. This is a datatype that some webhook versions validate to store information. In these cases, directly update the deployment with the hostNetwork=true value using kubectl, then run the helm commands listed above to avoid drift between the helm configuration and the actual state in the cluster. ### Private GKE Cluster @@ -99,10 +99,10 @@ If you roll back to Rancher v2.7.5 or earlier, you may see webhook versions that To help alleviate these issues, you can run the [adjust-downstream-webhook](https://github.com/rancherlabs/support-tools/tree/master/adjust-downstream-webhook) shell script after roll back. 
This script selects and installs the proper webhook version (or removes the webhook entirely) for the corresponding Rancher version. From 636336365bbff953cc1f6ddca81ed1ea40926a61 Mon Sep 17 00:00:00 2001 From: martyav <marty.hernandezavedon@suse.com> Date: Thu, 16 Nov 2023 15:50:37 -0500 Subject: [PATCH 29/65] 135 cloud-provider(aws): Need to correct content on cluster id in aws tagging section --- .../set-up-cloud-providers/amazon.md | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md b/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md index 5943ee01710e..9e15a67c1db5 100644 --- a/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md +++ b/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md @@ -147,17 +147,19 @@ Do not tag multiple security groups. Tagging multiple groups generates an error ::: -When you create an [Amazon EC2 Cluster](../../launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md), the `ClusterID` is automatically configured for the created nodes. Other resources still need to be tagged manually. +When you create an [Amazon EC2 Cluster](../../launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md), the `ClusterID` is automatically configured for the created nodes. Other resources still need to be manually tagged. Use the following tag: -**Key** = `kubernetes.io/cluster/CLUSTERID` **Value** = `owned` +**Key** = `kubernetes.io/cluster/<CLUSTERID>` **Value** = `owned` -`CLUSTERID` can be any string you like, as long as it is equal across all tags set. +Setting the value of the tag to `owned` tells the cluster that all resources with this tag are owned and managed by this cluster. -Setting the value of the tag to `owned` tells the cluster that all resources with this tag are owned and managed by this cluster. If you share resources between clusters, you can change the tag to: +If you share resources between clusters, you can change the tag to: -**Key** = `kubernetes.io/cluster/CLUSTERID` **Value** = `shared`. +**Key** = `kubernetes.io/cluster/<CLUSTERID>` **Value** = `shared`. + +The string value, `<CLUSTERID>`, should be the Kubernetes cluster's ID. Technically, the `<CLUSTERID>` value can be any string you like, as long as you use the same value across all tags set. 
In practice, if the `ClusterID` is automatically configured for some nodes, as it is with Amazon EC2 nodes, you should consistently use that same string across all of your resources, even if you have to manually set the tag. ### Using Amazon Elastic Container Registry (ECR) From 85fd70d7aa66a9bb9920c47c760e094300399aaa Mon Sep 17 00:00:00 2001 From: martyav <marty.hernandezavedon@suse.com> Date: Thu, 16 Nov 2023 16:03:58 -0500 Subject: [PATCH 30/65] versioning --- .../set-up-cloud-providers/amazon.md | 12 +++++++----- .../set-up-cloud-providers/amazon.md | 12 +++++++----- .../set-up-cloud-providers/amazon.md | 12 +++++++----- 3 files changed, 21 insertions(+), 15 deletions(-) diff --git a/versioned_docs/version-2.6/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md b/versioned_docs/version-2.6/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md index bf37f1fc39c6..b9c5b2525e40 100644 --- a/versioned_docs/version-2.6/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md +++ b/versioned_docs/version-2.6/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md @@ -147,17 +147,19 @@ Do not tag multiple security groups. Tagging multiple groups generates an error ::: -When you create an [Amazon EC2 Cluster](../../launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md), the `ClusterID` is automatically configured for the created nodes. Other resources still need to be tagged manually. +When you create an [Amazon EC2 Cluster](../../launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md), the `ClusterID` is automatically configured for the created nodes. Other resources still need to be manually tagged. Use the following tag: -**Key** = `kubernetes.io/cluster/CLUSTERID` **Value** = `owned` +**Key** = `kubernetes.io/cluster/<CLUSTERID>` **Value** = `owned` -`CLUSTERID` can be any string you like, as long as it is equal across all tags set. +Setting the value of the tag to `owned` tells the cluster that all resources with this tag are owned and managed by this cluster. -Setting the value of the tag to `owned` tells the cluster that all resources with this tag are owned and managed by this cluster. If you share resources between clusters, you can change the tag to: +If you share resources between clusters, you can change the tag to: -**Key** = `kubernetes.io/cluster/CLUSTERID` **Value** = `shared`. +**Key** = `kubernetes.io/cluster/<CLUSTERID>` **Value** = `shared`. + +The string value, `<CLUSTERID>`, should be the Kubernetes cluster's ID. Technically, the `<CLUSTERID>` value can be any string you like, as long as you use the same value across all tags set. In practice, if the `ClusterID` is automatically configured for some nodes, as it is with Amazon EC2 nodes, you should consistently use that same string across all of your resources, even if you have to manually set the tag. 
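As a concrete illustration of tagging the remaining resources manually, the AWS CLI sketch below tags a security group and a subnet. It is not part of this guide; the resource IDs and the `my-cluster` value are placeholders for your own IDs and `<CLUSTERID>`.

```
# Hypothetical example: apply the ClusterID tag to resources the cluster
# should treat as owned. Replace the resource IDs and my-cluster.
aws ec2 create-tags \
  --resources sg-0123456789abcdef0 subnet-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=owned
```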
### Using Amazon Elastic Container Registry (ECR) diff --git a/versioned_docs/version-2.7/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md b/versioned_docs/version-2.7/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md index 5943ee01710e..9e15a67c1db5 100644 --- a/versioned_docs/version-2.7/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md +++ b/versioned_docs/version-2.7/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md @@ -147,17 +147,19 @@ Do not tag multiple security groups. Tagging multiple groups generates an error ::: -When you create an [Amazon EC2 Cluster](../../launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md), the `ClusterID` is automatically configured for the created nodes. Other resources still need to be tagged manually. +When you create an [Amazon EC2 Cluster](../../launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md), the `ClusterID` is automatically configured for the created nodes. Other resources still need to be manually tagged. Use the following tag: -**Key** = `kubernetes.io/cluster/CLUSTERID` **Value** = `owned` +**Key** = `kubernetes.io/cluster/<CLUSTERID>` **Value** = `owned` -`CLUSTERID` can be any string you like, as long as it is equal across all tags set. +Setting the value of the tag to `owned` tells the cluster that all resources with this tag are owned and managed by this cluster. -Setting the value of the tag to `owned` tells the cluster that all resources with this tag are owned and managed by this cluster. If you share resources between clusters, you can change the tag to: +If you share resources between clusters, you can change the tag to: -**Key** = `kubernetes.io/cluster/CLUSTERID` **Value** = `shared`. +**Key** = `kubernetes.io/cluster/<CLUSTERID>` **Value** = `shared`. + +The string value, `<CLUSTERID>`, should be the Kubernetes cluster's ID. Technically, the `<CLUSTERID>` value can be any string you like, as long as you use the same value across all tags set. In practice, if the `ClusterID` is automatically configured for some nodes, as it is with Amazon EC2 nodes, you should consistently use that same string across all of your resources, even if you have to manually set the tag. ### Using Amazon Elastic Container Registry (ECR) diff --git a/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md b/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md index 5943ee01710e..9e15a67c1db5 100644 --- a/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md +++ b/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md @@ -147,17 +147,19 @@ Do not tag multiple security groups. Tagging multiple groups generates an error ::: -When you create an [Amazon EC2 Cluster](../../launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md), the `ClusterID` is automatically configured for the created nodes. Other resources still need to be tagged manually. 
+When you create an [Amazon EC2 Cluster](../../launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md), the `ClusterID` is automatically configured for the created nodes. Other resources still need to be manually tagged. Use the following tag: -**Key** = `kubernetes.io/cluster/CLUSTERID` **Value** = `owned` +**Key** = `kubernetes.io/cluster/` **Value** = `owned` -`CLUSTERID` can be any string you like, as long as it is equal across all tags set. +Setting the value of the tag to `owned` tells the cluster that all resources with this tag are owned and managed by this cluster. -Setting the value of the tag to `owned` tells the cluster that all resources with this tag are owned and managed by this cluster. If you share resources between clusters, you can change the tag to: +If you share resources between clusters, you can change the tag to: -**Key** = `kubernetes.io/cluster/CLUSTERID` **Value** = `shared`. +**Key** = `kubernetes.io/cluster/` **Value** = `shared`. + +The string value, ``, should be the Kubernetes cluster's ID. Technically, the `` value can be any string you like, as long as you use the same value across all tags set. In practice, if the `ClusterID` is automatically configured for some nodes, as it is with Amazon EC2 nodes, you should consistently use that same string across all of your resources, even if you have to manually set the tag. ### Using Amazon Elastic Container Registry (ECR) From b7f1b59fa92512018b3cee7d738ca6e939201d8f Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Fri, 17 Nov 2023 07:18:17 -0800 Subject: [PATCH 31/65] Add delete permission and remove duplicate header (#989) --- .../aks.md | 7 +++---- .../aks.md | 7 +++---- .../aks.md | 7 +++---- 3 files changed, 9 insertions(+), 12 deletions(-) diff --git a/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/aks.md b/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/aks.md index f176acc177b6..c3fac126e643 100644 --- a/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/aks.md +++ b/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/aks.md @@ -64,8 +64,6 @@ az ad sp create-for-rbac \ --role Contributor ``` -### Setting Up the Service Principal with the Azure Command Line Tool - Create the Resource Group by running this command: ``` @@ -176,7 +174,7 @@ For more information about connecting to an AKS private cluster, see the [AKS do "IsCustom": true, "Description": "Everything needed by Rancher AKSv2 operator", "Actions": [ - "Microsoft.Compute/disks/delete", + "Microsoft.Compute/disks/delete", "Microsoft.Compute/disks/read", "Microsoft.Compute/disks/write", "Microsoft.Compute/diskEncryptionSets/read", @@ -199,11 +197,12 @@ For more information about connecting to an AKS private cluster, see the [AKS do "Microsoft.Compute/virtualMachines/read", "Microsoft.Compute/virtualMachines/write", "Microsoft.ContainerService/managedClusters/read", - "Microsoft.ContainerService/managedClusters/write" + "Microsoft.ContainerService/managedClusters/write", "Microsoft.ContainerService/managedClusters/delete", "Microsoft.ContainerService/managedClusters/accessProfiles/listCredential/action", "Microsoft.ContainerService/managedClusters/agentPools/read", "Microsoft.ContainerService/managedClusters/agentPools/write", + 
"Microsoft.ContainerService/managedClusters/agentPools/delete", "Microsoft.ManagedIdentity/userAssignedIdentities/assign/action", "Microsoft.Network/applicationGateways/read", "Microsoft.Network/applicationGateways/write", diff --git a/versioned_docs/version-2.7/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/aks.md b/versioned_docs/version-2.7/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/aks.md index f176acc177b6..c3fac126e643 100644 --- a/versioned_docs/version-2.7/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/aks.md +++ b/versioned_docs/version-2.7/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/aks.md @@ -64,8 +64,6 @@ az ad sp create-for-rbac \ --role Contributor ``` -### Setting Up the Service Principal with the Azure Command Line Tool - Create the Resource Group by running this command: ``` @@ -176,7 +174,7 @@ For more information about connecting to an AKS private cluster, see the [AKS do "IsCustom": true, "Description": "Everything needed by Rancher AKSv2 operator", "Actions": [ - "Microsoft.Compute/disks/delete", + "Microsoft.Compute/disks/delete", "Microsoft.Compute/disks/read", "Microsoft.Compute/disks/write", "Microsoft.Compute/diskEncryptionSets/read", @@ -199,11 +197,12 @@ For more information about connecting to an AKS private cluster, see the [AKS do "Microsoft.Compute/virtualMachines/read", "Microsoft.Compute/virtualMachines/write", "Microsoft.ContainerService/managedClusters/read", - "Microsoft.ContainerService/managedClusters/write" + "Microsoft.ContainerService/managedClusters/write", "Microsoft.ContainerService/managedClusters/delete", "Microsoft.ContainerService/managedClusters/accessProfiles/listCredential/action", "Microsoft.ContainerService/managedClusters/agentPools/read", "Microsoft.ContainerService/managedClusters/agentPools/write", + "Microsoft.ContainerService/managedClusters/agentPools/delete", "Microsoft.ManagedIdentity/userAssignedIdentities/assign/action", "Microsoft.Network/applicationGateways/read", "Microsoft.Network/applicationGateways/write", diff --git a/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/aks.md b/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/aks.md index f176acc177b6..c3fac126e643 100644 --- a/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/aks.md +++ b/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/aks.md @@ -64,8 +64,6 @@ az ad sp create-for-rbac \ --role Contributor ``` -### Setting Up the Service Principal with the Azure Command Line Tool - Create the Resource Group by running this command: ``` @@ -176,7 +174,7 @@ For more information about connecting to an AKS private cluster, see the [AKS do "IsCustom": true, "Description": "Everything needed by Rancher AKSv2 operator", "Actions": [ - "Microsoft.Compute/disks/delete", + "Microsoft.Compute/disks/delete", "Microsoft.Compute/disks/read", "Microsoft.Compute/disks/write", "Microsoft.Compute/diskEncryptionSets/read", @@ -199,11 +197,12 @@ 
For more information about connecting to an AKS private cluster, see the [AKS do "Microsoft.Compute/virtualMachines/read", "Microsoft.Compute/virtualMachines/write", "Microsoft.ContainerService/managedClusters/read", - "Microsoft.ContainerService/managedClusters/write" + "Microsoft.ContainerService/managedClusters/write", "Microsoft.ContainerService/managedClusters/delete", "Microsoft.ContainerService/managedClusters/accessProfiles/listCredential/action", "Microsoft.ContainerService/managedClusters/agentPools/read", "Microsoft.ContainerService/managedClusters/agentPools/write", + "Microsoft.ContainerService/managedClusters/agentPools/delete", "Microsoft.ManagedIdentity/userAssignedIdentities/assign/action", "Microsoft.Network/applicationGateways/read", "Microsoft.Network/applicationGateways/write", From 631c5e485b3f1c6cb05095d311e45e2b835aea3e Mon Sep 17 00:00:00 2001 From: Marty Hernandez Avedon Date: Fri, 17 Nov 2023 10:19:19 -0500 Subject: [PATCH 32/65] Apply suggestions from code review Co-authored-by: Billy Tat --- .../set-up-cloud-providers/amazon.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md b/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md index 9e15a67c1db5..10700b418d4e 100644 --- a/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md +++ b/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md @@ -159,7 +159,7 @@ If you share resources between clusters, you can change the tag to: **Key** = `kubernetes.io/cluster/<ClusterID>` **Value** = `shared`. -The string value, `<ClusterID>`, should be the Kubernetes cluster's ID. Technically, the `<ClusterID>` value can be any string you like, as long as you use the same value across all tags set. In practice, if the `ClusterID` is automatically configured for some nodes, as it is with Amazon EC2 nodes, you should consistently use that same string across all of your resources, even if you have to manually set the tag. +The string value, `<ClusterID>`, is the Kubernetes cluster's ID. ### Using Amazon Elastic Container Registry (ECR) From 42e24305a4d3e315f6244bf65ee11c9f90ca593c Mon Sep 17 00:00:00 2001 From: martyav Date: Fri, 17 Nov 2023 10:30:34 -0500 Subject: [PATCH 33/65] Amazon only uses EC2, so 'use anything' is a moot point --- .../set-up-cloud-providers/amazon.md | 2 +- .../set-up-cloud-providers/amazon.md | 2 +- .../set-up-cloud-providers/amazon.md | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/versioned_docs/version-2.6/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md b/versioned_docs/version-2.6/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md index b9c5b2525e40..e72de8b8b033 100644 --- a/versioned_docs/version-2.6/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md +++ b/versioned_docs/version-2.6/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md @@ -159,7 +159,7 @@ If you share resources between clusters, you can change the tag to: **Key** = `kubernetes.io/cluster/<ClusterID>` **Value** = `shared`. -The string value, `<ClusterID>`, should be the Kubernetes cluster's ID. Technically, the `<ClusterID>` value can be any string you like, as long as you use the same value across all tags set.
In practice, if the `ClusterID` is automatically configured for some nodes, as it is with Amazon EC2 nodes, you should consistently use that same string across all of your resources, even if you have to manually set the tag. +The string value, `<ClusterID>`, is the Kubernetes cluster's ID. ### Using Amazon Elastic Container Registry (ECR) diff --git a/versioned_docs/version-2.7/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md b/versioned_docs/version-2.7/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md index 9e15a67c1db5..10700b418d4e 100644 --- a/versioned_docs/version-2.7/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md +++ b/versioned_docs/version-2.7/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md @@ -159,7 +159,7 @@ If you share resources between clusters, you can change the tag to: **Key** = `kubernetes.io/cluster/<ClusterID>` **Value** = `shared`. -The string value, `<ClusterID>`, should be the Kubernetes cluster's ID. Technically, the `<ClusterID>` value can be any string you like, as long as you use the same value across all tags set. In practice, if the `ClusterID` is automatically configured for some nodes, as it is with Amazon EC2 nodes, you should consistently use that same string across all of your resources, even if you have to manually set the tag. +The string value, `<ClusterID>`, is the Kubernetes cluster's ID. ### Using Amazon Elastic Container Registry (ECR) diff --git a/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md b/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md index 9e15a67c1db5..10700b418d4e 100644 --- a/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md +++ b/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md @@ -159,7 +159,7 @@ If you share resources between clusters, you can change the tag to: **Key** = `kubernetes.io/cluster/<ClusterID>` **Value** = `shared`. -The string value, `<ClusterID>`, should be the Kubernetes cluster's ID. Technically, the `<ClusterID>` value can be any string you like, as long as you use the same value across all tags set. In practice, if the `ClusterID` is automatically configured for some nodes, as it is with Amazon EC2 nodes, you should consistently use that same string across all of your resources, even if you have to manually set the tag. +The string value, `<ClusterID>`, is the Kubernetes cluster's ID. ### Using Amazon Elastic Container Registry (ECR) From a49000ef2223cc57701c565e854c2b0f83e544c7 Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Mon, 20 Nov 2023 13:25:41 -0800 Subject: [PATCH 34/65] Update CNI popularity chart numbers. Credit to @amitmavgupta for original PR.
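
The star and fork counts in the updated table can be re-checked against the public GitHub REST API. A minimal sketch, assuming `curl` and `jq` are available; the repository list simply mirrors the table, and contributor counts are omitted because the API exposes them only through a paginated endpoint:

```
# Print the current star and fork counts for each CNI project in the table.
# Unauthenticated GitHub API requests are rate-limited (60 per hour per IP).
for repo in projectcalico/canal flannel-io/flannel projectcalico/calico weaveworks/weave cilium/cilium; do
  curl -s "https://api.github.com/repos/${repo}" |
    jq -r '"\(.full_name): \(.stargazers_count) stars, \(.forks_count) forks"'
done
```

Five unauthenticated calls stay well under the rate limit, so no token is needed for a spot check like this.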
--- .../faq/container-network-interface-providers.md | 16 +++++++--------- .../faq/container-network-interface-providers.md | 15 ++++++--------- .../faq/container-network-interface-providers.md | 15 ++++++--------- .../faq/container-network-interface-providers.md | 15 ++++++--------- 4 files changed, 25 insertions(+), 36 deletions(-) diff --git a/docs/faq/container-network-interface-providers.md b/docs/faq/container-network-interface-providers.md index 490713c7b5a7..bd7bb49fae22 100644 --- a/docs/faq/container-network-interface-providers.md +++ b/docs/faq/container-network-interface-providers.md @@ -182,20 +182,18 @@ The following table summarizes the different features available for each CNI net - Ingress/Egress Policies: This feature allows you to manage routing control for both Kubernetes and non-Kubernetes communications. - + ## CNI Community Popularity -The following table summarizes different GitHub metrics to give you an idea of each project's popularity and activity. This data was collected in January 2022. +The following table summarizes different GitHub metrics to give you an idea of each project's popularity and activity. This data was collected in November 2023. | Provider | Project | Stars | Forks | Contributors | | ---- | ---- | ---- | ---- | ---- | -| Canal | https://github.com/projectcalico/canal | 679 | 100 | 21 | -| Flannel | https://github.com/flannel-io/flannel | 7k | 2.5k | 185 | -| Calico | https://github.com/projectcalico/calico | 3.1k | 741 | 224 | -| Weave | https://github.com/weaveworks/weave/ | 6.2k | 635 | 84 | -| Cilium | https://github.com/cilium/cilium | 10.6k | 1.3k | 352 | - -
+| Canal | https://github.com/projectcalico/canal | 707 | 104 | 20 | +| Flannel | https://github.com/flannel-io/flannel | 8.3k | 2.9k | 225 | +| Calico | https://github.com/projectcalico/calico | 5.1k | 1.2k | 328 | +| Weave | https://github.com/weaveworks/weave/ | 6.5k | 672 | 87 | +| Cilium | https://github.com/cilium/cilium | 17.1k | 2.5k | 677 | ## Which CNI Provider Should I Use? diff --git a/versioned_docs/version-2.6/faq/container-network-interface-providers.md b/versioned_docs/version-2.6/faq/container-network-interface-providers.md index 490713c7b5a7..2f4809d249e5 100644 --- a/versioned_docs/version-2.6/faq/container-network-interface-providers.md +++ b/versioned_docs/version-2.6/faq/container-network-interface-providers.md @@ -182,20 +182,17 @@ The following table summarizes the different features available for each CNI net - Ingress/Egress Policies: This feature allows you to manage routing control for both Kubernetes and non-Kubernetes communications. - ## CNI Community Popularity -The following table summarizes different GitHub metrics to give you an idea of each project's popularity and activity. This data was collected in January 2022. +The following table summarizes different GitHub metrics to give you an idea of each project's popularity and activity. This data was collected in November 2023. | Provider | Project | Stars | Forks | Contributors | | ---- | ---- | ---- | ---- | ---- | -| Canal | https://github.com/projectcalico/canal | 679 | 100 | 21 | -| Flannel | https://github.com/flannel-io/flannel | 7k | 2.5k | 185 | -| Calico | https://github.com/projectcalico/calico | 3.1k | 741 | 224 | -| Weave | https://github.com/weaveworks/weave/ | 6.2k | 635 | 84 | -| Cilium | https://github.com/cilium/cilium | 10.6k | 1.3k | 352 | - -
+| Canal | https://github.com/projectcalico/canal | 707 | 104 | 20 | +| Flannel | https://github.com/flannel-io/flannel | 8.3k | 2.9k | 225 | +| Calico | https://github.com/projectcalico/calico | 5.1k | 1.2k | 328 | +| Weave | https://github.com/weaveworks/weave/ | 6.5k | 672 | 87 | +| Cilium | https://github.com/cilium/cilium | 17.1k | 2.5k | 677 | ## Which CNI Provider Should I Use? diff --git a/versioned_docs/version-2.7/faq/container-network-interface-providers.md b/versioned_docs/version-2.7/faq/container-network-interface-providers.md index 490713c7b5a7..2f4809d249e5 100644 --- a/versioned_docs/version-2.7/faq/container-network-interface-providers.md +++ b/versioned_docs/version-2.7/faq/container-network-interface-providers.md @@ -182,20 +182,17 @@ The following table summarizes the different features available for each CNI net - Ingress/Egress Policies: This feature allows you to manage routing control for both Kubernetes and non-Kubernetes communications. - ## CNI Community Popularity -The following table summarizes different GitHub metrics to give you an idea of each project's popularity and activity. This data was collected in January 2022. +The following table summarizes different GitHub metrics to give you an idea of each project's popularity and activity. This data was collected in November 2023. | Provider | Project | Stars | Forks | Contributors | | ---- | ---- | ---- | ---- | ---- | -| Canal | https://github.com/projectcalico/canal | 679 | 100 | 21 | -| Flannel | https://github.com/flannel-io/flannel | 7k | 2.5k | 185 | -| Calico | https://github.com/projectcalico/calico | 3.1k | 741 | 224 | -| Weave | https://github.com/weaveworks/weave/ | 6.2k | 635 | 84 | -| Cilium | https://github.com/cilium/cilium | 10.6k | 1.3k | 352 | - -
+| Canal | https://github.com/projectcalico/canal | 707 | 104 | 20 | +| Flannel | https://github.com/flannel-io/flannel | 8.3k | 2.9k | 225 | +| Calico | https://github.com/projectcalico/calico | 5.1k | 1.2k | 328 | +| Weave | https://github.com/weaveworks/weave/ | 6.5k | 672 | 87 | +| Cilium | https://github.com/cilium/cilium | 17.1k | 2.5k | 677 | ## Which CNI Provider Should I Use? diff --git a/versioned_docs/version-2.8/faq/container-network-interface-providers.md b/versioned_docs/version-2.8/faq/container-network-interface-providers.md index 490713c7b5a7..2f4809d249e5 100644 --- a/versioned_docs/version-2.8/faq/container-network-interface-providers.md +++ b/versioned_docs/version-2.8/faq/container-network-interface-providers.md @@ -182,20 +182,17 @@ The following table summarizes the different features available for each CNI net - Ingress/Egress Policies: This feature allows you to manage routing control for both Kubernetes and non-Kubernetes communications. - ## CNI Community Popularity -The following table summarizes different GitHub metrics to give you an idea of each project's popularity and activity. This data was collected in January 2022. +The following table summarizes different GitHub metrics to give you an idea of each project's popularity and activity. This data was collected in November 2023. | Provider | Project | Stars | Forks | Contributors | | ---- | ---- | ---- | ---- | ---- | -| Canal | https://github.com/projectcalico/canal | 679 | 100 | 21 | -| Flannel | https://github.com/flannel-io/flannel | 7k | 2.5k | 185 | -| Calico | https://github.com/projectcalico/calico | 3.1k | 741 | 224 | -| Weave | https://github.com/weaveworks/weave/ | 6.2k | 635 | 84 | -| Cilium | https://github.com/cilium/cilium | 10.6k | 1.3k | 352 | - -
+| Canal | https://github.com/projectcalico/canal | 707 | 104 | 20 | +| Flannel | https://github.com/flannel-io/flannel | 8.3k | 2.9k | 225 | +| Calico | https://github.com/projectcalico/calico | 5.1k | 1.2k | 328 | +| Weave | https://github.com/weaveworks/weave/ | 6.5k | 672 | 87 | +| Cilium | https://github.com/cilium/cilium | 17.1k | 2.5k | 677 | ## Which CNI Provider Should I Use? From 8faa4cce4876a805228c24cfb54f9bf8294f7c1f Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Mon, 20 Nov 2023 14:15:47 -0800 Subject: [PATCH 35/65] Update Swagger file --- openapi/swagger.json | 9804 +++++++++++++++++++++--------------------- 1 file changed, 4902 insertions(+), 4902 deletions(-) diff --git a/openapi/swagger.json b/openapi/swagger.json index 8a7709eed766..91df07370c6b 100644 --- a/openapi/swagger.json +++ b/openapi/swagger.json @@ -1,46 +1,144 @@ { - "swagger": "2.0", - "info": { - "title": "Kubernetes", - "version": "v1.27.5+k3s1" - }, - "paths": { - "/apis/management.cattle.io/v3/clusterroletemplatebindings": { - "get": { - "description": "list objects of kind ClusterRoleTemplateBinding", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "listManagementCattleIoV3ClusterRoleTemplateBindingForAllNamespaces", - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.ClusterRoleTemplateBindingList" - } - }, - "401": { - "description": "Unauthorized" + "swagger": "2.0", + "info": { + "title": "Kubernetes", + "version": "v1.27.5+k3s1" + }, + "paths": { + "/apis/management.cattle.io/v3/clusterroletemplatebindings": { + "get": { + "description": "list objects of kind ClusterRoleTemplateBinding", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "listManagementCattleIoV3ClusterRoleTemplateBindingForAllNamespaces", + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.ClusterRoleTemplateBindingList" } }, - "x-kubernetes-action": "list", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "ClusterRoleTemplateBinding", - "version": "v3" + "401": { + "description": "Unauthorized" } }, + "x-kubernetes-action": "list", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "ClusterRoleTemplateBinding", + "version": "v3" + } + }, + "parameters": [ + { + "uniqueItems": true, + "type": "boolean", + "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.", + "name": "allowWatchBookmarks", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", + "name": "continue", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.", + "name": "fieldSelector", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.", + "name": "labelSelector", + "in": "query" + }, + { + "uniqueItems": true, + "type": "integer", + "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", + "name": "limit", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "If 'true', then the output is pretty printed.", + "name": "pretty", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "resourceVersion sets a constraint on what resource versions a request may be served from. 
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersion", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersionMatch", + "in": "query" + }, + { + "uniqueItems": true, + "type": "boolean", + "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is send when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is send when the state is synced at least to the moment\n when request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.", + "name": "sendInitialEvents", + "in": "query" + }, + { + "uniqueItems": true, + "type": "integer", + "description": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.", + "name": "timeoutSeconds", + "in": "query" + }, + { + "uniqueItems": true, + "type": "boolean", + "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. 
Specify resourceVersion.", + "name": "watch", + "in": "query" + } + ] + }, + "/apis/management.cattle.io/v3/globalrolebindings": { + "get": { + "description": "list objects of kind GlobalRoleBinding", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "listManagementCattleIoV3GlobalRoleBinding", "parameters": [ { "uniqueItems": true, @@ -77,13 +175,6 @@ "name": "limit", "in": "query" }, - { - "uniqueItems": true, - "type": "string", - "description": "If 'true', then the output is pretty printed.", - "name": "pretty", - "in": "query" - }, { "uniqueItems": true, "type": "string", @@ -119,3042 +210,710 @@ "name": "watch", "in": "query" } - ] - }, - "/apis/management.cattle.io/v3/globalrolebindings": { - "get": { - "description": "list objects of kind GlobalRoleBinding", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "listManagementCattleIoV3GlobalRoleBinding", - "parameters": [ - { - "uniqueItems": true, - "type": "boolean", - "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.", - "name": "allowWatchBookmarks", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", - "name": "continue", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.", - "name": "fieldSelector", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "A selector to restrict the list of returned objects by their labels. 
Defaults to everything.", - "name": "labelSelector", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", - "name": "limit", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersion", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersionMatch", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. 
The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is send when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is send when the state is synced at least to the moment\n when request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.", - "name": "sendInitialEvents", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.", - "name": "timeoutSeconds", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.", - "name": "watch", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.GlobalRoleBindingList" - } - }, - "401": { - "description": "Unauthorized" + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.GlobalRoleBindingList" } }, - "x-kubernetes-action": "list", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "GlobalRoleBinding", - "version": "v3" + "401": { + "description": "Unauthorized" } }, - "post": { - "description": "create a GlobalRoleBinding", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "createManagementCattleIoV3GlobalRoleBinding", - "parameters": [ - { - "name": "body", - "in": "body", - "required": true, - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.GlobalRoleBinding" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.", - "name": "fieldManager", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", - "name": "fieldValidation", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.GlobalRoleBinding" - } - }, - "201": { - "description": "Created", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.GlobalRoleBinding" - } - }, - "202": { - "description": "Accepted", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.GlobalRoleBinding" - } - }, - "401": { - "description": "Unauthorized" + "x-kubernetes-action": "list", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "GlobalRoleBinding", + "version": "v3" + } + }, + "post": { + "description": "create a GlobalRoleBinding", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "createManagementCattleIoV3GlobalRoleBinding", + "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.GlobalRoleBinding" } }, - "x-kubernetes-action": "post", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "GlobalRoleBinding", - "version": "v3" - } - }, - "delete": { - "description": "delete collection of GlobalRoleBinding", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "deleteManagementCattleIoV3CollectionGlobalRoleBinding", - "parameters": [ - { - "uniqueItems": true, - "type": "boolean", - "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.", - "name": "allowWatchBookmarks", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", - "name": "continue", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.", - "name": "fieldSelector", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.", - "name": "labelSelector", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", - "name": "limit", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersion", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersionMatch", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. 
Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is send when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is send when the state is synced at least to the moment\n when request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.", - "name": "sendInitialEvents", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.", - "name": "timeoutSeconds", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.", - "name": "watch", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" - } - }, - "401": { - "description": "Unauthorized" - } + { + "uniqueItems": true, + "type": "string", + "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", + "name": "dryRun", + "in": "query" }, - "x-kubernetes-action": "deletecollection", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "GlobalRoleBinding", - "version": "v3" - } - }, - "parameters": [ { "uniqueItems": true, "type": "string", - "description": "If 'true', then the output is pretty printed.", - "name": "pretty", + "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.", + "name": "fieldManager", "in": "query" - } - ] - }, - "/apis/management.cattle.io/v3/globalrolebindings/{name}": { - "get": { - "description": "read the specified GlobalRoleBinding", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "readManagementCattleIoV3GlobalRoleBinding", - "parameters": [ - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersion sets a constraint on what resource versions a request may be served from. 
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersion", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.GlobalRoleBinding" - } - }, - "401": { - "description": "Unauthorized" - } }, - "x-kubernetes-action": "get", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "GlobalRoleBinding", - "version": "v3" + { + "uniqueItems": true, + "type": "string", + "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", + "name": "fieldValidation", + "in": "query" } - }, - "put": { - "description": "replace the specified GlobalRoleBinding", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "replaceManagementCattleIoV3GlobalRoleBinding", - "parameters": [ - { - "name": "body", - "in": "body", - "required": true, - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.GlobalRoleBinding" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.", - "name": "fieldManager", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", - "name": "fieldValidation", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.GlobalRoleBinding" - } - }, - "201": { - "description": "Created", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.GlobalRoleBinding" - } - }, - "401": { - "description": "Unauthorized" + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.GlobalRoleBinding" } }, - "x-kubernetes-action": "put", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "GlobalRoleBinding", - "version": "v3" - } - }, - "delete": { - "description": "delete a GlobalRoleBinding", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "deleteManagementCattleIoV3GlobalRoleBinding", - "parameters": [ - { - "name": "body", - "in": "body", - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.", - "name": "gracePeriodSeconds", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.", - "name": "orphanDependents", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.", - "name": "propagationPolicy", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" - } - }, - "202": { - "description": "Accepted", - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" - } - }, - "401": { - "description": "Unauthorized" + "201": { + "description": "Created", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.GlobalRoleBinding" } }, - "x-kubernetes-action": "delete", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "GlobalRoleBinding", - "version": "v3" - } - }, - "patch": { - "description": "partially update the specified GlobalRoleBinding", - "consumes": [ - "application/json-patch+json", - "application/merge-patch+json", - "application/apply-patch+yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "patchManagementCattleIoV3GlobalRoleBinding", - "parameters": [ - { - "name": "body", - "in": "body", - "required": true, - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Patch" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch).", - "name": "fieldManager", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", - "name": "fieldValidation", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "Force is going to \"force\" Apply requests. 
It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests.", - "name": "force", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.GlobalRoleBinding" - } - }, - "401": { - "description": "Unauthorized" + "202": { + "description": "Accepted", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.GlobalRoleBinding" } }, - "x-kubernetes-action": "patch", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "GlobalRoleBinding", - "version": "v3" + "401": { + "description": "Unauthorized" } }, + "x-kubernetes-action": "post", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "GlobalRoleBinding", + "version": "v3" + } + }, + "delete": { + "description": "delete collection of GlobalRoleBinding", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "deleteManagementCattleIoV3CollectionGlobalRoleBinding", "parameters": [ { "uniqueItems": true, - "type": "string", - "description": "name of the GlobalRoleBinding", - "name": "name", - "in": "path", - "required": true + "type": "boolean", + "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.", + "name": "allowWatchBookmarks", + "in": "query" }, { "uniqueItems": true, "type": "string", - "description": "If 'true', then the output is pretty printed.", - "name": "pretty", + "description": "The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", + "name": "continue", "in": "query" - } - ] - }, - "/apis/management.cattle.io/v3/globalroles": { - "get": { - "description": "list objects of kind GlobalRole", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "listManagementCattleIoV3GlobalRole", - "parameters": [ - { - "uniqueItems": true, - "type": "boolean", - "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.", - "name": "allowWatchBookmarks", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", - "name": "continue", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.", - "name": "fieldSelector", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.", - "name": "labelSelector", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", - "name": "limit", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersion", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersionMatch", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is send when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is send when the state is synced at least to the moment\n when request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.", - "name": "sendInitialEvents", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.", - "name": "timeoutSeconds", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. 
Specify resourceVersion.", - "name": "watch", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.GlobalRoleList" - } - }, - "401": { - "description": "Unauthorized" - } - }, - "x-kubernetes-action": "list", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "GlobalRole", - "version": "v3" - } - }, - "post": { - "description": "create a GlobalRole", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "createManagementCattleIoV3GlobalRole", - "parameters": [ - { - "name": "body", - "in": "body", - "required": true, - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.GlobalRole" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.", - "name": "fieldManager", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
- "post": { - "description": "create a GlobalRole", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "createManagementCattleIoV3GlobalRole", - "parameters": [ - { - "name": "body", - "in": "body", - "required": true, - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.GlobalRole" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.", - "name": "fieldManager", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", - "name": "fieldValidation", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.GlobalRole" - } - }, - "201": { - "description": "Created", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.GlobalRole" - } - }, - "202": { - "description": "Accepted", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.GlobalRole" - } - }, - "401": { - "description": "Unauthorized" - } }, - "x-kubernetes-action": "post", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "GlobalRole", - "version": "v3" - } - },
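The create operation above honors the `dryRun` and `fieldValidation` query parameters it documents. A hedged sketch that validates a GlobalRole server-side without persisting it; the URL, token, and role name are hypothetical placeholders, and the `rules` shape is assumed to follow the usual RBAC PolicyRule layout:

```python
import requests

BASE = "https://rancher.example.com"           # hypothetical placeholder
HEADERS = {"Authorization": "Bearer <token>"}  # hypothetical placeholder

role = {
    "apiVersion": "management.cattle.io/v3",
    "kind": "GlobalRole",
    "metadata": {"name": "example-readonly"},  # hypothetical name
    "rules": [
        {"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list"]},
    ],
}

# Dry-run first: the request is fully processed but nothing is persisted,
# and unknown or duplicate fields fail fast with fieldValidation=Strict.
resp = requests.post(
    f"{BASE}/apis/management.cattle.io/v3/globalroles",
    headers=HEADERS,
    json=role,
    params={"dryRun": "All", "fieldValidation": "Strict"},
)
resp.raise_for_status()  # 200/201/202 mean the object passed validation
```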
- "delete": { - "description": "delete collection of GlobalRole", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "deleteManagementCattleIoV3CollectionGlobalRole", - "parameters": [ - { - "uniqueItems": true, - "type": "boolean", - "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.", - "name": "allowWatchBookmarks", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", - "name": "continue", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.", - "name": "fieldSelector", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.", - "name": "labelSelector", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested number of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", - "name": "limit", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersion", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersionMatch", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set.
The semantics of the watch request are as follows: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is sent when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is sent when the state is synced at least to the moment\n when the request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.", - "name": "sendInitialEvents", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.", - "name": "timeoutSeconds", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.", - "name": "watch", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" - } - }, - "401": { - "description": "Unauthorized" - } }, - "x-kubernetes-action": "deletecollection", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "GlobalRole", - "version": "v3" - } - }, - "parameters": [ - { - "uniqueItems": true, - "type": "string", - "description": "If 'true', then the output is pretty printed.", - "name": "pretty", - "in": "query" - } - ] - },
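The `watch` parameter documented above streams add/update/remove notifications as newline-delimited JSON events. A minimal sketch that lists once and then watches from the returned resourceVersion so no intervening change is missed; again, the server URL and token are hypothetical placeholders:

```python
import json
import requests

BASE = "https://rancher.example.com"           # hypothetical placeholder
HEADERS = {"Authorization": "Bearer <token>"}  # hypothetical placeholder
URL = f"{BASE}/apis/management.cattle.io/v3/globalroles"

# List once, then watch from the list's resourceVersion.
initial = requests.get(URL, headers=HEADERS)
initial.raise_for_status()
rv = initial.json()["metadata"]["resourceVersion"]

with requests.get(URL, headers=HEADERS, stream=True,
                  params={"watch": "true", "resourceVersion": rv}) as watch:
    watch.raise_for_status()
    for line in watch.iter_lines():
        if not line:
            continue  # skip keep-alive chunks
        event = json.loads(line)  # {"type": "ADDED"|"MODIFIED"|"DELETED"|"BOOKMARK", "object": {...}}
        print(event["type"], event["object"]["metadata"].get("name"))
```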
- "/apis/management.cattle.io/v3/globalroles/{name}": { - "get": { - "description": "read the specified GlobalRole", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "readManagementCattleIoV3GlobalRole", - "parameters": [ - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersion", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.GlobalRole" - } - }, - "401": { - "description": "Unauthorized" - } }, - "x-kubernetes-action": "get", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "GlobalRole", - "version": "v3" - } - }, - "put": { - "description": "replace the specified GlobalRole", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "replaceManagementCattleIoV3GlobalRole", - "parameters": [ - { - "name": "body", - "in": "body", - "required": true, - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.GlobalRole" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.", - "name": "fieldManager", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", - "name": "fieldValidation", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.GlobalRole" - } - }, - "201": { - "description": "Created", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.GlobalRole" - } - }, - "401": { - "description": "Unauthorized" - } }, - "x-kubernetes-action": "put", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "GlobalRole", - "version": "v3" + { + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.", + "name": "fieldSelector", + "in": "query" }, { + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.", + "name": "labelSelector", + "in": "query" }, { + "uniqueItems": true, + "type": "integer", + "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results.
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", + "name": "limit", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersion", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersionMatch", + "in": "query" }, - "x-kubernetes-action": "put", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "GlobalRole", - "version": "v3" + { + "uniqueItems": true, + "type": "boolean", + "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. 
The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is send when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is send when the state is synced at least to the moment\n when request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.", + "name": "sendInitialEvents", + "in": "query" + }, + { + "uniqueItems": true, + "type": "integer", + "description": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.", + "name": "timeoutSeconds", + "in": "query" + }, + { + "uniqueItems": true, + "type": "boolean", + "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.", + "name": "watch", + "in": "query" } - }, - "delete": { - "description": "delete a GlobalRole", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "deleteManagementCattleIoV3GlobalRole", - "parameters": [ - { - "name": "body", - "in": "body", - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.", - "name": "gracePeriodSeconds", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.", - "name": "orphanDependents", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.", - "name": "propagationPolicy", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" - } - }, - "202": { - "description": "Accepted", - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" - } - }, - "401": { - "description": "Unauthorized" + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" } }, - "x-kubernetes-action": "delete", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "GlobalRole", - "version": "v3" + "401": { + "description": "Unauthorized" } }, - "patch": { - "description": "partially update the specified GlobalRole", - "consumes": [ - "application/json-patch+json", - "application/merge-patch+json", - "application/apply-patch+yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "patchManagementCattleIoV3GlobalRole", - "parameters": [ - { - "name": "body", - "in": "body", - "required": true, - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Patch" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch).", - "name": "fieldManager", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", - "name": "fieldValidation", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "Force is going to \"force\" Apply requests. 
It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests.", - "name": "force", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.GlobalRole" - } - }, - "401": { - "description": "Unauthorized" - } }, - "x-kubernetes-action": "patch", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "GlobalRole", - "version": "v3" - } - }, + "x-kubernetes-action": "deletecollection", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "GlobalRoleBinding", + "version": "v3" + } + }, + "parameters": [ + { + "uniqueItems": true, + "type": "string", + "description": "If 'true', then the output is pretty printed.", + "name": "pretty", + "in": "query" + } + ] + }, + "/apis/management.cattle.io/v3/globalrolebindings/{name}": { + "get": { + "description": "read the specified GlobalRoleBinding", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "readManagementCattleIoV3GlobalRoleBinding", + "parameters": [ + { + "uniqueItems": true, + "type": "string", + "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersion", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.GlobalRoleBinding" + } + }, + "401": { + "description": "Unauthorized" + } }, + "x-kubernetes-action": "get", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "GlobalRoleBinding", + "version": "v3" + } + }, + "put": { + "description": "replace the specified GlobalRoleBinding", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "replaceManagementCattleIoV3GlobalRoleBinding", + "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.GlobalRoleBinding" + } + }, + { + "uniqueItems": true, + "type": "string", + "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", + "name": "dryRun", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.", + "name": "fieldManager", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields.
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", + "name": "fieldValidation", "in": "query" } - ] - }, - "/apis/management.cattle.io/v3/namespaces/{namespace}/clusterroletemplatebindings": { - "get": { - "description": "list objects of kind ClusterRoleTemplateBinding", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "listManagementCattleIoV3NamespacedClusterRoleTemplateBinding", - "parameters": [ - { - "uniqueItems": true, - "type": "boolean", - "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.", - "name": "allowWatchBookmarks", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", - "name": "continue", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.", - "name": "fieldSelector", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "A selector to restrict the list of returned objects by their labels. 
Defaults to everything.", - "name": "labelSelector", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", - "name": "limit", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersion", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersionMatch", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. 
The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is send when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is send when the state is synced at least to the moment\n when request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.", - "name": "sendInitialEvents", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.", - "name": "timeoutSeconds", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.", - "name": "watch", - "in": "query" + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.GlobalRoleBinding" } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.ClusterRoleTemplateBindingList" - } - }, - "401": { - "description": "Unauthorized" + }, + "201": { + "description": "Created", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.GlobalRoleBinding" } }, - "x-kubernetes-action": "list", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "ClusterRoleTemplateBinding", - "version": "v3" + "401": { + "description": "Unauthorized" } }, - "post": { - "description": "create a ClusterRoleTemplateBinding", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "createManagementCattleIoV3NamespacedClusterRoleTemplateBinding", - "parameters": [ - { - "name": "body", - "in": "body", - "required": true, - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.ClusterRoleTemplateBinding" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.", - "name": "fieldManager", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", - "name": "fieldValidation", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.ClusterRoleTemplateBinding" - } - }, - "201": { - "description": "Created", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.ClusterRoleTemplateBinding" - } - }, - "202": { - "description": "Accepted", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.ClusterRoleTemplateBinding" - } - }, - "401": { - "description": "Unauthorized" + "x-kubernetes-action": "put", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "GlobalRoleBinding", + "version": "v3" + } + }, + "delete": { + "description": "delete a GlobalRoleBinding", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "deleteManagementCattleIoV3GlobalRoleBinding", + "parameters": [ + { + "name": "body", + "in": "body", + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions" } }, - "x-kubernetes-action": "post", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "ClusterRoleTemplateBinding", - "version": "v3" + { + "uniqueItems": true, + "type": "string", + "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", + "name": "dryRun", + "in": "query" + }, + { + "uniqueItems": true, + "type": "integer", + "description": "The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.", + "name": "gracePeriodSeconds", + "in": "query" + }, + { + "uniqueItems": true, + "type": "boolean", + "description": "Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.", + "name": "orphanDependents", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. 
The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.", + "name": "propagationPolicy", + "in": "query" } - }, - "delete": { - "description": "delete collection of ClusterRoleTemplateBinding", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "deleteManagementCattleIoV3CollectionNamespacedClusterRoleTemplateBinding", - "parameters": [ - { - "uniqueItems": true, - "type": "boolean", - "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.", - "name": "allowWatchBookmarks", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", - "name": "continue", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.", - "name": "fieldSelector", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.", - "name": "labelSelector", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", - "name": "limit", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersion", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersionMatch", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is send when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is send when the state is synced at least to the moment\n when request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.", - "name": "sendInitialEvents", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "Timeout for the list/watch call. 
This limits the duration of the call, regardless of any activity or inactivity.", - "name": "timeoutSeconds", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.", - "name": "watch", - "in": "query" + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" - } - }, - "401": { - "description": "Unauthorized" + }, + "202": { + "description": "Accepted", + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" } }, - "x-kubernetes-action": "deletecollection", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "ClusterRoleTemplateBinding", - "version": "v3" + "401": { + "description": "Unauthorized" } }, + "x-kubernetes-action": "delete", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "GlobalRoleBinding", + "version": "v3" + } + }, + "patch": { + "description": "partially update the specified GlobalRoleBinding", + "consumes": [ + "application/json-patch+json", + "application/merge-patch+json", + "application/apply-patch+yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "patchManagementCattleIoV3GlobalRoleBinding", "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Patch" + } + }, + { + "uniqueItems": true, + "type": "string", + "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", + "name": "dryRun", + "in": "query" + }, { "uniqueItems": true, "type": "string", - "description": "object name and auth scope, such as for teams and projects", - "name": "namespace", - "in": "path", - "required": true + "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch).", + "name": "fieldManager", + "in": "query" }, { "uniqueItems": true, "type": "string", - "description": "If 'true', then the output is pretty printed.", - "name": "pretty", + "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", + "name": "fieldValidation", + "in": "query" + }, + { + "uniqueItems": true, + "type": "boolean", + "description": "Force is going to \"force\" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests.", + "name": "force", "in": "query" } - ] - }, - "/apis/management.cattle.io/v3/namespaces/{namespace}/clusterroletemplatebindings/{name}": { - "get": { - "description": "read the specified ClusterRoleTemplateBinding", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "readManagementCattleIoV3NamespacedClusterRoleTemplateBinding", - "parameters": [ - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersion", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.ClusterRoleTemplateBinding" - } - }, - "401": { - "description": "Unauthorized" + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.GlobalRoleBinding" } }, - "x-kubernetes-action": "get", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "ClusterRoleTemplateBinding", - "version": "v3" + "401": { + "description": "Unauthorized" } }, - "put": { - "description": "replace the specified ClusterRoleTemplateBinding", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "replaceManagementCattleIoV3NamespacedClusterRoleTemplateBinding", - "parameters": [ - { - "name": "body", - "in": "body", - "required": true, - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.ClusterRoleTemplateBinding" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.", - "name": "fieldManager", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", - "name": "fieldValidation", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.ClusterRoleTemplateBinding" - } - }, - "201": { - "description": "Created", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.ClusterRoleTemplateBinding" - } - }, - "401": { - "description": "Unauthorized" - } - }, - "x-kubernetes-action": "put", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "ClusterRoleTemplateBinding", - "version": "v3" - } - }, - "delete": { - "description": "delete a ClusterRoleTemplateBinding", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "deleteManagementCattleIoV3NamespacedClusterRoleTemplateBinding", - "parameters": [ - { - "name": "body", - "in": "body", - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.", - "name": "gracePeriodSeconds", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.", - "name": "orphanDependents", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.", - "name": "propagationPolicy", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" - } - }, - "202": { - "description": "Accepted", - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" - } - }, - "401": { - "description": "Unauthorized" - } - }, - "x-kubernetes-action": "delete", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "ClusterRoleTemplateBinding", - "version": "v3" - } - }, - "patch": { - "description": "partially update the specified ClusterRoleTemplateBinding", - "consumes": [ - "application/json-patch+json", - "application/merge-patch+json", - "application/apply-patch+yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "patchManagementCattleIoV3NamespacedClusterRoleTemplateBinding", - "parameters": [ - { - "name": "body", - "in": "body", - "required": true, - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Patch" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch).", - "name": "fieldManager", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", - "name": "fieldValidation", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "Force is going to \"force\" Apply requests. It means user will re-acquire conflicting fields owned by other people. 
Force flag must be unset for non-apply patch requests.", - "name": "force", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.ClusterRoleTemplateBinding" - } - }, - "401": { - "description": "Unauthorized" - } - }, - "x-kubernetes-action": "patch", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "ClusterRoleTemplateBinding", - "version": "v3" - } + "x-kubernetes-action": "patch", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "GlobalRoleBinding", + "version": "v3" + } + }, + "parameters": [ + { + "uniqueItems": true, + "type": "string", + "description": "name of the GlobalRoleBinding", + "name": "name", + "in": "path", + "required": true }, + { + "uniqueItems": true, + "type": "string", + "description": "If 'true', then the output is pretty printed.", + "name": "pretty", + "in": "query" + } + ] + }, + "/apis/management.cattle.io/v3/globalroles": { + "get": { + "description": "list objects of kind GlobalRole", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "listManagementCattleIoV3GlobalRole", "parameters": [ { "uniqueItems": true, - "type": "string", - "description": "name of the ClusterRoleTemplateBinding", - "name": "name", - "in": "path", - "required": true + "type": "boolean", + "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.", + "name": "allowWatchBookmarks", + "in": "query" }, { "uniqueItems": true, "type": "string", - "description": "object name and auth scope, such as for teams and projects", - "name": "namespace", - "in": "path", - "required": true + "description": "The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", + "name": "continue", + "in": "query" }, { "uniqueItems": true, "type": "string", - "description": "If 'true', then the output is pretty printed.", - "name": "pretty", + "description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.", + "name": "fieldSelector", "in": "query" - } - ] - }, - "/apis/management.cattle.io/v3/namespaces/{namespace}/projectroletemplatebindings": { - "get": { - "description": "list objects of kind ProjectRoleTemplateBinding", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "listManagementCattleIoV3NamespacedProjectRoleTemplateBinding", - "parameters": [ - { - "uniqueItems": true, - "type": "boolean", - "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.", - "name": "allowWatchBookmarks", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", - "name": "continue", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.", - "name": "fieldSelector", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.", - "name": "labelSelector", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", - "name": "limit", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersion", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersionMatch", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is send when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is send when the state is synced at least to the moment\n when request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.", - "name": "sendInitialEvents", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "Timeout for the list/watch call. 
This limits the duration of the call, regardless of any activity or inactivity.", - "name": "timeoutSeconds", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.", - "name": "watch", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.ProjectRoleTemplateBindingList" - } - }, - "401": { - "description": "Unauthorized" - } }, - "x-kubernetes-action": "list", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "ProjectRoleTemplateBinding", - "version": "v3" - } - }, - "post": { - "description": "create a ProjectRoleTemplateBinding", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "createManagementCattleIoV3NamespacedProjectRoleTemplateBinding", - "parameters": [ - { - "name": "body", - "in": "body", - "required": true, - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.ProjectRoleTemplateBinding" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.", - "name": "fieldManager", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered.", - "name": "fieldValidation", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.ProjectRoleTemplateBinding" - } - }, - "201": { - "description": "Created", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.ProjectRoleTemplateBinding" - } - }, - "202": { - "description": "Accepted", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.ProjectRoleTemplateBinding" - } - }, - "401": { - "description": "Unauthorized" - } + { + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.", + "name": "labelSelector", + "in": "query" }, - "x-kubernetes-action": "post", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "ProjectRoleTemplateBinding", - "version": "v3" - } - }, - "delete": { - "description": "delete collection of ProjectRoleTemplateBinding", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "deleteManagementCattleIoV3CollectionNamespacedProjectRoleTemplateBinding", - "parameters": [ - { - "uniqueItems": true, - "type": "boolean", - "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.", - "name": "allowWatchBookmarks", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", - "name": "continue", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "A selector to restrict the list of returned objects by their fields. 
Defaults to everything.", - "name": "fieldSelector", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.", - "name": "labelSelector", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", - "name": "limit", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersion", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersionMatch", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. 
The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is send when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is send when the state is synced at least to the moment\n when request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.", - "name": "sendInitialEvents", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.", - "name": "timeoutSeconds", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.", - "name": "watch", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" - } - }, - "401": { - "description": "Unauthorized" - } + { + "uniqueItems": true, + "type": "integer", + "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", + "name": "limit", + "in": "query" }, - "x-kubernetes-action": "deletecollection", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "ProjectRoleTemplateBinding", - "version": "v3" - } - }, - "parameters": [ { "uniqueItems": true, "type": "string", - "description": "object name and auth scope, such as for teams and projects", - "name": "namespace", - "in": "path", - "required": true + "description": "resourceVersion sets a constraint on what resource versions a request may be served from. 
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersion", + "in": "query" }, { "uniqueItems": true, "type": "string", - "description": "If 'true', then the output is pretty printed.", - "name": "pretty", + "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersionMatch", "in": "query" - } - ] - }, - "/apis/management.cattle.io/v3/namespaces/{namespace}/projectroletemplatebindings/{name}": { - "get": { - "description": "read the specified ProjectRoleTemplateBinding", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "readManagementCattleIoV3NamespacedProjectRoleTemplateBinding", - "parameters": [ - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersion", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.ProjectRoleTemplateBinding" - } - }, - "401": { - "description": "Unauthorized" - } }, - "x-kubernetes-action": "get", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "ProjectRoleTemplateBinding", - "version": "v3" - } - }, - "put": { - "description": "replace the specified ProjectRoleTemplateBinding", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "replaceManagementCattleIoV3NamespacedProjectRoleTemplateBinding", - "parameters": [ - { - "name": "body", - "in": "body", - "required": true, - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.ProjectRoleTemplateBinding" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.", - "name": "fieldManager", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", - "name": "fieldValidation", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.ProjectRoleTemplateBinding" - } - }, - "201": { - "description": "Created", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.ProjectRoleTemplateBinding" - } - }, - "401": { - "description": "Unauthorized" - } + { + "uniqueItems": true, + "type": "boolean", + "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is send when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is send when the state is synced at least to the moment\n when request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.", + "name": "sendInitialEvents", + "in": "query" }, - "x-kubernetes-action": "put", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "ProjectRoleTemplateBinding", - "version": "v3" - } - }, - "delete": { - "description": "delete a ProjectRoleTemplateBinding", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "deleteManagementCattleIoV3NamespacedProjectRoleTemplateBinding", - "parameters": [ - { - "name": "body", - "in": "body", - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.", - "name": "gracePeriodSeconds", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.", - "name": "orphanDependents", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.", - "name": "propagationPolicy", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" - } - }, - "202": { - "description": "Accepted", - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" - } - }, - "401": { - "description": "Unauthorized" - } + { + "uniqueItems": true, + "type": "integer", + "description": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.", + "name": "timeoutSeconds", + "in": "query" }, - "x-kubernetes-action": "delete", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "ProjectRoleTemplateBinding", - "version": "v3" + { + "uniqueItems": true, + "type": "boolean", + "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.", + "name": "watch", + "in": "query" } - }, - "patch": { - "description": "partially update the specified ProjectRoleTemplateBinding", - "consumes": [ - "application/json-patch+json", - "application/merge-patch+json", - "application/apply-patch+yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "patchManagementCattleIoV3NamespacedProjectRoleTemplateBinding", - "parameters": [ - { - "name": "body", - "in": "body", - "required": true, - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Patch" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch).", - "name": "fieldManager", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", - "name": "fieldValidation", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "Force is going to \"force\" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests.", - "name": "force", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.ProjectRoleTemplateBinding" - } - }, - "401": { - "description": "Unauthorized" + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.GlobalRoleList" } }, - "x-kubernetes-action": "patch", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "ProjectRoleTemplateBinding", - "version": "v3" + "401": { + "description": "Unauthorized" } }, + "x-kubernetes-action": "list", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "GlobalRole", + "version": "v3" + } + }, + "post": { + "description": "create a GlobalRole", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "createManagementCattleIoV3GlobalRole", "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.GlobalRole" + } + }, { "uniqueItems": true, "type": "string", - "description": "name of the ProjectRoleTemplateBinding", - "name": "name", - "in": "path", - "required": true + "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed", + "name": "dryRun", + "in": "query" }, { "uniqueItems": true, "type": "string", - "description": "object name and auth scope, such as for teams and projects", - "name": "namespace", - "in": "path", - "required": true + "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.", + "name": "fieldManager", + "in": "query" }, { "uniqueItems": true, "type": "string", - "description": "If 'true', then the output is pretty printed.", - "name": "pretty", + "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", + "name": "fieldValidation", "in": "query" } - ] - }, - "/apis/management.cattle.io/v3/namespaces/{namespace}/projects": { - "get": { - "description": "list objects of kind Project", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "listManagementCattleIoV3NamespacedProject", - "parameters": [ - { - "uniqueItems": true, - "type": "boolean", - "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.", - "name": "allowWatchBookmarks", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", - "name": "continue", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.", - "name": "fieldSelector", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.", - "name": "labelSelector", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", - "name": "limit", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersion", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersionMatch", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. 
Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is send when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is send when the state is synced at least to the moment\n when request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.", - "name": "sendInitialEvents", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.", - "name": "timeoutSeconds", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.", - "name": "watch", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.ProjectList" - } - }, - "401": { - "description": "Unauthorized" - } - }, - "x-kubernetes-action": "list", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "Project", - "version": "v3" - } - }, - "post": { - "description": "create a Project", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "createManagementCattleIoV3NamespacedProject", - "parameters": [ - { - "name": "body", - "in": "body", - "required": true, - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.Project" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.", - "name": "fieldManager", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", - "name": "fieldValidation", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.Project" - } - }, - "201": { - "description": "Created", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.Project" - } - }, - "202": { - "description": "Accepted", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.Project" - } - }, - "401": { - "description": "Unauthorized" - } - }, - "x-kubernetes-action": "post", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "Project", - "version": "v3" - } - }, - "delete": { - "description": "delete collection of Project", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "deleteManagementCattleIoV3CollectionNamespacedProject", - "parameters": [ - { - "uniqueItems": true, - "type": "boolean", - "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.", - "name": "allowWatchBookmarks", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", - "name": "continue", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.", - "name": "fieldSelector", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.", - "name": "labelSelector", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", - "name": "limit", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersion", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersionMatch", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. 
The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is send when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is send when the state is synced at least to the moment\n when request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.", - "name": "sendInitialEvents", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.", - "name": "timeoutSeconds", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.", - "name": "watch", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" - } - }, - "401": { - "description": "Unauthorized" - } - }, - "x-kubernetes-action": "deletecollection", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "Project", - "version": "v3" - } - }, - "parameters": [ - { - "uniqueItems": true, - "type": "string", - "description": "object name and auth scope, such as for teams and projects", - "name": "namespace", - "in": "path", - "required": true - }, - { - "uniqueItems": true, - "type": "string", - "description": "If 'true', then the output is pretty printed.", - "name": "pretty", - "in": "query" - } - ] - }, - "/apis/management.cattle.io/v3/namespaces/{namespace}/projects/{name}": { - "get": { - "description": "read the specified Project", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "readManagementCattleIoV3NamespacedProject", - "parameters": [ - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersion sets a constraint on what resource versions a request may be served from. 
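To make the `sendInitialEvents` semantics above concrete: a sketch of a streaming watch that first replays current state as synthetic events, waits for the closing "Bookmark", then keeps following live changes. Server address, token, and namespace are hypothetical.

```python
import json
import requests

BASE = "https://kubernetes.example.com"  # hypothetical
HEADERS = {"Authorization": "Bearer <token>"}

# With NotOlderThan and no resourceVersion this acts as a consistent read:
# synthetic ADDED events replay the current collection, then a BOOKMARK
# (annotated "k8s.io/initial-events-end": "true") marks the end of the replay.
params = {
    "watch": "true",
    "sendInitialEvents": "true",
    "resourceVersionMatch": "NotOlderThan",
    "allowWatchBookmarks": "true",
}
resp = requests.get(
    f"{BASE}/apis/management.cattle.io/v3/namespaces/c-m-abc123/projects",
    headers=HEADERS, params=params, stream=True,
)
for line in resp.iter_lines():
    if not line:
        continue  # skip keep-alive blank lines
    event = json.loads(line)
    print(event["type"], event["object"]["metadata"].get("name"))
    if event["type"] == "BOOKMARK":
        print("-- initial state fully delivered; subsequent events are live --")
```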
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersion", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.Project" - } - }, - "401": { - "description": "Unauthorized" - } - }, - "x-kubernetes-action": "get", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "Project", - "version": "v3" - } - }, - "put": { - "description": "replace the specified Project", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "replaceManagementCattleIoV3NamespacedProject", - "parameters": [ - { - "name": "body", - "in": "body", - "required": true, - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.Project" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.", - "name": "fieldManager", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
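A brief aside on the `resourceVersion` parameter of the single-object read above: leaving it unset yields a quorum read of the most recent state, while `resourceVersion="0"` allows the API server to answer from its watch cache (cheaper, possibly slightly stale). This is standard Kubernetes semantics rather than anything specific to this spec; names below are hypothetical.

```python
import requests

BASE = "https://kubernetes.example.com"  # hypothetical
HEADERS = {"Authorization": "Bearer <token>"}
url = f"{BASE}/apis/management.cattle.io/v3/namespaces/c-m-abc123/projects/p-xyz42"

# Unset resourceVersion = most recent state (quorum read);
# resourceVersion="0" = any cached state the server has.
fresh = requests.get(url, headers=HEADERS).json()
cached = requests.get(url, headers=HEADERS, params={"resourceVersion": "0"}).json()
print(fresh["metadata"]["resourceVersion"], cached["metadata"]["resourceVersion"])
```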
The error returned from the server will contain all unknown and duplicate fields encountered.", - "name": "fieldValidation", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.Project" - } - }, - "201": { - "description": "Created", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.Project" - } - }, - "401": { - "description": "Unauthorized" + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.GlobalRole" } }, - "x-kubernetes-action": "put", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "Project", - "version": "v3" - } - }, - "delete": { - "description": "delete a Project", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "deleteManagementCattleIoV3NamespacedProject", - "parameters": [ - { - "name": "body", - "in": "body", - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.", - "name": "gracePeriodSeconds", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.", - "name": "orphanDependents", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.", - "name": "propagationPolicy", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" - } - }, - "202": { - "description": "Accepted", - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" - } - }, - "401": { - "description": "Unauthorized" + "201": { + "description": "Created", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.GlobalRole" } }, - "x-kubernetes-action": "delete", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "Project", - "version": "v3" - } - }, - "patch": { - "description": "partially update the specified Project", - "consumes": [ - "application/json-patch+json", - "application/merge-patch+json", - "application/apply-patch+yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "patchManagementCattleIoV3NamespacedProject", - "parameters": [ - { - "name": "body", - "in": "body", - "required": true, - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Patch" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch).", - "name": "fieldManager", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", - "name": "fieldValidation", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "Force is going to \"force\" Apply requests. It means user will re-acquire conflicting fields owned by other people. 
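The deletion options just listed (grace period, propagation policy) can be passed as query parameters rather than a DeleteOptions body. A minimal sketch of a foreground cascading delete of a Project; the server address, token, and object names are hypothetical.

```python
import requests

BASE = "https://kubernetes.example.com"  # hypothetical
HEADERS = {"Authorization": "Bearer <token>"}

# Foreground: the Project lingers with a deletion timestamp until the garbage
# collector removes its dependents; Background returns immediately and cleans
# up asynchronously; Orphan leaves dependents in place.
r = requests.delete(
    f"{BASE}/apis/management.cattle.io/v3/namespaces/c-m-abc123/projects/p-xyz42",
    headers=HEADERS,
    params={"propagationPolicy": "Foreground", "gracePeriodSeconds": 0},
)
print(r.status_code, r.json().get("status"))  # 200 or 202 with a Status object
```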
Force flag must be unset for non-apply patch requests.", - "name": "force", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.Project" - } - }, - "401": { - "description": "Unauthorized" + "202": { + "description": "Accepted", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.GlobalRole" } }, - "x-kubernetes-action": "patch", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "Project", - "version": "v3" + "401": { + "description": "Unauthorized" } }, - "parameters": [ - { - "uniqueItems": true, - "type": "string", - "description": "name of the Project", - "name": "name", - "in": "path", - "required": true - }, - { - "uniqueItems": true, - "type": "string", - "description": "object name and auth scope, such as for teams and projects", - "name": "namespace", - "in": "path", - "required": true - }, - { - "uniqueItems": true, - "type": "string", - "description": "If 'true', then the output is pretty printed.", - "name": "pretty", - "in": "query" - } - ] + "x-kubernetes-action": "post", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "GlobalRole", + "version": "v3" + } }, - "/apis/management.cattle.io/v3/projectroletemplatebindings": { - "get": { - "description": "list objects of kind ProjectRoleTemplateBinding", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "listManagementCattleIoV3ProjectRoleTemplateBindingForAllNamespaces", - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.ProjectRoleTemplateBindingList" - } - }, - "401": { - "description": "Unauthorized" - } - }, - "x-kubernetes-action": "list", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "ProjectRoleTemplateBinding", - "version": "v3" - } - }, + "delete": { + "description": "delete collection of GlobalRole", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "deleteManagementCattleIoV3CollectionGlobalRole", "parameters": [ { "uniqueItems": true, @@ -3191,13 +950,6 @@ "name": "limit", "in": "query" }, - { - "uniqueItems": true, - "type": "string", - "description": "If 'true', then the output is pretty printed.", - "name": "pretty", - "in": "query" - }, { "uniqueItems": true, "type": "string", @@ -3233,2301 +985,4549 @@ "name": "watch", "in": "query" } - ] + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" + } + }, + "401": { + "description": "Unauthorized" + } + }, + "x-kubernetes-action": "deletecollection", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "GlobalRole", + "version": "v3" + } }, - "/apis/management.cattle.io/v3/projects": { - "get": { - "description": "list objects of kind Project", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "listManagementCattleIoV3ProjectForAllNamespaces", - "responses": { - "200": { - "description": "OK", - 
"schema": { - "$ref": "#/definitions/io.cattle.management.v3.ProjectList" - } - }, - "401": { - "description": "Unauthorized" + "parameters": [ + { + "uniqueItems": true, + "type": "string", + "description": "If 'true', then the output is pretty printed.", + "name": "pretty", + "in": "query" + } + ] + }, + "/apis/management.cattle.io/v3/globalroles/{name}": { + "get": { + "description": "read the specified GlobalRole", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "readManagementCattleIoV3GlobalRole", + "parameters": [ + { + "uniqueItems": true, + "type": "string", + "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersion", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.GlobalRole" } }, - "x-kubernetes-action": "list", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "Project", - "version": "v3" + "401": { + "description": "Unauthorized" } }, + "x-kubernetes-action": "get", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "GlobalRole", + "version": "v3" + } + }, + "put": { + "description": "replace the specified GlobalRole", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "replaceManagementCattleIoV3GlobalRole", "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.GlobalRole" + } + }, { "uniqueItems": true, - "type": "boolean", - "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.", - "name": "allowWatchBookmarks", + "type": "string", + "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", + "name": "dryRun", "in": "query" }, { "uniqueItems": true, "type": "string", - "description": "The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", - "name": "continue", + "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.", + "name": "fieldManager", "in": "query" }, { "uniqueItems": true, "type": "string", - "description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.", - "name": "fieldSelector", + "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", + "name": "fieldValidation", "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.GlobalRole" + } + }, + "201": { + "description": "Created", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.GlobalRole" + } + }, + "401": { + "description": "Unauthorized" + } + }, + "x-kubernetes-action": "put", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "GlobalRole", + "version": "v3" + } + }, + "delete": { + "description": "delete a GlobalRole", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "deleteManagementCattleIoV3GlobalRole", + "parameters": [ + { + "name": "body", + "in": "body", + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions" + } }, { "uniqueItems": true, "type": "string", - "description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.", - "name": "labelSelector", + "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed", + "name": "dryRun", "in": "query" }, { "uniqueItems": true, "type": "integer", - "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", - "name": "limit", + "description": "The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.", + "name": "gracePeriodSeconds", "in": "query" }, { "uniqueItems": true, - "type": "string", - "description": "If 'true', then the output is pretty printed.", - "name": "pretty", + "type": "boolean", + "description": "Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.", + "name": "orphanDependents", "in": "query" }, { "uniqueItems": true, "type": "string", - "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersion", + "description": "Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
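`dryRun=All`, as described above, exercises the full admission and validation chain without persisting anything, which is a cheap way to check whether a delete would be admitted (for instance, by a validating webhook). A sketch against the cluster-scoped GlobalRole endpoint; the role name and server details are hypothetical.

```python
import requests

BASE = "https://kubernetes.example.com"  # hypothetical
HEADERS = {"Authorization": "Bearer <token>"}

# Nothing is persisted: the server processes the delete through all dry run
# stages and reports what would have happened.
r = requests.delete(
    f"{BASE}/apis/management.cattle.io/v3/globalroles/gr-example",  # hypothetical name
    headers=HEADERS,
    params={"dryRun": "All"},
)
print(r.status_code, r.reason)
```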
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.", + "name": "propagationPolicy", "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" + } + }, + "202": { + "description": "Accepted", + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" + } + }, + "401": { + "description": "Unauthorized" + } + }, + "x-kubernetes-action": "delete", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "GlobalRole", + "version": "v3" + } + }, + "patch": { + "description": "partially update the specified GlobalRole", + "consumes": [ + "application/json-patch+json", + "application/merge-patch+json", + "application/apply-patch+yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "patchManagementCattleIoV3GlobalRole", + "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Patch" + } }, { "uniqueItems": true, "type": "string", - "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersionMatch", + "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", + "name": "dryRun", "in": "query" }, { "uniqueItems": true, - "type": "boolean", - "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. 
The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is send when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is send when the state is synced at least to the moment\n when request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.", - "name": "sendInitialEvents", + "type": "string", + "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch).", + "name": "fieldManager", "in": "query" }, { "uniqueItems": true, - "type": "integer", - "description": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.", - "name": "timeoutSeconds", + "type": "string", + "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", + "name": "fieldValidation", "in": "query" }, { "uniqueItems": true, "type": "boolean", - "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.", - "name": "watch", + "description": "Force is going to \"force\" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests.", + "name": "force", "in": "query" } - ] - }, - "/apis/management.cattle.io/v3/roletemplates": { - "get": { - "description": "list objects of kind RoleTemplate", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "listManagementCattleIoV3RoleTemplate", - "parameters": [ - { - "uniqueItems": true, - "type": "boolean", - "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.", - "name": "allowWatchBookmarks", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", - "name": "continue", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.", - "name": "fieldSelector", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.", - "name": "labelSelector", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", - "name": "limit", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersion", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersionMatch", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is send when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is send when the state is synced at least to the moment\n when request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.", - "name": "sendInitialEvents", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.", - "name": "timeoutSeconds", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. 
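The `fieldSelector`/`labelSelector` parameters above narrow list results server-side. A small sketch filtering the RoleTemplate list; the label keys and values are hypothetical, and field selectors for custom resources are generally limited to `metadata.name` and `metadata.namespace`.

```python
import requests

BASE = "https://kubernetes.example.com"  # hypothetical
HEADERS = {"Authorization": "Bearer <token>"}

params = {
    # Set-based label selector syntax (hypothetical labels).
    "labelSelector": "team in (platform,infra),env!=dev",
    "fieldSelector": "metadata.name!=admin",
}
r = requests.get(f"{BASE}/apis/management.cattle.io/v3/roletemplates",
                 headers=HEADERS, params=params)
for item in r.json().get("items", []):
    print(item["metadata"]["name"])
```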
Specify resourceVersion.", - "name": "watch", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.RoleTemplateList" - } - }, - "401": { - "description": "Unauthorized" - } - }, - "x-kubernetes-action": "list", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "RoleTemplate", - "version": "v3" - } - }, - "post": { - "description": "create a RoleTemplate", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "createManagementCattleIoV3RoleTemplate", - "parameters": [ - { - "name": "body", - "in": "body", - "required": true, - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.RoleTemplate" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.", - "name": "fieldManager", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered.", - "name": "fieldValidation", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.RoleTemplate" - } - }, - "201": { - "description": "Created", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.RoleTemplate" - } - }, - "202": { - "description": "Accepted", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.RoleTemplate" - } - }, - "401": { - "description": "Unauthorized" + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.GlobalRole" } }, - "x-kubernetes-action": "post", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "RoleTemplate", - "version": "v3" + "401": { + "description": "Unauthorized" } }, - "delete": { - "description": "delete collection of RoleTemplate", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "deleteManagementCattleIoV3CollectionRoleTemplate", - "parameters": [ - { - "uniqueItems": true, - "type": "boolean", - "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.", - "name": "allowWatchBookmarks", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", - "name": "continue", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.", - "name": "fieldSelector", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "A selector to restrict the list of returned objects by their labels. 
Defaults to everything.", - "name": "labelSelector", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", - "name": "limit", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersion", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersionMatch", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. 
The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is send when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is send when the state is synced at least to the moment\n when request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.", - "name": "sendInitialEvents", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.", - "name": "timeoutSeconds", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.", - "name": "watch", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" - } - }, - "401": { - "description": "Unauthorized" - } - }, - "x-kubernetes-action": "deletecollection", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "RoleTemplate", - "version": "v3" - } + "x-kubernetes-action": "patch", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "GlobalRole", + "version": "v3" + } + }, + "parameters": [ + { + "uniqueItems": true, + "type": "string", + "description": "name of the GlobalRole", + "name": "name", + "in": "path", + "required": true }, + { + "uniqueItems": true, + "type": "string", + "description": "If 'true', then the output is pretty printed.", + "name": "pretty", + "in": "query" + } + ] + }, + "/apis/management.cattle.io/v3/namespaces/{namespace}/clusterroletemplatebindings": { + "get": { + "description": "list objects of kind ClusterRoleTemplateBinding", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "listManagementCattleIoV3NamespacedClusterRoleTemplateBinding", "parameters": [ + { + "uniqueItems": true, + "type": "boolean", + "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.", + "name": "allowWatchBookmarks", + "in": "query" + }, { "uniqueItems": true, "type": "string", - "description": "If 'true', then the output is pretty printed.", - "name": "pretty", + "description": "The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", + "name": "continue", "in": "query" - } - ] - }, - "/apis/management.cattle.io/v3/roletemplates/{name}": { - "get": { - "description": "read the specified RoleTemplate", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "readManagementCattleIoV3RoleTemplate", - "parameters": [ - { - "uniqueItems": true, - "type": "string", - "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", - "name": "resourceVersion", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.RoleTemplate" - } - }, - "401": { - "description": "Unauthorized" - } }, - "x-kubernetes-action": "get", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "RoleTemplate", - "version": "v3" - } - }, - "put": { - "description": "replace the specified RoleTemplate", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "replaceManagementCattleIoV3RoleTemplate", - "parameters": [ - { - "name": "body", - "in": "body", - "required": true, - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.RoleTemplate" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.", - "name": "fieldManager", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
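The 410 ResourceExpired behavior described above deserves explicit client handling: a continue token typically expires after five to fifteen minutes, so a long-running paginated listing should decide between restarting (consistent) and resuming with the replacement token from the 410 response (inconsistent). A sketch that opts for consistency; endpoint details are hypothetical.

```python
import requests

BASE = "https://kubernetes.example.com"  # hypothetical
HEADERS = {"Authorization": "Bearer <token>"}
URL = f"{BASE}/apis/management.cattle.io/v3/globalroles"

def list_all(limit=100):
    items, token = [], None
    while True:
        params = {"limit": limit}
        if token:
            params["continue"] = token
        r = requests.get(URL, headers=HEADERS, params=params)
        if r.status_code == 410:
            # Token expired. Restarting gives a consistent list; alternatively
            # the 410 Status carries a fresh token that resumes from the next
            # key against a newer snapshot (inconsistent with earlier pages).
            items, token = [], None
            continue
        r.raise_for_status()
        body = r.json()
        items.extend(body["items"])
        token = body["metadata"].get("continue")
        if not token:
            return items

print(len(list_all()))
```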
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", - "name": "fieldValidation", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.RoleTemplate" - } - }, - "201": { - "description": "Created", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.RoleTemplate" - } - }, - "401": { - "description": "Unauthorized" - } + { + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.", + "name": "fieldSelector", + "in": "query" }, - "x-kubernetes-action": "put", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "RoleTemplate", - "version": "v3" - } - }, - "delete": { - "description": "delete a RoleTemplate", - "consumes": [ - "application/json", - "application/yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "deleteManagementCattleIoV3RoleTemplate", - "parameters": [ - { - "name": "body", - "in": "body", - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "integer", - "description": "The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.", - "name": "gracePeriodSeconds", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.", - "name": "orphanDependents", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.", - "name": "propagationPolicy", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" - } - }, - "202": { - "description": "Accepted", - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" - } - }, - "401": { - "description": "Unauthorized" - } + { + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.", + "name": "labelSelector", + "in": "query" }, - "x-kubernetes-action": "delete", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "RoleTemplate", - "version": "v3" - } - }, - "patch": { - "description": "partially update the specified RoleTemplate", - "consumes": [ - "application/json-patch+json", - "application/merge-patch+json", - "application/apply-patch+yaml" - ], - "produces": [ - "application/json", - "application/yaml" - ], - "schemes": [ - "https" - ], - "tags": [ - "managementCattleIo_v3" - ], - "operationId": "patchManagementCattleIoV3RoleTemplate", - "parameters": [ - { - "name": "body", - "in": "body", - "required": true, - "schema": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Patch" - } - }, - { - "uniqueItems": true, - "type": "string", - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", - "name": "dryRun", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch).", - "name": "fieldManager", - "in": "query" - }, - { - "uniqueItems": true, - "type": "string", - "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
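The patch operation above accepts three content types, and the Content-Type header selects the strategy. A sketch contrasting an RFC 6902 JSON Patch with a server-side apply (where `fieldManager` is required and `force` re-acquires fields owned by another manager). The GlobalRole name and the `displayName` field are assumptions for illustration; check them against the definitions section.

```python
import json
import requests

BASE = "https://kubernetes.example.com"  # hypothetical
URL = f"{BASE}/apis/management.cattle.io/v3/globalroles/gr-example"
AUTH = {"Authorization": "Bearer <token>"}

# RFC 6902 JSON Patch: an explicit list of operations.
ops = [{"op": "replace", "path": "/displayName", "value": "Renamed role"}]
requests.patch(
    URL,
    headers={**AUTH, "Content-Type": "application/json-patch+json"},
    data=json.dumps(ops),
).raise_for_status()

# Server-side apply: declarative desired state. JSON is valid YAML, so it can
# be sent under the apply-patch+yaml content type.
desired = json.dumps({
    "apiVersion": "management.cattle.io/v3",
    "kind": "GlobalRole",
    "metadata": {"name": "gr-example"},
    "displayName": "Renamed role",
})
requests.patch(
    URL,
    headers={**AUTH, "Content-Type": "application/apply-patch+yaml"},
    params={"fieldManager": "docs-example", "force": "true"},
    data=desired,
).raise_for_status()
```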
The error returned from the server will contain all unknown and duplicate fields encountered.", - "name": "fieldValidation", - "in": "query" - }, - { - "uniqueItems": true, - "type": "boolean", - "description": "Force is going to \"force\" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests.", - "name": "force", - "in": "query" - } - ], - "responses": { - "200": { - "description": "OK", - "schema": { - "$ref": "#/definitions/io.cattle.management.v3.RoleTemplate" - } - }, - "401": { - "description": "Unauthorized" - } + { + "uniqueItems": true, + "type": "integer", + "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", + "name": "limit", + "in": "query" }, - "x-kubernetes-action": "patch", - "x-kubernetes-group-version-kind": { - "group": "management.cattle.io", - "kind": "RoleTemplate", - "version": "v3" - } - }, - "parameters": [ { "uniqueItems": true, "type": "string", - "description": "name of the RoleTemplate", - "name": "name", - "in": "path", - "required": true + "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersion", + "in": "query" }, { "uniqueItems": true, "type": "string", - "description": "If 'true', then the output is pretty printed.", - "name": "pretty", + "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. 
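The `limit`/`continue` contract described in these list parameters maps directly onto a pagination loop. A sketch using the official Kubernetes Python client; the namespace and page size are assumptions:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig for the Rancher management cluster
api = client.CustomObjectsApi()

kwargs = {"limit": 50}  # page size is arbitrary
while True:
    page = api.list_namespaced_custom_object(
        group="management.cattle.io",
        version="v3",
        namespace="c-m-example",  # hypothetical cluster namespace
        plural="clusterroletemplatebindings",
        **kwargs,
    )
    for item in page["items"]:
        print(item["metadata"]["name"])
    # An empty or absent continue token means the listing is complete.
    token = page["metadata"].get("continue")
    if not token:
        break
    kwargs["_continue"] = token  # the client maps the reserved word `continue` to `_continue`
```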
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersionMatch", "in": "query" - } - ] - } - }, - "definitions": { - "io.cattle.management.v3.ClusterRoleTemplateBinding": { - "description": "ClusterRoleTemplateBinding is the object representing membership of a subject in a cluster with permissions specified by a given role template.", - "type": "object", - "required": [ - "clusterName", - "roleTemplateName" - ], - "properties": { - "apiVersion": { - "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", - "type": "string" - }, - "clusterName": { - "description": "ClusterName is the name of the cluster to which a subject is added. Immutable.", - "type": "string" }, - "groupName": { - "description": "GroupName is the name of the group subject added to the cluster. Immutable.", - "type": "string" + { + "uniqueItems": true, + "type": "boolean", + "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is send when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is send when the state is synced at least to the moment\n when request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.", + "name": "sendInitialEvents", + "in": "query" }, - "groupPrincipalName": { - "description": "GroupPrincipalName is the name of the group principal subject added to the cluster. Immutable.", - "type": "string" + { + "uniqueItems": true, + "type": "integer", + "description": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.", + "name": "timeoutSeconds", + "in": "query" }, - "kind": { - "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", - "type": "string" + { + "uniqueItems": true, + "type": "boolean", + "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.", + "name": "watch", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.ClusterRoleTemplateBindingList" + } }, - "metadata": { - "description": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata", - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta" + "401": { + "description": "Unauthorized" + } + }, + "x-kubernetes-action": "list", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "ClusterRoleTemplateBinding", + "version": "v3" + } + }, + "post": { + "description": "create a ClusterRoleTemplateBinding", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "createManagementCattleIoV3NamespacedClusterRoleTemplateBinding", + "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.ClusterRoleTemplateBinding" + } }, - "roleTemplateName": { - "description": "RoleTemplateName is the name of the role template that defines permissions to perform actions on resources in the cluster. Immutable.", - "type": "string" + { + "uniqueItems": true, + "type": "string", + "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", + "name": "dryRun", + "in": "query" }, - "userName": { - "description": "UserName is the name of the user subject added to the cluster. Immutable.", - "type": "string" + { + "uniqueItems": true, + "type": "string", + "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.", + "name": "fieldManager", + "in": "query" }, - "userPrincipalName": { - "description": "UserPrincipalName is the name of the user principal subject added to the cluster. Immutable.", - "type": "string" - } - }, - "x-kubernetes-group-version-kind": [ { - "group": "management.cattle.io", - "kind": "ClusterRoleTemplateBinding", - "version": "v3" + "uniqueItems": true, + "type": "string", + "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", + "name": "fieldValidation", + "in": "query" } - ] - }, - "io.cattle.management.v3.ClusterRoleTemplateBindingList": { - "description": "ClusterRoleTemplateBindingList is a list of ClusterRoleTemplateBinding", - "type": "object", - "required": [ - "items" - ], - "properties": { - "apiVersion": { - "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", - "type": "string" + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.ClusterRoleTemplateBinding" + } }, - "items": { - "description": "List of clusterroletemplatebindings. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md", - "type": "array", - "items": { + "201": { + "description": "Created", + "schema": { "$ref": "#/definitions/io.cattle.management.v3.ClusterRoleTemplateBinding" } }, - "kind": { - "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", - "type": "string" + "202": { + "description": "Accepted", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.ClusterRoleTemplateBinding" + } }, - "metadata": { - "description": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta" + "401": { + "description": "Unauthorized" } }, - "x-kubernetes-group-version-kind": [ - { - "group": "management.cattle.io", - "kind": "ClusterRoleTemplateBindingList", - "version": "v3" - } - ] + "x-kubernetes-action": "post", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "ClusterRoleTemplateBinding", + "version": "v3" + } }, - "io.cattle.management.v3.GlobalRole": { - "description": "GlobalRole defines rules that can be applied to the local cluster and or every downstream cluster.", - "type": "object", - "properties": { - "apiVersion": { - "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", - "type": "string" - }, - "builtin": { - "description": "Builtin specifies that this GlobalRole was created by Rancher if true. 
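Putting this create operation together with the ClusterRoleTemplateBinding definition elsewhere in the spec, a create call might look like the sketch below. Every name is a placeholder, and the convention that the binding lives in the cluster's own namespace is an assumption:

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

crtb = {
    "apiVersion": "management.cattle.io/v3",
    "kind": "ClusterRoleTemplateBinding",
    "metadata": {"name": "example-crtb", "namespace": "c-m-example"},  # hypothetical
    # clusterName and roleTemplateName are the two required fields; the
    # subject fields (userName here) are immutable once set.
    "clusterName": "c-m-example",
    "roleTemplateName": "cluster-member",
    "userName": "u-abc123",  # hypothetical user
}

created = api.create_namespaced_custom_object(
    group="management.cattle.io",
    version="v3",
    namespace="c-m-example",
    plural="clusterroletemplatebindings",
    body=crtb,
)
print("created:", created["metadata"]["name"])
```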
Immutable.", - "type": "boolean" - }, - "description": { - "description": "Description holds text that describes the resource.", - "type": "string" + "delete": { + "description": "delete collection of ClusterRoleTemplateBinding", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "deleteManagementCattleIoV3CollectionNamespacedClusterRoleTemplateBinding", + "parameters": [ + { + "uniqueItems": true, + "type": "boolean", + "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.", + "name": "allowWatchBookmarks", + "in": "query" }, - "displayName": { - "description": "DisplayName is the human-readable name displayed in the UI for this resource.", - "type": "string" + { + "uniqueItems": true, + "type": "string", + "description": "The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", + "name": "continue", + "in": "query" }, - "inheritedClusterRoles": { - "description": "InheritedClusterRoles are the names of RoleTemplates whose permissions are granted by this GlobalRole in every cluster besides the local cluster. To grant permissions in the local cluster, use the Rules or NamespacedRules fields.", - "type": "array", - "items": { - "type": "string" - } + { + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.", + "name": "fieldSelector", + "in": "query" }, - "kind": { - "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", - "type": "string" + { + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their labels. 
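The GlobalRole properties in this definition (top-level `rules`, `inheritedClusterRoles`, `newUserDefault`, `displayName`) compose into a manifest like the sketch below; the role name and rule contents are invented for illustration:

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

global_role = {
    "apiVersion": "management.cattle.io/v3",
    "kind": "GlobalRole",
    "metadata": {"name": "example-global-role"},  # hypothetical
    "displayName": "Example Global Role",
    "newUserDefault": False,
    # Rules apply to the local cluster only.
    "rules": [
        {
            "apiGroups": ["management.cattle.io"],
            "resources": ["clusters"],
            "verbs": ["get", "list"],
        }
    ],
    # RoleTemplate names whose permissions are granted in every downstream cluster.
    "inheritedClusterRoles": ["cluster-member"],
}

# GlobalRole is cluster-scoped, so the cluster-scoped helper is used.
api.create_cluster_custom_object(
    group="management.cattle.io",
    version="v3",
    plural="globalroles",
    body=global_role,
)
```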
Defaults to everything.", + "name": "labelSelector", + "in": "query" }, - "metadata": { - "description": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata", - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta" + { + "uniqueItems": true, + "type": "integer", + "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", + "name": "limit", + "in": "query" }, - "newUserDefault": { - "description": "NewUserDefault specifies that all new users created should be bound to this GlobalRole if true.", - "type": "boolean" + { + "uniqueItems": true, + "type": "string", + "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersion", + "in": "query" }, - "rules": { - "description": "Rules holds a list of PolicyRules that are applied to the local cluster only.", - "type": "array", - "items": { - "description": "PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to.", - "type": "object", - "required": [ - "verbs" - ], - "properties": { - "apiGroups": { - "description": "APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. \"\" represents the core API group and \"*\" represents all API groups.", - "type": "array", - "items": { - "type": "string" - } - }, - "nonResourceURLs": { - "description": "NonResourceURLs is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path Since non-resource URLs are not namespaced, this field is only applicable for ClusterRoles referenced from a ClusterRoleBinding. 
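The collection delete defined in this operation honors the same field and label selectors as list. A raw-HTTP sketch, since the endpoint, token, and selector values below are all assumptions:

```python
import requests

API_SERVER = "https://rancher.example.com:6443"  # hypothetical
TOKEN = "REDACTED"                               # hypothetical bearer token

# One DELETE on the collection removes every binding matching the selector.
resp = requests.delete(
    f"{API_SERVER}/apis/management.cattle.io/v3"
    "/namespaces/c-m-example/clusterroletemplatebindings",
    params={"labelSelector": "team=platform"},   # hypothetical label
    headers={"Authorization": f"Bearer {TOKEN}"},
    verify="/path/to/ca.crt",                    # hypothetical CA bundle
)
print(resp.status_code)
```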
Rules can either apply to API resources (such as \"pods\" or \"secrets\") or non-resource URL paths (such as \"/api\"), but not both.", - "type": "array", - "items": { - "type": "string" - } - }, - "resourceNames": { - "description": "ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed.", - "type": "array", - "items": { - "type": "string" - } - }, - "resources": { - "description": "Resources is a list of resources this rule applies to. '*' represents all resources.", - "type": "array", - "items": { - "type": "string" - } - }, - "verbs": { - "description": "Verbs is a list of Verbs that apply to ALL the ResourceKinds contained in this rule. '*' represents all verbs.", - "type": "array", - "items": { - "type": "string" - } - } - } + { + "uniqueItems": true, + "type": "string", + "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersionMatch", + "in": "query" + }, + { + "uniqueItems": true, + "type": "boolean", + "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is send when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is send when the state is synced at least to the moment\n when request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.", + "name": "sendInitialEvents", + "in": "query" + }, + { + "uniqueItems": true, + "type": "integer", + "description": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.", + "name": "timeoutSeconds", + "in": "query" + }, + { + "uniqueItems": true, + "type": "boolean", + "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. 
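The `watch` parameter defined for this list endpoint turns the response into a stream of add, update, and remove notifications. A sketch using the Python client's watch helper; the namespace is hypothetical:

```python
from kubernetes import client, config, watch

config.load_kube_config()
api = client.CustomObjectsApi()

w = watch.Watch()
# Each event is a dict with a type (ADDED/MODIFIED/DELETED) and the object.
for event in w.stream(
    api.list_namespaced_custom_object,
    group="management.cattle.io",
    version="v3",
    namespace="c-m-example",  # hypothetical
    plural="clusterroletemplatebindings",
    timeout_seconds=60,       # stop after a minute of streaming
):
    print(event["type"], event["object"]["metadata"]["name"])
w.stop()
```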
Specify resourceVersion.", + "name": "watch", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" } + }, + "401": { + "description": "Unauthorized" } }, - "x-kubernetes-group-version-kind": [ + "x-kubernetes-action": "deletecollection", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "ClusterRoleTemplateBinding", + "version": "v3" + } + }, + "parameters": [ + { + "uniqueItems": true, + "type": "string", + "description": "object name and auth scope, such as for teams and projects", + "name": "namespace", + "in": "path", + "required": true + }, + { + "uniqueItems": true, + "type": "string", + "description": "If 'true', then the output is pretty printed.", + "name": "pretty", + "in": "query" + } + ] + }, + "/apis/management.cattle.io/v3/namespaces/{namespace}/clusterroletemplatebindings/{name}": { + "get": { + "description": "read the specified ClusterRoleTemplateBinding", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "readManagementCattleIoV3NamespacedClusterRoleTemplateBinding", + "parameters": [ { - "group": "management.cattle.io", - "kind": "GlobalRole", - "version": "v3" + "uniqueItems": true, + "type": "string", + "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersion", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.ClusterRoleTemplateBinding" + } + }, + "401": { + "description": "Unauthorized" } - ] + }, + "x-kubernetes-action": "get", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "ClusterRoleTemplateBinding", + "version": "v3" + } }, - "io.cattle.management.v3.GlobalRoleBinding": { - "description": "GlobalRoleBinding binds a given subject user or group to a GlobalRole.", - "type": "object", - "required": [ - "globalRoleName" - ], - "properties": { - "apiVersion": { - "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", - "type": "string" + "put": { + "description": "replace the specified ClusterRoleTemplateBinding", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "replaceManagementCattleIoV3NamespacedClusterRoleTemplateBinding", + "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.ClusterRoleTemplateBinding" + } }, - "globalRoleName": { - "description": "GlobalRoleName is the name of the Global Role that the subject will be bound to. Immutable.", - "type": "string" + { + "uniqueItems": true, + "type": "string", + "description": "When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", + "name": "dryRun", + "in": "query" }, - "groupPrincipalName": { - "description": "GroupPrincipalName is the name of the group principal subject to be bound. Immutable.", - "type": "string" + { + "uniqueItems": true, + "type": "string", + "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.", + "name": "fieldManager", + "in": "query" }, - "kind": { - "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", - "type": "string" + { + "uniqueItems": true, + "type": "string", + "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", + "name": "fieldValidation", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.ClusterRoleTemplateBinding" + } }, - "metadata": { - "description": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata", - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta" + "201": { + "description": "Created", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.ClusterRoleTemplateBinding" + } }, - "userName": { - "description": "UserName is the name of the user subject to be bound. Immutable.", - "type": "string" + "401": { + "description": "Unauthorized" } }, - "x-kubernetes-group-version-kind": [ - { - "group": "management.cattle.io", - "kind": "GlobalRoleBinding", - "version": "v3" - } - ] + "x-kubernetes-action": "put", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "ClusterRoleTemplateBinding", + "version": "v3" + } }, - "io.cattle.management.v3.GlobalRoleBindingList": { - "description": "GlobalRoleBindingList is a list of GlobalRoleBinding", - "type": "object", - "required": [ - "items" - ], - "properties": { - "apiVersion": { - "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
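The `dryRun` parameter on this replace operation lets a client run full validation and admission without persisting anything. A sketch, assuming a recent Python client version that exposes `dry_run` on the custom-objects helpers; all names are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

name, ns = "example-crtb", "c-m-example"  # hypothetical
current = api.get_namespaced_custom_object(
    "management.cattle.io", "v3", ns, "clusterroletemplatebindings", name
)
current["metadata"].setdefault("labels", {})["team"] = "platform"

# dry_run="All" processes every dry-run stage server-side but persists nothing.
api.replace_namespaced_custom_object(
    "management.cattle.io", "v3", ns, "clusterroletemplatebindings", name,
    body=current,
    dry_run="All",
)
```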
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", - "type": "string" - }, - "items": { - "description": "List of globalrolebindings. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md", - "type": "array", - "items": { - "$ref": "#/definitions/io.cattle.management.v3.GlobalRoleBinding" + "delete": { + "description": "delete a ClusterRoleTemplateBinding", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "deleteManagementCattleIoV3NamespacedClusterRoleTemplateBinding", + "parameters": [ + { + "name": "body", + "in": "body", + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions" } }, - "kind": { - "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", - "type": "string" + { + "uniqueItems": true, + "type": "string", + "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", + "name": "dryRun", + "in": "query" }, - "metadata": { - "description": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta" - } - }, - "x-kubernetes-group-version-kind": [ { - "group": "management.cattle.io", - "kind": "GlobalRoleBindingList", - "version": "v3" - } - ] - }, - "io.cattle.management.v3.GlobalRoleList": { - "description": "GlobalRoleList is a list of GlobalRole", - "type": "object", - "required": [ - "items" - ], - "properties": { - "apiVersion": { - "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", - "type": "string" + "uniqueItems": true, + "type": "integer", + "description": "The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.", + "name": "gracePeriodSeconds", + "in": "query" }, - "items": { - "description": "List of globalroles. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md", - "type": "array", - "items": { - "$ref": "#/definitions/io.cattle.management.v3.GlobalRole" + { + "uniqueItems": true, + "type": "boolean", + "description": "Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list. 
Either this field or PropagationPolicy may be set, but not both.", + "name": "orphanDependents", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.", + "name": "propagationPolicy", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" } }, - "kind": { - "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", - "type": "string" + "202": { + "description": "Accepted", + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" + } }, - "metadata": { - "description": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta" + "401": { + "description": "Unauthorized" } }, - "x-kubernetes-group-version-kind": [ + "x-kubernetes-action": "delete", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "ClusterRoleTemplateBinding", + "version": "v3" + } + }, + "patch": { + "description": "partially update the specified ClusterRoleTemplateBinding", + "consumes": [ + "application/json-patch+json", + "application/merge-patch+json", + "application/apply-patch+yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "patchManagementCattleIoV3NamespacedClusterRoleTemplateBinding", + "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Patch" + } + }, + { + "uniqueItems": true, + "type": "string", + "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", + "name": "dryRun", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch).", + "name": "fieldManager", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
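The delete options defined in this operation (grace period, propagation policy) translate directly to a `V1DeleteOptions` body. A sketch with placeholder names:

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

api.delete_namespaced_custom_object(
    group="management.cattle.io",
    version="v3",
    namespace="c-m-example",  # hypothetical
    plural="clusterroletemplatebindings",
    name="example-crtb",      # hypothetical
    body=client.V1DeleteOptions(
        grace_period_seconds=0,           # zero means delete immediately
        propagation_policy="Foreground",  # cascade: dependents are deleted first
    ),
)
```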
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", + "name": "fieldValidation", + "in": "query" + }, { - "group": "management.cattle.io", - "kind": "GlobalRoleList", - "version": "v3" + "uniqueItems": true, + "type": "boolean", + "description": "Force is going to \"force\" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests.", + "name": "force", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.ClusterRoleTemplateBinding" + } + }, + "401": { + "description": "Unauthorized" } - ] + }, + "x-kubernetes-action": "patch", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "ClusterRoleTemplateBinding", + "version": "v3" + } }, - "io.cattle.management.v3.Project": { - "description": "Project is a group of namespaces. Projects are used to create a multi-tenant environment within a Kubernetes cluster by managing namespace operations, such as role assignments or quotas, as a group.", - "type": "object", - "properties": { - "apiVersion": { - "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", - "type": "string" + "parameters": [ + { + "uniqueItems": true, + "type": "string", + "description": "name of the ClusterRoleTemplateBinding", + "name": "name", + "in": "path", + "required": true + }, + { + "uniqueItems": true, + "type": "string", + "description": "object name and auth scope, such as for teams and projects", + "name": "namespace", + "in": "path", + "required": true + }, + { + "uniqueItems": true, + "type": "string", + "description": "If 'true', then the output is pretty printed.", + "name": "pretty", + "in": "query" + } + ] + }, + "/apis/management.cattle.io/v3/namespaces/{namespace}/projectroletemplatebindings": { + "get": { + "description": "list objects of kind ProjectRoleTemplateBinding", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "listManagementCattleIoV3NamespacedProjectRoleTemplateBinding", + "parameters": [ + { + "uniqueItems": true, + "type": "boolean", + "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
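Of the three content types this patch operation consumes, the lightest for small edits is a JSON merge patch. A sketch; whether a plain dict body is sent as a merge patch depends on the client version, so that is an assumption worth verifying:

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# Only the fields named in the patch change; everything else is left intact.
patch = {"metadata": {"labels": {"team": "platform"}}}  # hypothetical label

api.patch_namespaced_custom_object(
    group="management.cattle.io",
    version="v3",
    namespace="c-m-example",  # hypothetical
    plural="clusterroletemplatebindings",
    name="example-crtb",      # hypothetical
    body=patch,
)
```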
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.", + "name": "allowWatchBookmarks", + "in": "query" }, - "kind": { - "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", - "type": "string" + { + "uniqueItems": true, + "type": "string", + "description": "The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", + "name": "continue", + "in": "query" }, - "metadata": { - "description": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata", - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta" + { + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.", + "name": "fieldSelector", + "in": "query" }, - "spec": { - "description": "Spec is the specification of the desired configuration for the project.", - "type": "object", - "required": [ - "clusterName", - "displayName" - ], - "properties": { - "clusterName": { - "description": "ClusterName is the name of the cluster the project belongs to.", - "type": "string" - }, - "containerDefaultResourceLimit": { - "description": "ContainerDefaultResourceLimit is a specification for the default LimitRange for the namespace. 
See https://kubernetes.io/docs/concepts/policy/limit-range/ for more details.", - "type": "object", - "properties": { - "limitsCpu": { - "description": "LimitsCPU is the CPU limits across all pods in a non-terminal state.", - "type": "string" - }, - "limitsMemory": { - "description": "LimitsMemory is the memory limits across all pods in a non-terminal state.", - "type": "string" - }, - "requestsCpu": { - "description": "RequestsCPU is the CPU requests limit across all pods in a non-terminal state.", - "type": "string" - }, - "requestsMemory": { - "description": "RequestsMemory is the memory requests limit across all pods in a non-terminal state.", - "type": "string" - } - } - }, - "description": { - "description": "Description is a human-readable description of the project.", - "type": "string" - }, - "displayName": { - "description": "DisplayName is the human-readable name for the project.", - "type": "string" - }, - "enableProjectMonitoring": { - "description": "EnableProjectMonitoring indicates whether Monitoring V1 should be enabled for this project. Deprecated. Use the Monitoring V2 app instead. Defaults to false.", - "type": "boolean" - }, - "namespaceDefaultResourceQuota": { - "description": "NamespaceDefaultResourceQuota is a specification of the default ResourceQuota that a namespace will receive if none is provided. Must provide ResourceQuota if NamespaceDefaultResourceQuota is specified. See https://kubernetes.io/docs/concepts/policy/resource-quotas/ for more details.", - "type": "object", - "properties": { - "limit": { - "description": "Limit is the default quota limits applied to new namespaces.", - "type": "object", - "properties": { - "configMaps": { - "description": "ConfigMaps is the total number of ReplicationControllers that can exist in the namespace.", - "type": "string" - }, - "limitsCpu": { - "description": "LimitsCPU is the CPU limits across all pods in a non-terminal state.", - "type": "string" - }, - "limitsMemory": { - "description": "LimitsMemory is the memory limits across all pods in a non-terminal state.", - "type": "string" - }, - "persistentVolumeClaims": { - "description": "PersistentVolumeClaims is the total number of PersistentVolumeClaims that can exist in the namespace.", - "type": "string" - }, - "pods": { - "description": "Pods is the total number of Pods in a non-terminal state that can exist in the namespace. 
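The Project spec fields in this definition (display name, default container limits, project-wide and per-namespace quotas) combine into a manifest like the sketch below. All names are placeholders, and the convention that Project objects live in a namespace named after the cluster is an assumption:

```python
project = {
    "apiVersion": "management.cattle.io/v3",
    "kind": "Project",
    "metadata": {
        "generateName": "p-",        # let the server pick a name (placeholder prefix)
        "namespace": "c-m-example",  # assumed: the cluster's namespace on the management cluster
    },
    "spec": {
        "clusterName": "c-m-example",
        "displayName": "example-project",
        "description": "Project with a shared quota",
        # Per-container defaults, applied as a LimitRange in member namespaces.
        "containerDefaultResourceLimit": {
            "requestsCpu": "100m",
            "requestsMemory": "128Mi",
            "limitsCpu": "500m",
            "limitsMemory": "512Mi",
        },
        # resourceQuota is the total shared by all namespaces in the project;
        # per the definition, namespaceDefaultResourceQuota must accompany it.
        "resourceQuota": {"limit": {"pods": "100", "requestsCpu": "4"}},
        "namespaceDefaultResourceQuota": {"limit": {"pods": "20", "requestsCpu": "1"}},
    },
}
```

Creating it is the same `create_namespaced_custom_object` call shown earlier, with `plural="projects"`.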
A pod is in a terminal state if .status.phase in (Failed, Succeeded) is true.", - "type": "string" - }, - "replicationControllers": { - "description": "ReplicationControllers is total number of ReplicationControllers that can exist in the namespace.", - "type": "string" - }, - "requestsCpu": { - "description": "RequestsCPU is the CPU requests limit across all pods in a non-terminal state.", - "type": "string" - }, - "requestsMemory": { - "description": "RequestsMemory is the memory requests limit across all pods in a non-terminal state.", - "type": "string" - }, - "requestsStorage": { - "description": "RequestsStorage is the storage requests limit across all persistent volume claims.", - "type": "string" - }, - "secrets": { - "description": "Secrets is the total number of ReplicationControllers that can exist in the namespace.", - "type": "string" - }, - "services": { - "description": "Services is the total number of Services that can exist in the namespace.", - "type": "string" - }, - "servicesLoadBalancers": { - "description": "ServicesLoadBalancers is the total number of Services of type LoadBalancer that can exist in the namespace.", - "type": "string" - }, - "servicesNodePorts": { - "description": "ServiceNodePorts is the total number of Services of type NodePort that can exist in the namespace.", - "type": "string" - } - } - } - } - }, - "resourceQuota": { - "description": "ResourceQuota is a specification for the total amount of quota for standard resources that will be shared by all namespaces in the project. Must provide NamespaceDefaultResourceQuota if ResourceQuota is specified. See https://kubernetes.io/docs/concepts/policy/resource-quotas/ for more details.", - "type": "object", - "properties": { - "limit": { - "description": "Limit is the total allowable quota limits shared by all namespaces in the project.", - "type": "object", - "properties": { - "configMaps": { - "description": "ConfigMaps is the total number of ReplicationControllers that can exist in the namespace.", - "type": "string" - }, - "limitsCpu": { - "description": "LimitsCPU is the CPU limits across all pods in a non-terminal state.", - "type": "string" - }, - "limitsMemory": { - "description": "LimitsMemory is the memory limits across all pods in a non-terminal state.", - "type": "string" - }, - "persistentVolumeClaims": { - "description": "PersistentVolumeClaims is the total number of PersistentVolumeClaims that can exist in the namespace.", - "type": "string" - }, - "pods": { - "description": "Pods is the total number of Pods in a non-terminal state that can exist in the namespace. 
A pod is in a terminal state if .status.phase in (Failed, Succeeded) is true.", - "type": "string" - }, - "replicationControllers": { - "description": "ReplicationControllers is total number of ReplicationControllers that can exist in the namespace.", - "type": "string" - }, - "requestsCpu": { - "description": "RequestsCPU is the CPU requests limit across all pods in a non-terminal state.", - "type": "string" - }, - "requestsMemory": { - "description": "RequestsMemory is the memory requests limit across all pods in a non-terminal state.", - "type": "string" - }, - "requestsStorage": { - "description": "RequestsStorage is the storage requests limit across all persistent volume claims.", - "type": "string" - }, - "secrets": { - "description": "Secrets is the total number of ReplicationControllers that can exist in the namespace.", - "type": "string" - }, - "services": { - "description": "Services is the total number of Services that can exist in the namespace.", - "type": "string" - }, - "servicesLoadBalancers": { - "description": "ServicesLoadBalancers is the total number of Services of type LoadBalancer that can exist in the namespace.", - "type": "string" - }, - "servicesNodePorts": { - "description": "ServiceNodePorts is the total number of Services of type NodePort that can exist in the namespace.", - "type": "string" - } - } - }, - "usedLimit": { - "description": "UsedLimit is the currently allocated quota for all namespaces in the project.", - "type": "object", - "properties": { - "configMaps": { - "description": "ConfigMaps is the total number of ReplicationControllers that can exist in the namespace.", - "type": "string" - }, - "limitsCpu": { - "description": "LimitsCPU is the CPU limits across all pods in a non-terminal state.", - "type": "string" - }, - "limitsMemory": { - "description": "LimitsMemory is the memory limits across all pods in a non-terminal state.", - "type": "string" - }, - "persistentVolumeClaims": { - "description": "PersistentVolumeClaims is the total number of PersistentVolumeClaims that can exist in the namespace.", - "type": "string" - }, - "pods": { - "description": "Pods is the total number of Pods in a non-terminal state that can exist in the namespace. 
A pod is in a terminal state if .status.phase in (Failed, Succeeded) is true.", - "type": "string" - }, - "replicationControllers": { - "description": "ReplicationControllers is total number of ReplicationControllers that can exist in the namespace.", - "type": "string" - }, - "requestsCpu": { - "description": "RequestsCPU is the CPU requests limit across all pods in a non-terminal state.", - "type": "string" - }, - "requestsMemory": { - "description": "RequestsMemory is the memory requests limit across all pods in a non-terminal state.", - "type": "string" - }, - "requestsStorage": { - "description": "RequestsStorage is the storage requests limit across all persistent volume claims.", - "type": "string" - }, - "secrets": { - "description": "Secrets is the total number of ReplicationControllers that can exist in the namespace.", - "type": "string" - }, - "services": { - "description": "Services is the total number of Services that can exist in the namespace.", - "type": "string" - }, - "servicesLoadBalancers": { - "description": "ServicesLoadBalancers is the total number of Services of type LoadBalancer that can exist in the namespace.", - "type": "string" - }, - "servicesNodePorts": { - "description": "ServiceNodePorts is the total number of Services of type NodePort that can exist in the namespace.", - "type": "string" - } - } - } - } - } - } - }, - "status": { - "description": "Status is the most recently observed status of the project.", - "type": "object", - "properties": { - "conditions": { - "description": "Conditions are a set of indicators about aspects of the project.", - "type": "array", - "items": { - "description": "ProjectCondition is the status of an aspect of the project.", - "type": "object", - "required": [ - "status", - "type" - ], - "properties": { - "lastTransitionTime": { - "description": "Last time the condition transitioned from one status to another.", - "type": "string" - }, - "lastUpdateTime": { - "description": "The last time this condition was updated.", - "type": "string" - }, - "message": { - "description": "Human-readable message indicating details about last transition.", - "type": "string" - }, - "reason": { - "description": "The reason for the condition's last transition.", - "type": "string" - }, - "status": { - "description": "Status of the condition, one of True, False, Unknown.", - "type": "string" - }, - "type": { - "description": "Type of project condition.", - "type": "string" - } - } - } - }, - "monitoringStatus": { - "description": "MonitoringStatus is the status of the Monitoring V1 app.", - "type": "object", - "properties": { - "conditions": { - "type": "array", - "items": { - "type": "object", - "required": [ - "status", - "type" - ], - "properties": { - "lastTransitionTime": { - "description": "Last time the condition transitioned from one status to another.", - "type": "string" - }, - "lastUpdateTime": { - "description": "The last time this condition was updated.", - "type": "string" - }, - "message": { - "description": "Human-readable message indicating details about last transition", - "type": "string" - }, - "reason": { - "description": "The reason for the condition's last transition.", - "type": "string" - }, - "status": { - "description": "Status of the condition, one of True, False, Unknown.", - "type": "string" - }, - "type": { - "description": "Type of cluster condition.", - "type": "string" - } - } - } - }, - "grafanaEndpoint": { - "type": "string" - } - } - }, - "podSecurityPolicyTemplateId": { - "description": 
"PodSecurityPolicyTemplateName is the pod security policy template associated with the project.", - "type": "string" - } - } - } - }, - "x-kubernetes-group-version-kind": [ { - "group": "management.cattle.io", - "kind": "Project", - "version": "v3" - } - ] - }, - "io.cattle.management.v3.ProjectList": { - "description": "ProjectList is a list of Project", - "type": "object", - "required": [ - "items" - ], - "properties": { - "apiVersion": { - "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", - "type": "string" - }, - "items": { - "description": "List of projects. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md", - "type": "array", - "items": { - "$ref": "#/definitions/io.cattle.management.v3.Project" - } - }, - "kind": { - "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", - "type": "string" + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.", + "name": "labelSelector", + "in": "query" }, - "metadata": { - "description": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta" - } - }, - "x-kubernetes-group-version-kind": [ { - "group": "management.cattle.io", - "kind": "ProjectList", - "version": "v3" - } - ] - }, - "io.cattle.management.v3.ProjectRoleTemplateBinding": { - "description": "ProjectRoleTemplateBinding is the object representing membership of a subject in a project with permissions specified by a given role template.", - "type": "object", - "required": [ - "projectName", - "roleTemplateName" - ], - "properties": { - "apiVersion": { - "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", - "type": "string" + "uniqueItems": true, + "type": "integer", + "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. 
This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", + "name": "limit", + "in": "query" }, - "groupName": { - "description": "GroupName is the name of the group subject added to the project. Immutable.", - "type": "string" + { + "uniqueItems": true, + "type": "string", + "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersion", + "in": "query" }, - "groupPrincipalName": { - "description": "GroupPrincipalName is the name of the group principal subject added to the project. Immutable.", - "type": "string" + { + "uniqueItems": true, + "type": "string", + "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersionMatch", + "in": "query" }, - "kind": { - "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", - "type": "string" + { + "uniqueItems": true, + "type": "boolean", + "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. 
The semantics of the watch request are as follows: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is sent when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is sent when the state is synced at least to the moment\n when the request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.", + "name": "sendInitialEvents", + "in": "query" }, - "metadata": { - "description": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata", - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta" + { + "uniqueItems": true, + "type": "integer", + "description": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.", + "name": "timeoutSeconds", + "in": "query" }, - "projectName": { - "description": "ProjectName is the name of the project to which a subject is added. Immutable.", - "type": "string" + { + "uniqueItems": true, + "type": "boolean", + "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.", + "name": "watch", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.ProjectRoleTemplateBindingList" + } }, - "roleTemplateName": { - "description": "RoleTemplateName is the name of the role template that defines permissions to perform actions on resources in the project. Immutable.", - "type": "string" + "401": { + "description": "Unauthorized" + } + }, + "x-kubernetes-action": "list", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "ProjectRoleTemplateBinding", + "version": "v3" + } + }, + "post": { + "description": "create a ProjectRoleTemplateBinding", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "createManagementCattleIoV3NamespacedProjectRoleTemplateBinding", + "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.ProjectRoleTemplateBinding" + } }, - "serviceAccount": { - "description": "ServiceAccount is the name of the service account bound as a subject. Immutable. Deprecated.", - "type": "string" + { + "uniqueItems": true, + "type": "string", + "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", + "name": "dryRun", + "in": "query" }, - "userName": { - "description": "UserName is the name of the user subject added to the project. Immutable.", - "type": "string" + { + "uniqueItems": true, + "type": "string", + "description": "fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.", + "name": "fieldManager", + "in": "query" }, - "userPrincipalName": { - "description": "UserPrincipalName is the name of the user principal subject added to the project. Immutable.", - "type": "string" - } - }, - "x-kubernetes-group-version-kind": [ { - "group": "management.cattle.io", - "kind": "ProjectRoleTemplateBinding", - "version": "v3" + "uniqueItems": true, + "type": "string", + "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", + "name": "fieldValidation", + "in": "query" } - ] - }, - "io.cattle.management.v3.ProjectRoleTemplateBindingList": { - "description": "ProjectRoleTemplateBindingList is a list of ProjectRoleTemplateBinding", - "type": "object", - "required": [ - "items" - ], - "properties": { - "apiVersion": { - "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", - "type": "string" + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.ProjectRoleTemplateBinding" + } }, - "items": { - "description": "List of projectroletemplatebindings. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md", - "type": "array", - "items": { + "201": { + "description": "Created", + "schema": { "$ref": "#/definitions/io.cattle.management.v3.ProjectRoleTemplateBinding" } }, - "kind": { - "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", - "type": "string" + "202": { + "description": "Accepted", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.ProjectRoleTemplateBinding" + } }, - "metadata": { - "description": "Standard list metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta" + "401": { + "description": "Unauthorized" } }, - "x-kubernetes-group-version-kind": [ - { - "group": "management.cattle.io", - "kind": "ProjectRoleTemplateBindingList", - "version": "v3" - } - ] + "x-kubernetes-action": "post", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "ProjectRoleTemplateBinding", + "version": "v3" + } }, - "io.cattle.management.v3.RoleTemplate": { - "description": "RoleTemplate holds configuration for a template that is used to create kubernetes Roles and ClusterRoles (in the rbac.authorization.k8s.io group) for a cluster or project.", - "type": "object", - "properties": { - "administrative": { - "description": "Administrative if false, and context is set to cluster this RoleTemplate will not grant access to \"CatalogTemplates\" and \"CatalogTemplateVersions\" for any project in the cluster. Default is false.", - "type": "boolean" - }, - "apiVersion": { - "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", - "type": "string" - }, - "builtin": { - "description": "Builtin if true specifies that this RoleTemplate was created by Rancher and is immutable. Default to false.", - "type": "boolean" - }, - "clusterCreatorDefault": { - "description": "ClusterCreatorDefault if true, a binding with this RoleTemplate will be created for a users when they create a new cluster. ClusterCreatorDefault is only evaluated if the context of the RoleTemplate is set to cluster. Default to false.", - "type": "boolean" + "delete": { + "description": "delete collection of ProjectRoleTemplateBinding", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "deleteManagementCattleIoV3CollectionNamespacedProjectRoleTemplateBinding", + "parameters": [ + { + "uniqueItems": true, + "type": "boolean", + "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.", + "name": "allowWatchBookmarks", + "in": "query" }, - "context": { - "description": "Context describes if the roleTemplate applies to clusters or projects. Valid values are \"project\", \"cluster\" or \"\".", + { + "uniqueItems": true, "type": "string", - "enum": [ - "project", - "cluster", - "" - ] - }, - "description": { - "description": "Description holds text that describes the resource.", - "type": "string" - }, - "displayName": { - "description": "DisplayName is the human-readable name displayed in the UI for this resource.", - "type": "string" + "description": "The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", + "name": "continue", + "in": "query" }, - "external": { - "description": "External if true specifies that rules for this RoleTemplate should be gathered from a ClusterRole with the matching name. If set to true the Rules on the template will not be evaluated. External's value is only evaluated if the RoleTemplate's context is set to \"cluster\" Default to false.", - "type": "boolean" + { + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.", + "name": "fieldSelector", + "in": "query" }, - "hidden": { - "description": "Hidden if true informs the Rancher UI not to display this RoleTemplate. Default to false.", - "type": "boolean" + { + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.", + "name": "labelSelector", + "in": "query" }, - "kind": { - "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", - "type": "string" + { + "uniqueItems": true, + "type": "integer", + "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", + "name": "limit", + "in": "query" }, - "locked": { - "description": "Locked if true, new bindings will not be able to use this RoleTemplate. Default to false.", - "type": "boolean" + { + "uniqueItems": true, + "type": "string", + "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersion", + "in": "query" }, - "metadata": { - "description": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata", - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta" + { + "uniqueItems": true, + "type": "string", + "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersionMatch", + "in": "query" }, - "projectCreatorDefault": { - "description": "ProjectCreatorDefault if true, a binding with this RoleTemplate will be created for a user when they create a new project. ProjectCreatorDefault is only evaluated if the context of the RoleTemplate is set to project. Default to false.", - "type": "boolean" + { + "uniqueItems": true, + "type": "boolean", + "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantics of the watch request are as follows: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is sent when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is sent when the state is synced at least to the moment\n when the request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.", + "name": "sendInitialEvents", + "in": "query" }, - "roleTemplateNames": { - "description": "RoleTemplateNames list of RoleTemplate names that this RoleTemplate will inherit. This RoleTemplate will grant all rules defined in an inherited RoleTemplate. 
Inherited RoleTemplates must already exist.", - "type": "array", - "items": { - "type": "string" - } + { + "uniqueItems": true, + "type": "integer", + "description": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.", + "name": "timeoutSeconds", + "in": "query" }, - "rules": { - "description": "Rules hold all the PolicyRules for this RoleTemplate.", - "type": "array", - "items": { - "description": "PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to.", - "type": "object", - "required": [ - "verbs" - ], - "properties": { - "apiGroups": { - "description": "APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. \"\" represents the core API group and \"*\" represents all API groups.", - "type": "array", - "items": { - "type": "string" - } - }, - "nonResourceURLs": { - "description": "NonResourceURLs is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path Since non-resource URLs are not namespaced, this field is only applicable for ClusterRoles referenced from a ClusterRoleBinding. Rules can either apply to API resources (such as \"pods\" or \"secrets\") or non-resource URL paths (such as \"/api\"), but not both.", - "type": "array", - "items": { - "type": "string" - } - }, - "resourceNames": { - "description": "ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed.", - "type": "array", - "items": { - "type": "string" - } - }, - "resources": { - "description": "Resources is a list of resources this rule applies to. '*' represents all resources.", - "type": "array", - "items": { - "type": "string" - } - }, - "verbs": { - "description": "Verbs is a list of Verbs that apply to ALL the ResourceKinds contained in this rule. '*' represents all verbs.", - "type": "array", - "items": { - "type": "string" - } - } - } - } - } - }, - "x-kubernetes-group-version-kind": [ { - "group": "management.cattle.io", - "kind": "RoleTemplate", - "version": "v3" + "uniqueItems": true, + "type": "boolean", + "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.", + "name": "watch", + "in": "query" } - ] - }, - "io.cattle.management.v3.RoleTemplateList": { - "description": "RoleTemplateList is a list of RoleTemplate", - "type": "object", - "required": [ - "items" - ], - "properties": { - "apiVersion": { - "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", - "type": "string" - }, - "items": { - "description": "List of roletemplates. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md", - "type": "array", - "items": { - "$ref": "#/definitions/io.cattle.management.v3.RoleTemplate" + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" } }, - "kind": { - "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", - "type": "string" - }, - "metadata": { - "description": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta" + "401": { + "description": "Unauthorized" } }, - "x-kubernetes-group-version-kind": [ + "x-kubernetes-action": "deletecollection", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "ProjectRoleTemplateBinding", + "version": "v3" + } + }, + "parameters": [ + { + "uniqueItems": true, + "type": "string", + "description": "object name and auth scope, such as for teams and projects", + "name": "namespace", + "in": "path", + "required": true + }, + { + "uniqueItems": true, + "type": "string", + "description": "If 'true', then the output is pretty printed.", + "name": "pretty", + "in": "query" + } + ] + }, + "/apis/management.cattle.io/v3/namespaces/{namespace}/projectroletemplatebindings/{name}": { + "get": { + "description": "read the specified ProjectRoleTemplateBinding", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "readManagementCattleIoV3NamespacedProjectRoleTemplateBinding", + "parameters": [ { - "group": "management.cattle.io", - "kind": "RoleTemplateList", - "version": "v3" + "uniqueItems": true, + "type": "string", + "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersion", + "in": "query" } - ] - }, - "io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions": { - "description": "DeleteOptions may be provided when deleting an API object.", - "type": "object", - "properties": { - "apiVersion": { - "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", - "type": "string" - }, - "dryRun": { - "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed", - "type": "array", - "items": { - "type": "string" + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.ProjectRoleTemplateBinding" } }, - "gracePeriodSeconds": { - "description": "The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.", - "type": "integer", - "format": "int64" - }, - "kind": { - "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", - "type": "string" - }, - "orphanDependents": { - "description": "Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.", - "type": "boolean" - }, - "preconditions": { - "description": "Must be fulfilled before a deletion is carried out. If not possible, a 409 Conflict status will be returned.", - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Preconditions" - }, - "propagationPolicy": { - "description": "Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.", - "type": "string" + "401": { + "description": "Unauthorized" } }, - "x-kubernetes-group-version-kind": [ + "x-kubernetes-action": "get", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "ProjectRoleTemplateBinding", + "version": "v3" + } + }, + "put": { + "description": "replace the specified ProjectRoleTemplateBinding", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "replaceManagementCattleIoV3NamespacedProjectRoleTemplateBinding", + "parameters": [ { - "group": "", - "kind": "DeleteOptions", - "version": "v1" + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.ProjectRoleTemplateBinding" + } }, { - "group": "admission.k8s.io", - "kind": "DeleteOptions", - "version": "v1" + "uniqueItems": true, + "type": "string", + "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed", + "name": "dryRun", + "in": "query" }, { - "group": "admission.k8s.io", - "kind": "DeleteOptions", - "version": "v1" + "uniqueItems": true, + "type": "string", + "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.", + "name": "fieldManager", + "in": "query" }, { - "group": "admission.k8s.io", - "kind": "DeleteOptions", - "version": "v1beta1" + "uniqueItems": true, + "type": "string", + "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", + "name": "fieldValidation", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.ProjectRoleTemplateBinding" + } }, - { - "group": "admissionregistration.k8s.io", - "kind": "DeleteOptions", - "version": "v1" + "201": { + "description": "Created", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.ProjectRoleTemplateBinding" + } }, + "401": { + "description": "Unauthorized" + } + }, + "x-kubernetes-action": "put", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "ProjectRoleTemplateBinding", + "version": "v3" + } + }, - { - "group": "admissionregistration.k8s.io", - "kind": "DeleteOptions", - "version": "v1alpha1" + "delete": { + "description": "delete a ProjectRoleTemplateBinding", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "deleteManagementCattleIoV3NamespacedProjectRoleTemplateBinding", + "parameters": [ { - "group": "admissionregistration.k8s.io", - "kind": "DeleteOptions", - "version": "v1beta1" + "name": "body", + "in": "body", + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions" + } }, { - "group": "apiextensions.k8s.io", - "kind": "DeleteOptions", - "version": "v1" + "uniqueItems": true, + "type": "string", + "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", + "name": "dryRun", + "in": "query" }, { - "group": "apiextensions.k8s.io", - "kind": "DeleteOptions", - "version": "v1beta1" + "uniqueItems": true, + "type": "integer", + "description": "The duration in seconds before the object should be deleted. 
Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.", + "name": "gracePeriodSeconds", + "in": "query" }, { - "group": "apiregistration.k8s.io", - "kind": "DeleteOptions", - "version": "v1" + "uniqueItems": true, + "type": "boolean", + "description": "Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.", + "name": "orphanDependents", + "in": "query" }, { - "group": "apiregistration.k8s.io", - "kind": "DeleteOptions", - "version": "v1beta1" + "uniqueItems": true, + "type": "string", + "description": "Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.", + "name": "propagationPolicy", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" + } }, - { - "group": "apps", - "kind": "DeleteOptions", - "version": "v1" + "202": { + "description": "Accepted", + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" + } }, + "401": { + "description": "Unauthorized" + } + }, + "x-kubernetes-action": "delete", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "ProjectRoleTemplateBinding", + "version": "v3" + } + }, + "patch": { + "description": "partially update the specified ProjectRoleTemplateBinding", + "consumes": [ + "application/json-patch+json", + "application/merge-patch+json", + "application/apply-patch+yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "patchManagementCattleIoV3NamespacedProjectRoleTemplateBinding", + "parameters": [ { - "group": "apps", - "kind": "DeleteOptions", - "version": "v1beta1" + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Patch" + } }, { - "group": "apps", - "kind": "DeleteOptions", - "version": "v1beta2" + "uniqueItems": true, + "type": "string", + "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", + "name": "dryRun", + "in": "query" }, { - "group": "authentication.k8s.io", - "kind": "DeleteOptions", - "version": "v1" + "uniqueItems": true, + "type": "string", + "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. 
This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch).", + "name": "fieldManager", + "in": "query" }, { - "group": "authentication.k8s.io", - "kind": "DeleteOptions", - "version": "v1alpha1" + "uniqueItems": true, + "type": "string", + "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", + "name": "fieldValidation", + "in": "query" }, { - "group": "authentication.k8s.io", - "kind": "DeleteOptions", - "version": "v1beta1" + "uniqueItems": true, + "type": "boolean", + "description": "Force is going to \"force\" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests.", + "name": "force", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.ProjectRoleTemplateBinding" + } }, + "401": { + "description": "Unauthorized" + } + }, + "x-kubernetes-action": "patch", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "ProjectRoleTemplateBinding", + "version": "v3" + } + }, + "parameters": [ + { + "uniqueItems": true, + "type": "string", + "description": "name of the ProjectRoleTemplateBinding", + "name": "name", + "in": "path", + "required": true + }, + { + "uniqueItems": true, + "type": "string", + "description": "object name and auth scope, such as for teams and projects", + "name": "namespace", + "in": "path", + "required": true + }, + { + "uniqueItems": true, + "type": "string", + "description": "If 'true', then the output is pretty printed.", + "name": "pretty", + "in": "query" + } + ] + }, + "/apis/management.cattle.io/v3/namespaces/{namespace}/projects": { + "get": { + "description": "list objects of kind Project", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "listManagementCattleIoV3NamespacedProject", + "parameters": [ { + "uniqueItems": true, + "type": "boolean", + "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. 
If this is not a watch, this field is ignored.", + "name": "allowWatchBookmarks", + "in": "query" }, { - "group": "authorization.k8s.io", - "kind": "DeleteOptions", - "version": "v1beta1" + "uniqueItems": true, + "type": "string", + "description": "The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", + "name": "continue", + "in": "query" }, { - "group": "autoscaling", - "kind": "DeleteOptions", - "version": "v1" + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.", + "name": "fieldSelector", + "in": "query" }, { - "group": "autoscaling", - "kind": "DeleteOptions", - "version": "v2" + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.", + "name": "labelSelector", + "in": "query" }, { - "group": "autoscaling", - "kind": "DeleteOptions", - "version": "v2beta1" + "uniqueItems": true, + "type": "integer", + "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", + "name": "limit", + "in": "query" }, { + "uniqueItems": true, + "type": "string", + "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersion", + "in": "query" }, { + "uniqueItems": true, + "type": "string", + "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersionMatch", + "in": "query" }, { + "uniqueItems": true, + "type": "boolean", + "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantics of the watch request are as follows: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is sent when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is sent when the state is synced at least to the moment\n when the request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.", + "name": "sendInitialEvents", + "in": "query" }, { + "uniqueItems": true, + "type": "integer", + "description": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.", + "name": "timeoutSeconds", + "in": "query" }, { + "uniqueItems": true, + "type": "boolean", + "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. 
Specify resourceVersion.", + "name": "watch", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.ProjectList" + } }, + "401": { + "description": "Unauthorized" + } + }, + "x-kubernetes-action": "list", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "Project", + "version": "v3" + } + }, + "post": { + "description": "create a Project", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "createManagementCattleIoV3NamespacedProject", + "parameters": [ { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.Project" + } }, { + "uniqueItems": true, + "type": "string", + "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", + "name": "dryRun", + "in": "query" }, { + "uniqueItems": true, + "type": "string", + "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.", + "name": "fieldManager", + "in": "query" }, { + "uniqueItems": true, + "type": "string", + "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered.", + "name": "fieldValidation", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.Project" + } }, - { - "group": "discovery.k8s.io", - "kind": "DeleteOptions", - "version": "v1beta1" + "201": { + "description": "Created", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.Project" + } }, - { - "group": "events.k8s.io", - "kind": "DeleteOptions", - "version": "v1" + "202": { + "description": "Accepted", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.Project" + } }, + "401": { + "description": "Unauthorized" + } + }, + "x-kubernetes-action": "post", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "Project", + "version": "v3" + } + }, + "delete": { + "description": "delete collection of Project", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "deleteManagementCattleIoV3CollectionNamespacedProject", + "parameters": [ { - "group": "events.k8s.io", - "kind": "DeleteOptions", - "version": "v1beta1" + "uniqueItems": true, + "type": "boolean", + "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.", + "name": "allowWatchBookmarks", + "in": "query" }, { - "group": "extensions", - "kind": "DeleteOptions", - "version": "v1beta1" + "uniqueItems": true, + "type": "string", + "description": "The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", + "name": "continue", + "in": "query" }, { - "group": "flowcontrol.apiserver.k8s.io", - "kind": "DeleteOptions", - "version": "v1alpha1" + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their fields. 
Defaults to everything.", + "name": "fieldSelector", + "in": "query" }, { - "group": "flowcontrol.apiserver.k8s.io", - "kind": "DeleteOptions", - "version": "v1beta1" + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.", + "name": "labelSelector", + "in": "query" }, { - "group": "flowcontrol.apiserver.k8s.io", - "kind": "DeleteOptions", - "version": "v1beta2" + "uniqueItems": true, + "type": "integer", + "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", + "name": "limit", + "in": "query" }, { - "group": "flowcontrol.apiserver.k8s.io", - "kind": "DeleteOptions", - "version": "v1beta3" + "uniqueItems": true, + "type": "string", + "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersion", + "in": "query" }, { - "group": "imagepolicy.k8s.io", - "kind": "DeleteOptions", - "version": "v1alpha1" + "uniqueItems": true, + "type": "string", + "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersionMatch", + "in": "query" }, { - "group": "internal.apiserver.k8s.io", - "kind": "DeleteOptions", - "version": "v1alpha1" + "uniqueItems": true, + "type": "boolean", + "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is send when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is send when the state is synced at least to the moment\n when request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.", + "name": "sendInitialEvents", + "in": "query" }, { - "group": "networking.k8s.io", - "kind": "DeleteOptions", - "version": "v1" + "uniqueItems": true, + "type": "integer", + "description": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.", + "name": "timeoutSeconds", + "in": "query" }, { - "group": "networking.k8s.io", - "kind": "DeleteOptions", - "version": "v1alpha1" + "uniqueItems": true, + "type": "boolean", + "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.", + "name": "watch", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" + } }, + "401": { + "description": "Unauthorized" + } + }, + "x-kubernetes-action": "deletecollection", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "Project", + "version": "v3" + } + }, + "parameters": [ + { + "uniqueItems": true, + "type": "string", + "description": "object name and auth scope, such as for teams and projects", + "name": "namespace", + "in": "path", + "required": true + }, + { + "uniqueItems": true, + "type": "string", + "description": "If 'true', then the output is pretty printed.", + "name": "pretty", + "in": "query" + } + ] + }, + "/apis/management.cattle.io/v3/namespaces/{namespace}/projects/{name}": { + "get": { + "description": "read the specified Project", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "readManagementCattleIoV3NamespacedProject", + "parameters": [ { - "group": "networking.k8s.io", - "kind": "DeleteOptions", - "version": "v1beta1" + "uniqueItems": true, + "type": "string", + "description": "resourceVersion sets a constraint on what resource versions a request may be served from. 
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset",
+        "name": "resourceVersion",
+        "in": "query"
       }
     ],
     "responses": {
       "200": {
         "description": "OK",
         "schema": {
           "$ref": "#/definitions/io.cattle.management.v3.Project"
         }
       },
       "401": {
         "description": "Unauthorized"
       }
     },
     "x-kubernetes-action": "get",
     "x-kubernetes-group-version-kind": {
       "group": "management.cattle.io",
       "kind": "Project",
       "version": "v3"
     }
   },
   "put": {
     "description": "replace the specified Project",
     "consumes": [
       "application/json",
       "application/yaml"
     ],
     "produces": [
       "application/json",
       "application/yaml"
     ],
     "schemes": [
       "https"
     ],
     "tags": [
       "managementCattleIo_v3"
     ],
     "operationId": "replaceManagementCattleIoV3NamespacedProject",
     "parameters": [
       {
-        "group": "node.k8s.io",
-        "kind": "DeleteOptions",
-        "version": "v1"
+        "name": "body",
+        "in": "body",
+        "required": true,
+        "schema": {
+          "$ref": "#/definitions/io.cattle.management.v3.Project"
+        }
       },
       {
-        "group": "node.k8s.io",
-        "kind": "DeleteOptions",
-        "version": "v1alpha1"
+        "uniqueItems": true,
+        "type": "string",
+        "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed",
+        "name": "dryRun",
+        "in": "query"
       },
       {
-        "group": "node.k8s.io",
-        "kind": "DeleteOptions",
-        "version": "v1beta1"
+        "uniqueItems": true,
+        "type": "string",
+        "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.",
+        "name": "fieldManager",
+        "in": "query"
       },
       {
-        "group": "policy",
-        "kind": "DeleteOptions",
-        "version": "v1"
+        "uniqueItems": true,
+        "type": "string",
+        "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present.
The error returned from the server will contain all unknown and duplicate fields encountered.", + "name": "fieldValidation", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.Project" + } }, - { - "group": "policy", - "kind": "DeleteOptions", - "version": "v1beta1" + "201": { + "description": "Created", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.Project" + } }, + "401": { + "description": "Unauthorized" + } + }, + "x-kubernetes-action": "put", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "Project", + "version": "v3" + } + }, + "delete": { + "description": "delete a Project", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "deleteManagementCattleIoV3NamespacedProject", + "parameters": [ { - "group": "rbac.authorization.k8s.io", - "kind": "DeleteOptions", - "version": "v1" + "name": "body", + "in": "body", + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions" + } }, { - "group": "rbac.authorization.k8s.io", - "kind": "DeleteOptions", - "version": "v1alpha1" + "uniqueItems": true, + "type": "string", + "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", + "name": "dryRun", + "in": "query" }, { - "group": "rbac.authorization.k8s.io", - "kind": "DeleteOptions", - "version": "v1beta1" + "uniqueItems": true, + "type": "integer", + "description": "The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.", + "name": "gracePeriodSeconds", + "in": "query" }, { - "group": "resource.k8s.io", - "kind": "DeleteOptions", - "version": "v1alpha2" + "uniqueItems": true, + "type": "boolean", + "description": "Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.", + "name": "orphanDependents", + "in": "query" }, { - "group": "scheduling.k8s.io", - "kind": "DeleteOptions", - "version": "v1" + "uniqueItems": true, + "type": "string", + "description": "Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.",
+        "name": "propagationPolicy",
+        "in": "query"
       }
     ],
     "responses": {
       "200": {
         "description": "OK",
         "schema": {
           "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status"
         }
       },
       "202": {
         "description": "Accepted",
         "schema": {
           "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status"
         }
       },
       "401": {
         "description": "Unauthorized"
       }
     },
     "x-kubernetes-action": "delete",
     "x-kubernetes-group-version-kind": {
       "group": "management.cattle.io",
       "kind": "Project",
       "version": "v3"
     }
   },
   "patch": {
     "description": "partially update the specified Project",
     "consumes": [
       "application/json-patch+json",
       "application/merge-patch+json",
       "application/apply-patch+yaml"
     ],
     "produces": [
       "application/json",
       "application/yaml"
     ],
     "schemes": [
       "https"
     ],
     "tags": [
       "managementCattleIo_v3"
     ],
     "operationId": "patchManagementCattleIoV3NamespacedProject",
     "parameters": [
       {
-        "group": "scheduling.k8s.io",
-        "kind": "DeleteOptions",
-        "version": "v1alpha1"
+        "name": "body",
+        "in": "body",
+        "required": true,
+        "schema": {
+          "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Patch"
+        }
       },
       {
-        "group": "scheduling.k8s.io",
-        "kind": "DeleteOptions",
-        "version": "v1beta1"
+        "uniqueItems": true,
+        "type": "string",
+        "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed",
+        "name": "dryRun",
+        "in": "query"
       },
       {
-        "group": "storage.k8s.io",
-        "kind": "DeleteOptions",
-        "version": "v1"
+        "uniqueItems": true,
+        "type": "string",
+        "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch).",
+        "name": "fieldManager",
+        "in": "query"
       },
       {
-        "group": "storage.k8s.io",
-        "kind": "DeleteOptions",
-        "version": "v1alpha1"
+        "uniqueItems": true,
+        "type": "string",
+        "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present.
The error returned from the server will contain all unknown and duplicate fields encountered.", + "name": "fieldValidation", + "in": "query" }, { - "group": "storage.k8s.io", - "kind": "DeleteOptions", - "version": "v1beta1" + "uniqueItems": true, + "type": "boolean", + "description": "Force is going to \"force\" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests.", + "name": "force", + "in": "query" } - ] - }, - "io.k8s.apimachinery.pkg.apis.meta.v1.FieldsV1": { - "description": "FieldsV1 stores a set of fields in a data structure like a Trie, in JSON format.\n\nEach key is either a '.' representing the field itself, and will always map to an empty set, or a string representing a sub-field or item. The string will follow one of these four formats: 'f:\u003cname\u003e', where \u003cname\u003e is the name of a field in a struct, or key in a map 'v:\u003cvalue\u003e', where \u003cvalue\u003e is the exact json formatted value of a list item 'i:\u003cindex\u003e', where \u003cindex\u003e is position of a item in a list 'k:\u003ckeys\u003e', where \u003ckeys\u003e is a map of a list item's key fields to their unique values If a key maps to an empty Fields value, the field that key represents is part of the set.\n\nThe exact format is defined in sigs.k8s.io/structured-merge-diff", - "type": "object" - }, - "io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta": { - "description": "ListMeta describes metadata that synthetic resources must have, including lists and various status objects. A resource may have only one of {ObjectMeta, ListMeta}.", - "type": "object", - "properties": { - "continue": { - "description": "continue may be set if the user set a limit on the number of items returned, and indicates that the server has more data available. The value is opaque and may be used to issue another request to the endpoint that served this list to retrieve the next set of available objects. Continuing a consistent list may not be possible if the server configuration has changed or more than a few minutes have passed. The resourceVersion field returned when using this continue value will be identical to the value in the first response, unless you have received this token from an error message.", - "type": "string" - }, - "remainingItemCount": { - "description": "remainingItemCount is the number of subsequent items in the list which are not included in this list response. If the list request contained label or field selectors, then the number of remaining items is unknown and the field will be left unset and omitted during serialization. If the list is complete (either because it is not chunking or because this is the last chunk), then there are no more remaining items and this field will be left unset and omitted during serialization. Servers older than v1.15 do not set this field. The intended use of the remainingItemCount is *estimating* the size of a collection. Clients should not rely on the remainingItemCount to be set or to be exact.", - "type": "integer", - "format": "int64" - }, - "resourceVersion": { - "description": "String that identifies the server's internal version of this object that can be used by clients to determine when objects have changed. Value must be treated as opaque by clients and passed unmodified back to the server. Populated by the system. Read-only. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency", - "type": "string" + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.Project" + } }, - "selfLink": { - "description": "Deprecated: selfLink is a legacy read-only field that is no longer populated by the system.", - "type": "string" + "401": { + "description": "Unauthorized" } + }, + "x-kubernetes-action": "patch", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "Project", + "version": "v3" } }, - "io.k8s.apimachinery.pkg.apis.meta.v1.ManagedFieldsEntry": { - "description": "ManagedFieldsEntry is a workflow-id, a FieldSet and the group version of the resource that the fieldset applies to.", - "type": "object", - "properties": { - "apiVersion": { - "description": "APIVersion defines the version of this resource that this field set applies to. The format is \"group/version\" just like the top-level APIVersion field. It is necessary to track the version of a field set because it cannot be automatically converted.", - "type": "string" + "parameters": [ + { + "uniqueItems": true, + "type": "string", + "description": "name of the Project", + "name": "name", + "in": "path", + "required": true + }, + { + "uniqueItems": true, + "type": "string", + "description": "object name and auth scope, such as for teams and projects", + "name": "namespace", + "in": "path", + "required": true + }, + { + "uniqueItems": true, + "type": "string", + "description": "If 'true', then the output is pretty printed.", + "name": "pretty", + "in": "query" + } + ] + }, + "/apis/management.cattle.io/v3/projectroletemplatebindings": { + "get": { + "description": "list objects of kind ProjectRoleTemplateBinding", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "listManagementCattleIoV3ProjectRoleTemplateBindingForAllNamespaces", + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.ProjectRoleTemplateBindingList" + } }, - "fieldsType": { - "description": "FieldsType is the discriminator for the different fields format and version. There is currently only one possible value: \"FieldsV1\"", - "type": "string" + "401": { + "description": "Unauthorized" + } + }, + "x-kubernetes-action": "list", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "ProjectRoleTemplateBinding", + "version": "v3" + } + }, + "parameters": [ + { + "uniqueItems": true, + "type": "boolean", + "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.", + "name": "allowWatchBookmarks", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", + "name": "continue", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.", + "name": "fieldSelector", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.", + "name": "labelSelector", + "in": "query" + }, + { + "uniqueItems": true, + "type": "integer", + "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", + "name": "limit", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "If 'true', then the output is pretty printed.", + "name": "pretty", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "resourceVersion sets a constraint on what resource versions a request may be served from. 
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset",
        "name": "resourceVersion",
        "in": "query"
      },
      {
        "uniqueItems": true,
        "type": "string",
        "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset",
        "name": "resourceVersionMatch",
        "in": "query"
      },
      {
        "uniqueItems": true,
        "type": "boolean",
        "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as follows: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is sent when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is sent when the state is synced at least to the moment\n when request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.",
        "name": "sendInitialEvents",
        "in": "query"
      },
      {
        "uniqueItems": true,
        "type": "integer",
        "description": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.",
        "name": "timeoutSeconds",
        "in": "query"
      },
      {
        "uniqueItems": true,
        "type": "boolean",
        "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications.
Specify resourceVersion.", + "name": "watch", + "in": "query" + } + ] + }, + "/apis/management.cattle.io/v3/projects": { + "get": { + "description": "list objects of kind Project", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "listManagementCattleIoV3ProjectForAllNamespaces", + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.ProjectList" + } }, - "fieldsV1": { - "description": "FieldsV1 holds the first JSON version format as described in the \"FieldsV1\" type.", - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.FieldsV1" + "401": { + "description": "Unauthorized" + } + }, + "x-kubernetes-action": "list", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "Project", + "version": "v3" + } + }, + "parameters": [ + { + "uniqueItems": true, + "type": "boolean", + "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.", + "name": "allowWatchBookmarks", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", + "name": "continue", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.", + "name": "fieldSelector", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.", + "name": "labelSelector", + "in": "query" + }, + { + "uniqueItems": true, + "type": "integer", + "description": "limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", + "name": "limit", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "If 'true', then the output is pretty printed.", + "name": "pretty", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersion", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersionMatch", + "in": "query" + }, + { + "uniqueItems": true, + "type": "boolean", + "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. 
The semantic of the watch request is as follows: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is sent when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is sent when the state is synced at least to the moment\n when request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.",
      "name": "sendInitialEvents",
      "in": "query"
    },
    {
      "uniqueItems": true,
      "type": "integer",
      "description": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.",
      "name": "timeoutSeconds",
      "in": "query"
    },
    {
      "uniqueItems": true,
      "type": "boolean",
      "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.",
      "name": "watch",
      "in": "query"
    }
  ]
},
"/apis/management.cattle.io/v3/roletemplates": {
  "get": {
    "description": "list objects of kind RoleTemplate",
    "consumes": [
      "application/json",
      "application/yaml"
    ],
    "produces": [
      "application/json",
      "application/yaml"
    ],
    "schemes": [
      "https"
    ],
    "tags": [
      "managementCattleIo_v3"
    ],
    "operationId": "listManagementCattleIoV3RoleTemplate",
    "parameters": [
      {
        "uniqueItems": true,
        "type": "boolean",
        "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.",
        "name": "allowWatchBookmarks",
        "in": "query"
      },
      {
        "uniqueItems": true,
        "type": "string",
        "description": "The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true.
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", + "name": "continue", + "in": "query" }, - "operation": { - "description": "Operation is the type of operation which lead to this ManagedFieldsEntry being created. The only valid values for this field are 'Apply' and 'Update'.", - "type": "string" + { + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.", + "name": "fieldSelector", + "in": "query" }, - "subresource": { - "description": "Subresource is the name of the subresource used to update that object, or empty string if the object was updated through the main resource. The value of this field is used to distinguish between managers, even if they share the same name. For example, a status update will be distinct from a regular update using the same manager name. Note that the APIVersion field is not related to the Subresource field and it always corresponds to the version of the main resource.", - "type": "string" + { + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.", + "name": "labelSelector", + "in": "query" + }, + { + "uniqueItems": true, + "type": "integer", + "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", + "name": "limit", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersion", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset",
        "name": "resourceVersionMatch",
        "in": "query"
      },
      {
        "uniqueItems": true,
        "type": "boolean",
        "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as follows: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is sent when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is sent when the state is synced at least to the moment\n when request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.",
        "name": "sendInitialEvents",
        "in": "query"
      },
      {
        "uniqueItems": true,
        "type": "integer",
        "description": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.",
        "name": "timeoutSeconds",
        "in": "query"
      },
      {
        "uniqueItems": true,
        "type": "boolean",
        "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.",
        "name": "watch",
        "in": "query"
      }
    ],
    "responses": {
      "200": {
        "description": "OK",
        "schema": {
          "$ref": "#/definitions/io.cattle.management.v3.RoleTemplateList"
        }
      },
      "401": {
        "description": "Unauthorized"
      }
    },
    "x-kubernetes-action": "list",
    "x-kubernetes-group-version-kind": {
      "group": "management.cattle.io",
      "kind": "RoleTemplate",
      "version": "v3"
    }
  },
  "io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta": {
    "description": "ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create.",
    "type": "object",
    "properties": {
      "annotations": {
        "description": "Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations", - "type": "object", - "additionalProperties": { - "type": "string" + "post": { + "description": "create a RoleTemplate", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "createManagementCattleIoV3RoleTemplate", + "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.RoleTemplate" } }, - "creationTimestamp": { - "description": "CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.\n\nPopulated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata", - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Time" + { + "uniqueItems": true, + "type": "string", + "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", + "name": "dryRun", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.", + "name": "fieldManager", + "in": "query" }, - "deletionGracePeriodSeconds": { - "description": "Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only.", - "type": "integer", - "format": "int64" + { + "uniqueItems": true, + "type": "string", + "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", + "name": "fieldValidation", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.RoleTemplate" + } }, - "deletionTimestamp": { - "description": "DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. 
This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field, once the finalizers list is empty. As long as the finalizers list contains items, deletion is blocked. Once the deletionTimestamp is set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested.\n\nPopulated by the system when a graceful deletion is requested. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata", - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Time" + "201": { + "description": "Created", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.RoleTemplate" + } }, - "finalizers": { - "description": "Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list.", - "type": "array", - "items": { - "type": "string" - }, - "x-kubernetes-patch-strategy": "merge" + "202": { + "description": "Accepted", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.RoleTemplate" + } }, - "generateName": { - "description": "GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server.\n\nIf this field is specified and the generated name exists, the server will return a 409.\n\nApplied only if Name is not specified. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency", - "type": "string" + "401": { + "description": "Unauthorized" + } + }, + "x-kubernetes-action": "post", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "RoleTemplate", + "version": "v3" + } + }, + "delete": { + "description": "delete collection of RoleTemplate", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "deleteManagementCattleIoV3CollectionRoleTemplate", + "parameters": [ + { + "uniqueItems": true, + "type": "boolean", + "description": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.", + "name": "allowWatchBookmarks", + "in": "query" }, - "generation": { - "description": "A sequence number representing a specific generation of the desired state. Populated by the system. Read-only.", - "type": "integer", - "format": "int64" + { + "uniqueItems": true, + "type": "string", + "description": "The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.", + "name": "continue", + "in": "query" }, - "labels": { - "description": "Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels", - "type": "object", - "additionalProperties": { - "type": "string" - } + { + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.", + "name": "fieldSelector", + "in": "query" }, - "managedFields": { - "description": "ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. 
A workflow can be the user's name, a controller's name, or the name of a specific apply path like \"ci-cd\". The set of fields is always in the version that the workflow used when modifying the object.", - "type": "array", - "items": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ManagedFieldsEntry" - } + { + "uniqueItems": true, + "type": "string", + "description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.", + "name": "labelSelector", + "in": "query" }, - "name": { - "description": "Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names", - "type": "string" + { + "uniqueItems": true, + "type": "integer", + "description": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.", + "name": "limit", + "in": "query" }, - "namespace": { - "description": "Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the \"default\" namespace, but \"default\" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty.\n\nMust be a DNS_LABEL. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces", - "type": "string" + { + "uniqueItems": true, + "type": "string", + "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset", + "name": "resourceVersion", + "in": "query" }, - "ownerReferences": { - "description": "List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. 
There cannot be more than one managing controller.",
        "type": "array",
        "items": {
          "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.OwnerReference"
        },
        "x-kubernetes-patch-merge-key": "uid",
        "x-kubernetes-patch-strategy": "merge"
      },
      "resourceVersion": {
        "description": "An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources.\n\nPopulated by the system. Read-only. Value must be treated as opaque by clients and . More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency",
        "type": "string"
      },
      "selfLink": {
        "description": "Deprecated: selfLink is a legacy read-only field that is no longer populated by the system.",
        "type": "string"
      },
      {
        "uniqueItems": true,
        "type": "boolean",
        "description": "`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as follows: - `resourceVersionMatch` = NotOlderThan\n is interpreted as \"data at least as new as the provided `resourceVersion`\"\n and the bookmark event is sent when the state is synced\n to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n bookmark event is sent when the state is synced at least to the moment\n when request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.",
        "name": "sendInitialEvents",
        "in": "query"
      },
      {
        "uniqueItems": true,
        "type": "integer",
        "description": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.",
        "name": "timeoutSeconds",
        "in": "query"
      },
      "uid": {
        "description": "UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations.\n\nPopulated by the system. Read-only.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids",
        "type": "string"
      }
    }
  },
      {
        "uniqueItems": true,
        "type": "boolean",
        "description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.",
        "name": "watch",
        "in": "query"
      }
    ],
    "responses": {
      "200": {
        "description": "OK",
        "schema": {
          "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status"
        }
      },
      "401": {
        "description": "Unauthorized"
      }
    },
    "x-kubernetes-action": "deletecollection",
    "x-kubernetes-group-version-kind": {
      "group": "management.cattle.io",
      "kind": "RoleTemplate",
      "version": "v3"
    }
  },
  "parameters": [
    {
      "uniqueItems": true,
      "type": "string",
      "description": "If 'true', then the output is pretty printed.",
      "name": "pretty",
      "in": "query"
    }
  ]
},
"/apis/management.cattle.io/v3/roletemplates/{name}": {
  "get": {
    "description": "read the specified RoleTemplate",
    "consumes": [
      "application/json",
      "application/yaml"
    ],
    "produces": [
      "application/json",
      "application/yaml"
    ],
    "schemes": [
      "https"
    ],
    "tags": [
      "managementCattleIo_v3"
    ],
    "operationId": "readManagementCattleIoV3RoleTemplate",
    "parameters": [
      {
        "uniqueItems": true,
        "type": "string",
        "description": "resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.\n\nDefaults to unset",
        "name": "resourceVersion",
        "in": "query"
      }
    ],
    "responses": {
      "200": {
        "description": "OK",
        "schema": {
          "$ref": "#/definitions/io.cattle.management.v3.RoleTemplate"
        }
      },
      "401": {
        "description": "Unauthorized"
      }
    },
    "x-kubernetes-action": "get",
    "x-kubernetes-group-version-kind": {
      "group": "management.cattle.io",
      "kind": "RoleTemplate",
      "version": "v3"
    }
  },
  "put": {
    "description": "replace the specified RoleTemplate",
    "consumes": [
      "application/json",
      "application/yaml"
    ],
    "produces": [
      "application/json",
      "application/yaml"
    ],
    "schemes": [
      "https"
    ],
    "tags": [
      "managementCattleIo_v3"
    ],
    "operationId": "replaceManagementCattleIoV3RoleTemplate",
    "parameters": [
      {
        "name": "body",
        "in": "body",
        "required": true,
        "schema": {
          "$ref": "#/definitions/io.cattle.management.v3.RoleTemplate"
        }
      },
      {
        "uniqueItems": true,
        "type": "string",
        "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed",
        "name": "dryRun",
        "in": "query"
      },
      {
        "uniqueItems": true,
        "type": "string",
        "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.",
        "name": "fieldManager",
        "in": "query"
      },
      {
        "uniqueItems": true,
        "type": "string",
        "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields.
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", + "name": "fieldValidation", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.RoleTemplate" + } + }, + "201": { + "description": "Created", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.RoleTemplate" + } + }, + "401": { + "description": "Unauthorized" + } + }, + "x-kubernetes-action": "put", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "RoleTemplate", + "version": "v3" + } + }, + "delete": { + "description": "delete a RoleTemplate", + "consumes": [ + "application/json", + "application/yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "deleteManagementCattleIoV3RoleTemplate", + "parameters": [ + { + "name": "body", + "in": "body", + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions" + } + }, + { + "uniqueItems": true, + "type": "string", + "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", + "name": "dryRun", + "in": "query" + }, + { + "uniqueItems": true, + "type": "integer", + "description": "The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.", + "name": "gracePeriodSeconds", + "in": "query" + }, + { + "uniqueItems": true, + "type": "boolean", + "description": "Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.", + "name": "orphanDependents", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.", + "name": "propagationPolicy", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" + } + }, + "202": { + "description": "Accepted", + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Status" + } + }, + "401": { + "description": "Unauthorized" + } + }, + "x-kubernetes-action": "delete", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "RoleTemplate", + "version": "v3" + } + }, + "patch": { + "description": "partially update the specified RoleTemplate", + "consumes": [ + "application/json-patch+json", + "application/merge-patch+json", + "application/apply-patch+yaml" + ], + "produces": [ + "application/json", + "application/yaml" + ], + "schemes": [ + "https" + ], + "tags": [ + "managementCattleIo_v3" + ], + "operationId": "patchManagementCattleIoV3RoleTemplate", + "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Patch" + } + }, + { + "uniqueItems": true, + "type": "string", + "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed", + "name": "dryRun", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch).", + "name": "fieldManager", + "in": "query" + }, + { + "uniqueItems": true, + "type": "string", + "description": "fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.", + "name": "fieldValidation", + "in": "query" + }, + { + "uniqueItems": true, + "type": "boolean", + "description": "Force is going to \"force\" Apply requests. It means the user will re-acquire conflicting fields owned by other people. 
Force flag must be unset for non-apply patch requests.", + "name": "force", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/io.cattle.management.v3.RoleTemplate" + } + }, + "401": { + "description": "Unauthorized" + } + }, + "x-kubernetes-action": "patch", + "x-kubernetes-group-version-kind": { + "group": "management.cattle.io", + "kind": "RoleTemplate", + "version": "v3" + } + }, + "parameters": [ + { + "uniqueItems": true, + "type": "string", + "description": "name of the RoleTemplate", + "name": "name", + "in": "path", + "required": true + }, + { + "uniqueItems": true, + "type": "string", + "description": "If 'true', then the output is pretty printed.", + "name": "pretty", + "in": "query" + } + ] + } + }, + "definitions": { + "io.cattle.management.v3.ClusterRoleTemplateBinding": { + "description": "ClusterRoleTemplateBinding is the object representing membership of a subject in a cluster with permissions specified by a given role template.", + "type": "object", + "required": [ + "clusterName", + "roleTemplateName" + ], + "properties": { + "apiVersion": { + "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", + "type": "string" + }, + "clusterName": { + "description": "ClusterName is the metadata.name of the cluster to which a subject is added. Must match the namespace. Immutable.", + "type": "string" + }, + "groupName": { + "description": "GroupName is the name of the group subject added to the cluster. Immutable.", + "type": "string" + }, + "groupPrincipalName": { + "description": "GroupPrincipalName is the name of the group principal subject added to the cluster. Immutable.", + "type": "string" + }, + "kind": { + "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "type": "string" + }, + "metadata": { + "description": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata", + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta" + }, + "roleTemplateName": { + "description": "RoleTemplateName is the name of the role template that defines permissions to perform actions on resources in the cluster. Immutable.", + "type": "string" + }, + "userName": { + "description": "UserName is the name of the user subject added to the cluster. Immutable.", + "type": "string" + }, + "userPrincipalName": { + "description": "UserPrincipalName is the name of the user principal subject added to the cluster. Immutable.", + "type": "string" + } + }, + "x-kubernetes-group-version-kind": [ + { + "group": "management.cattle.io", + "kind": "ClusterRoleTemplateBinding", + "version": "v3" + } + ] + }, + "io.cattle.management.v3.ClusterRoleTemplateBindingList": { + "description": "ClusterRoleTemplateBindingList is a list of ClusterRoleTemplateBinding", + "type": "object", + "required": [ + "items" + ], + "properties": { + "apiVersion": { + "description": "APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", + "type": "string" + }, + "items": { + "description": "List of clusterroletemplatebindings. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md", + "type": "array", + "items": { + "$ref": "#/definitions/io.cattle.management.v3.ClusterRoleTemplateBinding" + } + }, + "kind": { + "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "type": "string" + }, + "metadata": { + "description": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta" + } + }, + "x-kubernetes-group-version-kind": [ + { + "group": "management.cattle.io", + "kind": "ClusterRoleTemplateBindingList", + "version": "v3" + } + ] + }, + "io.cattle.management.v3.GlobalRole": { + "description": "GlobalRole defines rules that can be applied to the local cluster and/or every downstream cluster.", + "type": "object", + "properties": { + "apiVersion": { + "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", + "type": "string" + }, + "builtin": { + "description": "Builtin specifies that this GlobalRole was created by Rancher if true. Immutable.", + "type": "boolean" + }, + "description": { + "description": "Description holds text that describes the resource.", + "type": "string" + }, + "displayName": { + "description": "DisplayName is the human-readable name displayed in the UI for this resource.", + "type": "string" + }, + "inheritedClusterRoles": { + "description": "InheritedClusterRoles are the names of RoleTemplates whose permissions are granted by this GlobalRole in every cluster besides the local cluster. To grant permissions in the local cluster, use the Rules field.", + "type": "array", + "items": { + "type": "string" + } + }, + "kind": { + "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "type": "string" + }, + "metadata": { + "description": "Standard object's metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata", + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta" + }, + "newUserDefault": { + "description": "NewUserDefault specifies that all new users created should be bound to this GlobalRole if true.", + "type": "boolean" + }, + "rules": { + "description": "Rules holds a list of PolicyRules that are applied to the local cluster only.", + "type": "array", + "items": { + "description": "PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to.", + "type": "object", + "required": [ + "verbs" + ], + "properties": { + "apiGroups": { + "description": "APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. \"\" represents the core API group and \"*\" represents all API groups.", + "type": "array", + "items": { + "type": "string" + } + }, + "nonResourceURLs": { + "description": "NonResourceURLs is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path. Since non-resource URLs are not namespaced, this field is only applicable for ClusterRoles referenced from a ClusterRoleBinding. Rules can either apply to API resources (such as \"pods\" or \"secrets\") or non-resource URL paths (such as \"/api\"), but not both.", + "type": "array", + "items": { + "type": "string" + } + }, + "resourceNames": { + "description": "ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed.", + "type": "array", + "items": { + "type": "string" + } + }, + "resources": { + "description": "Resources is a list of resources this rule applies to. '*' represents all resources.", + "type": "array", + "items": { + "type": "string" + } + }, + "verbs": { + "description": "Verbs is a list of Verbs that apply to ALL the ResourceKinds contained in this rule. '*' represents all verbs.", + "type": "array", + "items": { + "type": "string" + } + } + } + } + } + }, + "x-kubernetes-group-version-kind": [ + { + "group": "management.cattle.io", + "kind": "GlobalRole", + "version": "v3" + } + ] + }, + "io.cattle.management.v3.GlobalRoleBinding": { + "description": "GlobalRoleBinding binds a given subject user or group to a GlobalRole.", + "type": "object", + "required": [ + "globalRoleName" + ], + "properties": { + "apiVersion": { + "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", + "type": "string" + }, + "globalRoleName": { + "description": "GlobalRoleName is the name of the Global Role that the subject will be bound to. Immutable.", + "type": "string" + }, + "groupPrincipalName": { + "description": "GroupPrincipalName is the name of the group principal subject to be bound. Immutable.", + "type": "string" + }, + "kind": { + "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "type": "string" + }, + "metadata": { + "description": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata", + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta" + }, + "userName": { + "description": "UserName is the name of the user subject to be bound. Immutable.", + "type": "string" + } + }, + "x-kubernetes-group-version-kind": [ + { + "group": "management.cattle.io", + "kind": "GlobalRoleBinding", + "version": "v3" + } + ] + }, + "io.cattle.management.v3.GlobalRoleBindingList": { + "description": "GlobalRoleBindingList is a list of GlobalRoleBinding", + "type": "object", + "required": [ + "items" + ], + "properties": { + "apiVersion": { + "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", + "type": "string" + }, + "items": { + "description": "List of globalrolebindings. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md", + "type": "array", + "items": { + "$ref": "#/definitions/io.cattle.management.v3.GlobalRoleBinding" + } + }, + "kind": { + "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "type": "string" + }, + "metadata": { + "description": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta" + } + }, + "x-kubernetes-group-version-kind": [ + { + "group": "management.cattle.io", + "kind": "GlobalRoleBindingList", + "version": "v3" + } + ] + }, + "io.cattle.management.v3.GlobalRoleList": { + "description": "GlobalRoleList is a list of GlobalRole", + "type": "object", + "required": [ + "items" + ], + "properties": { + "apiVersion": { + "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", + "type": "string" + }, + "items": { + "description": "List of globalroles. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md", + "type": "array", + "items": { + "$ref": "#/definitions/io.cattle.management.v3.GlobalRole" + } + }, + "kind": { + "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "type": "string" + }, + "metadata": { + "description": "Standard list metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta" + } + }, + "x-kubernetes-group-version-kind": [ + { + "group": "management.cattle.io", + "kind": "GlobalRoleList", + "version": "v3" + } + ] + }, + "io.cattle.management.v3.Project": { + "description": "Project is a group of namespaces. Projects are used to create a multi-tenant environment within a Kubernetes cluster by managing namespace operations, such as role assignments or quotas, as a group.", + "type": "object", + "properties": { + "apiVersion": { + "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", + "type": "string" + }, + "kind": { + "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "type": "string" + }, + "metadata": { + "description": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata", + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta" + }, + "spec": { + "description": "Spec is the specification of the desired configuration for the project.", + "type": "object", + "required": [ + "clusterName", + "displayName" + ], + "properties": { + "clusterName": { + "description": "ClusterName is the name of the cluster the project belongs to. Immutable.", + "type": "string" + }, + "containerDefaultResourceLimit": { + "description": "ContainerDefaultResourceLimit is a specification for the default LimitRange for the namespace. See https://kubernetes.io/docs/concepts/policy/limit-range/ for more details.", + "type": "object", + "properties": { + "limitsCpu": { + "description": "LimitsCPU is the CPU limits across all pods in a non-terminal state.", + "type": "string" + }, + "limitsMemory": { + "description": "LimitsMemory is the memory limits across all pods in a non-terminal state.", + "type": "string" + }, + "requestsCpu": { + "description": "RequestsCPU is the CPU requests limit across all pods in a non-terminal state.", + "type": "string" + }, + "requestsMemory": { + "description": "RequestsMemory is the memory requests limit across all pods in a non-terminal state.", + "type": "string" + } + } + }, + "description": { + "description": "Description is a human-readable description of the project.", + "type": "string" + }, + "displayName": { + "description": "DisplayName is the human-readable name for the project.", + "type": "string" + }, + "enableProjectMonitoring": { + "description": "EnableProjectMonitoring indicates whether Monitoring V1 should be enabled for this project. Deprecated. Use the Monitoring V2 app instead. Defaults to false.", + "type": "boolean" + }, + "namespaceDefaultResourceQuota": { + "description": "NamespaceDefaultResourceQuota is a specification of the default ResourceQuota that a namespace will receive if none is provided. Must provide ResourceQuota if NamespaceDefaultResourceQuota is specified. 
See https://kubernetes.io/docs/concepts/policy/resource-quotas/ for more details.", + "type": "object", + "properties": { + "limit": { + "description": "Limit is the default quota limits applied to new namespaces.", + "type": "object", + "properties": { + "configMaps": { + "description": "ConfigMaps is the total number of ConfigMaps that can exist in the namespace.", + "type": "string" + }, + "limitsCpu": { + "description": "LimitsCPU is the CPU limits across all pods in a non-terminal state.", + "type": "string" + }, + "limitsMemory": { + "description": "LimitsMemory is the memory limits across all pods in a non-terminal state.", + "type": "string" + }, + "persistentVolumeClaims": { + "description": "PersistentVolumeClaims is the total number of PersistentVolumeClaims that can exist in the namespace.", + "type": "string" + }, + "pods": { + "description": "Pods is the total number of Pods in a non-terminal state that can exist in the namespace. A pod is in a terminal state if .status.phase in (Failed, Succeeded) is true.", + "type": "string" + }, + "replicationControllers": { + "description": "ReplicationControllers is the total number of ReplicationControllers that can exist in the namespace.", + "type": "string" + }, + "requestsCpu": { + "description": "RequestsCPU is the CPU requests limit across all pods in a non-terminal state.", + "type": "string" + }, + "requestsMemory": { + "description": "RequestsMemory is the memory requests limit across all pods in a non-terminal state.", + "type": "string" + }, + "requestsStorage": { + "description": "RequestsStorage is the storage requests limit across all persistent volume claims.", + "type": "string" + }, + "secrets": { + "description": "Secrets is the total number of Secrets that can exist in the namespace.", + "type": "string" + }, + "services": { + "description": "Services is the total number of Services that can exist in the namespace.", + "type": "string" + }, + "servicesLoadBalancers": { + "description": "ServicesLoadBalancers is the total number of Services of type LoadBalancer that can exist in the namespace.", + "type": "string" + }, + "servicesNodePorts": { + "description": "ServicesNodePorts is the total number of Services of type NodePort that can exist in the namespace.", + "type": "string" + } + } + } + } + }, + "resourceQuota": { + "description": "ResourceQuota is a specification for the total amount of quota for standard resources that will be shared by all namespaces in the project. Must provide NamespaceDefaultResourceQuota if ResourceQuota is specified. 
See https://kubernetes.io/docs/concepts/policy/resource-quotas/ for more details.", + "type": "object", + "properties": { + "limit": { + "description": "Limit is the total allowable quota limits shared by all namespaces in the project.", + "type": "object", + "properties": { + "configMaps": { + "description": "ConfigMaps is the total number of ConfigMaps that can exist in the namespace.", + "type": "string" + }, + "limitsCpu": { + "description": "LimitsCPU is the CPU limits across all pods in a non-terminal state.", + "type": "string" + }, + "limitsMemory": { + "description": "LimitsMemory is the memory limits across all pods in a non-terminal state.", + "type": "string" + }, + "persistentVolumeClaims": { + "description": "PersistentVolumeClaims is the total number of PersistentVolumeClaims that can exist in the namespace.", + "type": "string" + }, + "pods": { + "description": "Pods is the total number of Pods in a non-terminal state that can exist in the namespace. A pod is in a terminal state if .status.phase in (Failed, Succeeded) is true.", + "type": "string" + }, + "replicationControllers": { + "description": "ReplicationControllers is the total number of ReplicationControllers that can exist in the namespace.", + "type": "string" + }, + "requestsCpu": { + "description": "RequestsCPU is the CPU requests limit across all pods in a non-terminal state.", + "type": "string" + }, + "requestsMemory": { + "description": "RequestsMemory is the memory requests limit across all pods in a non-terminal state.", + "type": "string" + }, + "requestsStorage": { + "description": "RequestsStorage is the storage requests limit across all persistent volume claims.", + "type": "string" + }, + "secrets": { + "description": "Secrets is the total number of Secrets that can exist in the namespace.", + "type": "string" + }, + "services": { + "description": "Services is the total number of Services that can exist in the namespace.", + "type": "string" + }, + "servicesLoadBalancers": { + "description": "ServicesLoadBalancers is the total number of Services of type LoadBalancer that can exist in the namespace.", + "type": "string" + }, + "servicesNodePorts": { + "description": "ServicesNodePorts is the total number of Services of type NodePort that can exist in the namespace.", + "type": "string" + } + } + }, + "usedLimit": { + "description": "UsedLimit is the currently allocated quota for all namespaces in the project.", + "type": "object", + "properties": { + "configMaps": { + "description": "ConfigMaps is the total number of ConfigMaps that can exist in the namespace.", + "type": "string" + }, + "limitsCpu": { + "description": "LimitsCPU is the CPU limits across all pods in a non-terminal state.", + "type": "string" + }, + "limitsMemory": { + "description": "LimitsMemory is the memory limits across all pods in a non-terminal state.", + "type": "string" + }, + "persistentVolumeClaims": { + "description": "PersistentVolumeClaims is the total number of PersistentVolumeClaims that can exist in the namespace.", + "type": "string" + }, + "pods": { + "description": "Pods is the total number of Pods in a non-terminal state that can exist in the namespace. 
A pod is in a terminal state if .status.phase in (Failed, Succeeded) is true.", + "type": "string" + }, + "replicationControllers": { + "description": "ReplicationControllers is the total number of ReplicationControllers that can exist in the namespace.", + "type": "string" + }, + "requestsCpu": { + "description": "RequestsCPU is the CPU requests limit across all pods in a non-terminal state.", + "type": "string" + }, + "requestsMemory": { + "description": "RequestsMemory is the memory requests limit across all pods in a non-terminal state.", + "type": "string" + }, + "requestsStorage": { + "description": "RequestsStorage is the storage requests limit across all persistent volume claims.", + "type": "string" + }, + "secrets": { + "description": "Secrets is the total number of Secrets that can exist in the namespace.", + "type": "string" + }, + "services": { + "description": "Services is the total number of Services that can exist in the namespace.", + "type": "string" + }, + "servicesLoadBalancers": { + "description": "ServicesLoadBalancers is the total number of Services of type LoadBalancer that can exist in the namespace.", + "type": "string" + }, + "servicesNodePorts": { + "description": "ServicesNodePorts is the total number of Services of type NodePort that can exist in the namespace.", + "type": "string" + } + } + } + } + } + } + }, + "status": { + "description": "Status is the most recently observed status of the project.", + "type": "object", + "properties": { + "conditions": { + "description": "Conditions are a set of indicators about aspects of the project.", + "type": "array", + "items": { + "description": "ProjectCondition is the status of an aspect of the project.", + "type": "object", + "required": [ + "status", + "type" + ], + "properties": { + "lastTransitionTime": { + "description": "Last time the condition transitioned from one status to another.", + "type": "string" + }, + "lastUpdateTime": { + "description": "The last time this condition was updated.", + "type": "string" + }, + "message": { + "description": "Human-readable message indicating details about last transition.", + "type": "string" + }, + "reason": { + "description": "The reason for the condition's last transition.", + "type": "string" + }, + "status": { + "description": "Status of the condition, one of True, False, Unknown.", + "type": "string" + }, + "type": { + "description": "Type of project condition.", + "type": "string" + } + } + } + }, + "monitoringStatus": { + "description": "MonitoringStatus is the status of the Monitoring V1 app.", + "type": "object", + "properties": { + "conditions": { + "type": "array", + "items": { + "type": "object", + "required": [ + "status", + "type" + ], + "properties": { + "lastTransitionTime": { + "description": "Last time the condition transitioned from one status to another.", + "type": "string" + }, + "lastUpdateTime": { + "description": "The last time this condition was updated.", + "type": "string" + }, + "message": { + "description": "Human-readable message indicating details about last transition.", + "type": "string" + }, + "reason": { + "description": "The reason for the condition's last transition.", + "type": "string" + }, + "status": { + "description": "Status of the condition, one of True, False, Unknown.", + "type": "string" + }, + "type": { + "description": "Type of cluster condition.", + "type": "string" + } + } + } + }, + "grafanaEndpoint": { + "type": "string" + } + } + }, + "podSecurityPolicyTemplateId": { + "description": 
"PodSecurityPolicyTemplateName is the pod security policy template associated with the project.", + "type": "string" + } } } }, - "io.k8s.apimachinery.pkg.apis.meta.v1.OwnerReference": { - "description": "OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field.", - "type": "object", - "required": [ - "apiVersion", - "kind", - "name", - "uid" - ], - "properties": { - "apiVersion": { - "description": "API version of the referent.", - "type": "string" - }, - "blockOwnerDeletion": { - "description": "If true, AND if the owner has the \"foregroundDeletion\" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. See https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion for how the garbage collector interacts with this field and enforces the foreground deletion. Defaults to false. To set this field, a user needs \"delete\" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned.", - "type": "boolean" - }, - "controller": { - "description": "If true, this reference points to the managing controller.", - "type": "boolean" - }, - "kind": { - "description": "Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", - "type": "string" - }, - "name": { - "description": "Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names", - "type": "string" - }, - "uid": { - "description": "UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids", - "type": "string" + "x-kubernetes-group-version-kind": [ + { + "group": "management.cattle.io", + "kind": "Project", + "version": "v3" + } + ] + }, + "io.cattle.management.v3.ProjectList": { + "description": "ProjectList is a list of Project", + "type": "object", + "required": [ + "items" + ], + "properties": { + "apiVersion": { + "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", + "type": "string" + }, + "items": { + "description": "List of projects. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md", + "type": "array", + "items": { + "$ref": "#/definitions/io.cattle.management.v3.Project" } }, - "x-kubernetes-map-type": "atomic" + "kind": { + "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "type": "string" + }, + "metadata": { + "description": "Standard list metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta" + } }, - "io.k8s.apimachinery.pkg.apis.meta.v1.Patch": { - "description": "Patch is provided to give a concrete name and type to the Kubernetes PATCH request body.", - "type": "object" + "x-kubernetes-group-version-kind": [ + { + "group": "management.cattle.io", + "kind": "ProjectList", + "version": "v3" + } + ] + }, + "io.cattle.management.v3.ProjectRoleTemplateBinding": { + "description": "ProjectRoleTemplateBinding is the object representing membership of a subject in a project with permissions specified by a given role template.", + "type": "object", + "required": [ + "projectName", + "roleTemplateName" + ], + "properties": { + "apiVersion": { + "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", + "type": "string" + }, + "groupName": { + "description": "GroupName is the name of the group subject added to the project. Immutable.", + "type": "string" + }, + "groupPrincipalName": { + "description": "GroupPrincipalName is the name of the group principal subject added to the project. Immutable.", + "type": "string" + }, + "kind": { + "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "type": "string" + }, + "metadata": { + "description": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata", + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta" + }, + "projectName": { + "description": "ProjectName is the name of the project to which a subject is added. Immutable.", + "type": "string" + }, + "roleTemplateName": { + "description": "RoleTemplateName is the name of the role template that defines permissions to perform actions on resources in the project. Immutable.", + "type": "string" + }, + "serviceAccount": { + "description": "ServiceAccount is the name of the service account bound as a subject. Immutable. Deprecated.", + "type": "string" + }, + "userName": { + "description": "UserName is the name of the user subject added to the project. Immutable.", + "type": "string" + }, + "userPrincipalName": { + "description": "UserPrincipalName is the name of the user principal subject added to the project. Immutable.", + "type": "string" + } }, - "io.k8s.apimachinery.pkg.apis.meta.v1.Preconditions": { - "description": "Preconditions must be fulfilled before an operation (update, delete, etc.) 
is carried out.", - "type": "object", - "properties": { - "resourceVersion": { - "description": "Specifies the target ResourceVersion", - "type": "string" - }, - "uid": { - "description": "Specifies the target UID.", - "type": "string" + "x-kubernetes-group-version-kind": [ + { + "group": "management.cattle.io", + "kind": "ProjectRoleTemplateBinding", + "version": "v3" + } + ] + }, + "io.cattle.management.v3.ProjectRoleTemplateBindingList": { + "description": "ProjectRoleTemplateBindingList is a list of ProjectRoleTemplateBinding", + "type": "object", + "required": [ + "items" + ], + "properties": { + "apiVersion": { + "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", + "type": "string" + }, + "items": { + "description": "List of projectroletemplatebindings. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md", + "type": "array", + "items": { + "$ref": "#/definitions/io.cattle.management.v3.ProjectRoleTemplateBinding" } + }, + "kind": { + "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "type": "string" + }, + "metadata": { + "description": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta" } }, - "io.k8s.apimachinery.pkg.apis.meta.v1.Status": { - "description": "Status is a return value for calls that don't return other objects.", - "type": "object", - "properties": { - "apiVersion": { - "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", - "type": "string" - }, - "code": { - "description": "Suggested HTTP return code for this status, 0 if not set.", - "type": "integer", - "format": "int32" - }, - "details": { - "description": "Extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type.", - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.StatusDetails" - }, - "kind": { - "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", - "type": "string" - }, - "message": { - "description": "A human-readable description of the status of this operation.", - "type": "string" - }, - "metadata": { - "description": "Standard list metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta" - }, - "reason": { - "description": "A machine-readable description of why this operation is in the \"Failure\" status. If this value is empty there is no information available. A Reason clarifies an HTTP status code but does not override it.", - "type": "string" - }, - "status": { - "description": "Status of the operation. One of: \"Success\" or \"Failure\". More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status", + "x-kubernetes-group-version-kind": [ + { + "group": "management.cattle.io", + "kind": "ProjectRoleTemplateBindingList", + "version": "v3" + } + ] + }, + "io.cattle.management.v3.RoleTemplate": { + "description": "RoleTemplate holds configuration for a template that is used to create kubernetes Roles and ClusterRoles (in the rbac.authorization.k8s.io group) for a cluster or project.", + "type": "object", + "properties": { + "administrative": { + "description": "Administrative, if false and context is set to cluster, this RoleTemplate will not grant access to \"CatalogTemplates\" and \"CatalogTemplateVersions\" for any project in the cluster. Default is false.", + "type": "boolean" + }, + "apiVersion": { + "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", + "type": "string" + }, + "builtin": { + "description": "Builtin if true specifies that this RoleTemplate was created by Rancher and is immutable. Defaults to false.", + "type": "boolean" + }, + "clusterCreatorDefault": { + "description": "ClusterCreatorDefault if true, a binding with this RoleTemplate will be created for users when they create a new cluster. ClusterCreatorDefault is only evaluated if the context of the RoleTemplate is set to cluster. Defaults to false.", + "type": "boolean" + }, + "context": { + "description": "Context describes if the roleTemplate applies to clusters or projects. Valid values are \"project\", \"cluster\" or \"\".", + "type": "string", + "enum": [ + "project", + "cluster", + "" + ] + }, + "description": { + "description": "Description holds text that describes the resource.", + "type": "string" + }, + "displayName": { + "description": "DisplayName is the human-readable name displayed in the UI for this resource.", + "type": "string" + }, + "external": { + "description": "External if true specifies that rules for this RoleTemplate should be gathered from a ClusterRole with the matching name. If set to true, the Rules on the template will not be evaluated. External's value is only evaluated if the RoleTemplate's context is set to \"cluster\". Defaults to false.", + "type": "boolean" + }, + "hidden": { + "description": "Hidden if true informs the Rancher UI not to display this RoleTemplate. Defaults to false.", + "type": "boolean" + }, + "kind": { + "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "type": "string" + }, + "locked": { + "description": "Locked if true, new bindings will not be able to use this RoleTemplate. Defaults to false.", + "type": "boolean" + }, + "metadata": { + "description": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata", + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta" + }, + "projectCreatorDefault": { + "description": "ProjectCreatorDefault if true, a binding with this RoleTemplate will be created for a user when they create a new project. ProjectCreatorDefault is only evaluated if the context of the RoleTemplate is set to project. Defaults to false.", + "type": "boolean" + }, + "roleTemplateNames": { + "description": "RoleTemplateNames is a list of RoleTemplate names that this RoleTemplate will inherit. This RoleTemplate will grant all rules defined in an inherited RoleTemplate. Inherited RoleTemplates must already exist.", + "type": "array", + "items": { + "type": "string" + } + }, + "rules": { + "description": "Rules hold all the PolicyRules for this RoleTemplate.", + "type": "array", + "items": { + "description": "PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to.", + "type": "object", + "required": [ + "verbs" + ], + "properties": { + "apiGroups": { + "description": "APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. \"\" represents the core API group and \"*\" represents all API groups.", + "type": "array", + "items": { + "type": "string" + } + }, + "nonResourceURLs": { + "description": "NonResourceURLs is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path. Since non-resource URLs are not namespaced, this field is only applicable for ClusterRoles referenced from a ClusterRoleBinding. Rules can either apply to API resources (such as \"pods\" or \"secrets\") or non-resource URL paths (such as \"/api\"), but not both.", + "type": "array", + "items": { + "type": "string" + } + }, + "resourceNames": { + "description": "ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed.", + "type": "array", + "items": { + "type": "string" + } + }, + "resources": { + "description": "Resources is a list of resources this rule applies to. '*' represents all resources.", + "type": "array", + "items": { + "type": "string" + } + }, + "verbs": { + "description": "Verbs is a list of Verbs that apply to ALL the ResourceKinds contained in this rule. 
'*' represents all verbs.", + "type": "array", + "items": { + "type": "string" + } + } + } + } + } + }, + "x-kubernetes-group-version-kind": [ + { + "group": "management.cattle.io", + "kind": "RoleTemplate", + "version": "v3" + } + ] + }, + "io.cattle.management.v3.RoleTemplateList": { + "description": "RoleTemplateList is a list of RoleTemplate", + "type": "object", + "required": [ + "items" + ], + "properties": { + "apiVersion": { + "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", + "type": "string" + }, + "items": { + "description": "List of roletemplates. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md", + "type": "array", + "items": { + "$ref": "#/definitions/io.cattle.management.v3.RoleTemplate" } - ] + }, + "kind": { + "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "type": "string" + }, + "metadata": { + "description": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta" + } }, - "io.k8s.apimachinery.pkg.apis.meta.v1.StatusCause": { - "description": "StatusCause provides more information about an api.Status failure, including cases when multiple errors are encountered.", - "type": "object", - "properties": { - "field": { - "description": "The field of the resource that has caused this error, as named by its JSON serialization. May include dot and postfix notation for nested attributes. Arrays are zero-indexed. Fields may appear more than once in an array of causes due to fields having multiple errors. Optional.\n\nExamples:\n \"name\" - the field \"name\" on the current resource\n \"items[0].name\" - the field \"name\" on the first array entry in \"items\"", - "type": "string" - }, - "message": { - "description": "A human-readable description of the cause of the error. This field may be presented as-is to a reader.", - "type": "string" - }, - "reason": { - "description": "A machine-readable description of the cause of the error. If this value is empty there is no information available.", + "x-kubernetes-group-version-kind": [ + { + "group": "management.cattle.io", + "kind": "RoleTemplateList", + "version": "v3" + } + ] + }, + "io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions": { + "description": "DeleteOptions may be provided when deleting an API object.", + "type": "object", + "properties": { + "apiVersion": { + "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", + "type": "string" + }, + "dryRun": { + "description": "When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed", + "type": "array", + "items": { "type": "string" } + }, + "gracePeriodSeconds": { + "description": "The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.", + "type": "integer", + "format": "int64" + }, + "kind": { + "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "type": "string" + }, + "orphanDependents": { + "description": "Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.", + "type": "boolean" + }, + "preconditions": { + "description": "Must be fulfilled before a deletion is carried out. If not possible, a 409 Conflict status will be returned.", + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Preconditions" + }, + "propagationPolicy": { + "description": "Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.", + "type": "string" } }, - "io.k8s.apimachinery.pkg.apis.meta.v1.StatusDetails": { - "description": "StatusDetails is a set of additional properties that MAY be set by the server to provide additional information about a response. The Reason field of a Status object defines what attributes will be set. Clients must ignore fields that do not match the defined type of each attribute, and should assume that any attribute may be empty, invalid, or under defined.", - "type": "object", - "properties": { - "causes": { - "description": "The Causes array includes more details associated with the StatusReason failure. Not all StatusReasons may provide detailed causes.", - "type": "array", - "items": { - "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.StatusCause" - } - }, - "group": { - "description": "The group attribute of the resource associated with the status StatusReason.", - "type": "string" - }, - "kind": { - "description": "The kind attribute of the resource associated with the status StatusReason. On some operations may differ from the requested resource Kind. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "x-kubernetes-group-version-kind": [ + { + "group": "", + "kind": "DeleteOptions", + "version": "v1" + }, + { + "group": "admission.k8s.io", + "kind": "DeleteOptions", + "version": "v1" + }, + { + "group": "admission.k8s.io", + "kind": "DeleteOptions", + "version": "v1beta1" + }, + { + "group": "admissionregistration.k8s.io", + "kind": "DeleteOptions", + "version": "v1" + }, + { + "group": "admissionregistration.k8s.io", + "kind": "DeleteOptions", + "version": "v1alpha1" + }, + { + "group": "admissionregistration.k8s.io", + "kind": "DeleteOptions", + "version": "v1beta1" + }, + { + "group": "apiextensions.k8s.io", + "kind": "DeleteOptions", + "version": "v1" + }, + { + "group": "apiextensions.k8s.io", + "kind": "DeleteOptions", + "version": "v1beta1" + }, + { + "group": "apiregistration.k8s.io", + "kind": "DeleteOptions", + "version": "v1" + }, + { + "group": "apiregistration.k8s.io", + "kind": "DeleteOptions", + "version": "v1beta1" + }, + { + "group": "apps", + "kind": "DeleteOptions", + "version": "v1" + }, + { + "group": "apps", + "kind": "DeleteOptions", + "version": "v1beta1" + }, + { + "group": "apps", + "kind": "DeleteOptions", + "version": "v1beta2" + }, + { + "group": "authentication.k8s.io", + "kind": "DeleteOptions", + "version": "v1" + }, + { + "group": "authentication.k8s.io", + "kind": "DeleteOptions", + "version": "v1alpha1" + }, + { + "group": "authentication.k8s.io", + "kind": "DeleteOptions", + "version": "v1beta1" + }, + { + "group": "authorization.k8s.io", + "kind": "DeleteOptions", + "version": "v1" + }, + { + "group": "authorization.k8s.io", + "kind": "DeleteOptions", + "version": "v1beta1" + }, + { + "group": "autoscaling", + "kind": "DeleteOptions", + "version": "v1" + }, + { + "group": "autoscaling", + "kind": "DeleteOptions", + "version": "v2" + }, + { + "group": "autoscaling", + "kind": "DeleteOptions", + "version": "v2beta1" + }, + { + "group": "autoscaling", + "kind": "DeleteOptions", + "version": "v2beta2" + }, + { + "group": "batch", + "kind": "DeleteOptions", + "version": "v1" + }, + { + "group": "batch", + "kind": "DeleteOptions", + "version": "v1beta1" + }, + { + "group": "certificates.k8s.io", + "kind": "DeleteOptions", + "version": "v1" + }, + { + "group": "certificates.k8s.io", + "kind": "DeleteOptions", + "version": "v1alpha1" + }, + { + "group": "certificates.k8s.io", + "kind": "DeleteOptions", + "version": "v1beta1" + }, + { + "group": "coordination.k8s.io", + "kind": "DeleteOptions", + "version": "v1" + }, + { + "group": "coordination.k8s.io", + "kind": "DeleteOptions", + "version": "v1beta1" + }, + { + "group": "discovery.k8s.io", + "kind": "DeleteOptions", + "version": "v1" + }, + { + "group": "discovery.k8s.io", + "kind": "DeleteOptions", + "version": "v1beta1" + }, + { + "group": "events.k8s.io", + "kind": "DeleteOptions", + "version": "v1" + }, + { + "group": "events.k8s.io", + "kind": "DeleteOptions", + "version": "v1beta1" + }, + { + "group": "extensions", + "kind": "DeleteOptions", + "version": "v1beta1" + }, + { + "group": "flowcontrol.apiserver.k8s.io", + "kind": "DeleteOptions", + "version": "v1alpha1" + }, + { + "group": "flowcontrol.apiserver.k8s.io", + "kind": "DeleteOptions", + "version": "v1beta1" + }, + { + "group": "flowcontrol.apiserver.k8s.io", + "kind": "DeleteOptions", + "version": "v1beta2" + }, + { + "group": "flowcontrol.apiserver.k8s.io", + "kind": "DeleteOptions", + "version": "v1beta3" + }, + { + "group": 
"imagepolicy.k8s.io", + "kind": "DeleteOptions", + "version": "v1alpha1" + }, + { + "group": "internal.apiserver.k8s.io", + "kind": "DeleteOptions", + "version": "v1alpha1" + }, + { + "group": "networking.k8s.io", + "kind": "DeleteOptions", + "version": "v1" + }, + { + "group": "networking.k8s.io", + "kind": "DeleteOptions", + "version": "v1alpha1" + }, + { + "group": "networking.k8s.io", + "kind": "DeleteOptions", + "version": "v1beta1" + }, + { + "group": "node.k8s.io", + "kind": "DeleteOptions", + "version": "v1" + }, + { + "group": "node.k8s.io", + "kind": "DeleteOptions", + "version": "v1alpha1" + }, + { + "group": "node.k8s.io", + "kind": "DeleteOptions", + "version": "v1beta1" + }, + { + "group": "policy", + "kind": "DeleteOptions", + "version": "v1" + }, + { + "group": "policy", + "kind": "DeleteOptions", + "version": "v1beta1" + }, + { + "group": "rbac.authorization.k8s.io", + "kind": "DeleteOptions", + "version": "v1" + }, + { + "group": "rbac.authorization.k8s.io", + "kind": "DeleteOptions", + "version": "v1alpha1" + }, + { + "group": "rbac.authorization.k8s.io", + "kind": "DeleteOptions", + "version": "v1beta1" + }, + { + "group": "resource.k8s.io", + "kind": "DeleteOptions", + "version": "v1alpha2" + }, + { + "group": "scheduling.k8s.io", + "kind": "DeleteOptions", + "version": "v1" + }, + { + "group": "scheduling.k8s.io", + "kind": "DeleteOptions", + "version": "v1alpha1" + }, + { + "group": "scheduling.k8s.io", + "kind": "DeleteOptions", + "version": "v1beta1" + }, + { + "group": "storage.k8s.io", + "kind": "DeleteOptions", + "version": "v1" + }, + { + "group": "storage.k8s.io", + "kind": "DeleteOptions", + "version": "v1alpha1" + }, + { + "group": "storage.k8s.io", + "kind": "DeleteOptions", + "version": "v1beta1" + } + ] + }, + "io.k8s.apimachinery.pkg.apis.meta.v1.FieldsV1": { + "description": "FieldsV1 stores a set of fields in a data structure like a Trie, in JSON format.\n\nEach key is either a '.' representing the field itself, and will always map to an empty set, or a string representing a sub-field or item. The string will follow one of these four formats: 'f:\u003cname\u003e', where \u003cname\u003e is the name of a field in a struct, or key in a map 'v:\u003cvalue\u003e', where \u003cvalue\u003e is the exact json formatted value of a list item 'i:\u003cindex\u003e', where \u003cindex\u003e is position of a item in a list 'k:\u003ckeys\u003e', where \u003ckeys\u003e is a map of a list item's key fields to their unique values If a key maps to an empty Fields value, the field that key represents is part of the set.\n\nThe exact format is defined in sigs.k8s.io/structured-merge-diff", + "type": "object" + }, + "io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta": { + "description": "ListMeta describes metadata that synthetic resources must have, including lists and various status objects. A resource may have only one of {ObjectMeta, ListMeta}.", + "type": "object", + "properties": { + "continue": { + "description": "continue may be set if the user set a limit on the number of items returned, and indicates that the server has more data available. The value is opaque and may be used to issue another request to the endpoint that served this list to retrieve the next set of available objects. Continuing a consistent list may not be possible if the server configuration has changed or more than a few minutes have passed. 
The resourceVersion field returned when using this continue value will be identical to the value in the first response, unless you have received this token from an error message.", + "type": "string" + }, + "remainingItemCount": { + "description": "remainingItemCount is the number of subsequent items in the list which are not included in this list response. If the list request contained label or field selectors, then the number of remaining items is unknown and the field will be left unset and omitted during serialization. If the list is complete (either because it is not chunking or because this is the last chunk), then there are no more remaining items and this field will be left unset and omitted during serialization. Servers older than v1.15 do not set this field. The intended use of the remainingItemCount is *estimating* the size of a collection. Clients should not rely on the remainingItemCount to be set or to be exact.", + "type": "integer", + "format": "int64" + }, + "resourceVersion": { + "description": "String that identifies the server's internal version of this object that can be used by clients to determine when objects have changed. Value must be treated as opaque by clients and passed unmodified back to the server. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency", + "type": "string" + }, + "selfLink": { + "description": "Deprecated: selfLink is a legacy read-only field that is no longer populated by the system.", + "type": "string" + } + } + }, + "io.k8s.apimachinery.pkg.apis.meta.v1.ManagedFieldsEntry": { + "description": "ManagedFieldsEntry is a workflow-id, a FieldSet and the group version of the resource that the fieldset applies to.", + "type": "object", + "properties": { + "apiVersion": { + "description": "APIVersion defines the version of this resource that this field set applies to. The format is \"group/version\" just like the top-level APIVersion field. It is necessary to track the version of a field set because it cannot be automatically converted.", + "type": "string" + }, + "fieldsType": { + "description": "FieldsType is the discriminator for the different fields format and version. There is currently only one possible value: \"FieldsV1\"", + "type": "string" + }, + "fieldsV1": { + "description": "FieldsV1 holds the first JSON version format as described in the \"FieldsV1\" type.", + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.FieldsV1" + }, + "manager": { + "description": "Manager is an identifier of the workflow managing these fields.", + "type": "string" + }, + "operation": { + "description": "Operation is the type of operation which lead to this ManagedFieldsEntry being created. The only valid values for this field are 'Apply' and 'Update'.", + "type": "string" + }, + "subresource": { + "description": "Subresource is the name of the subresource used to update that object, or empty string if the object was updated through the main resource. The value of this field is used to distinguish between managers, even if they share the same name. For example, a status update will be distinct from a regular update using the same manager name. Note that the APIVersion field is not related to the Subresource field and it always corresponds to the version of the main resource.", + "type": "string" + }, + "time": { + "description": "Time is the timestamp of when the ManagedFields entry was added. 
The timestamp will also be updated if a field is added, the manager changes any of the owned fields value or removes a field. The timestamp does not update when a field is removed from the entry because another manager took it over.", + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Time" + } + } + }, + "io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta": { + "description": "ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create.", + "type": "object", + "properties": { + "annotations": { + "description": "Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations", + "type": "object", + "additionalProperties": { "type": "string" - }, - "name": { - "description": "The name attribute of the resource associated with the status StatusReason (when there is a single name which can be described).", + } + }, + "creationTimestamp": { + "description": "CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.\n\nPopulated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata", + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Time" + }, + "deletionGracePeriodSeconds": { + "description": "Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only.", + "type": "integer", + "format": "int64" + }, + "deletionTimestamp": { + "description": "DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field, once the finalizers list is empty. As long as the finalizers list contains items, deletion is blocked. Once the deletionTimestamp is set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested.\n\nPopulated by the system when a graceful deletion is requested. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata", + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.Time" + }, + "finalizers": { + "description": "Must be empty before the object is deleted from the registry. 
Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list.", + "type": "array", + "items": { "type": "string" }, - "retryAfterSeconds": { - "description": "If specified, the time in seconds before the operation should be retried. Some errors may indicate the client must take an alternate action - for those errors this field may indicate how long to wait before taking the alternate action.", - "type": "integer", - "format": "int32" - }, - "uid": { - "description": "UID of the resource. (when there is a single resource which can be described). More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids", + "x-kubernetes-patch-strategy": "merge" + }, + "generateName": { + "description": "GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server.\n\nIf this field is specified and the generated name exists, the server will return a 409.\n\nApplied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency", + "type": "string" + }, + "generation": { + "description": "A sequence number representing a specific generation of the desired state. Populated by the system. Read-only.", + "type": "integer", + "format": "int64" + }, + "labels": { + "description": "Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels", + "type": "object", + "additionalProperties": { "type": "string" } + }, + "managedFields": { + "description": "ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like \"ci-cd\". The set of fields is always in the version that the workflow used when modifying the object.", + "type": "array", + "items": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ManagedFieldsEntry" + } + }, + "name": { + "description": "Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. 
Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names", + "type": "string" + }, + "namespace": { + "description": "Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the \"default\" namespace, but \"default\" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty.\n\nMust be a DNS_LABEL. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces", + "type": "string" + }, + "ownerReferences": { + "description": "List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller.", + "type": "array", + "items": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.OwnerReference" + }, + "x-kubernetes-patch-merge-key": "uid", + "x-kubernetes-patch-strategy": "merge" + }, + "resourceVersion": { + "description": "An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources.\n\nPopulated by the system. Read-only. Value must be treated as opaque by clients and . More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency", + "type": "string" + }, + "selfLink": { + "description": "Deprecated: selfLink is a legacy read-only field that is no longer populated by the system.", + "type": "string" + }, + "uid": { + "description": "UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations.\n\nPopulated by the system. Read-only. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids", + "type": "string" + } + } + }, + "io.k8s.apimachinery.pkg.apis.meta.v1.OwnerReference": { + "description": "OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field.", + "type": "object", + "required": [ + "apiVersion", + "kind", + "name", + "uid" + ], + "properties": { + "apiVersion": { + "description": "API version of the referent.", + "type": "string" + }, + "blockOwnerDeletion": { + "description": "If true, AND if the owner has the \"foregroundDeletion\" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. See https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion for how the garbage collector interacts with this field and enforces the foreground deletion. Defaults to false. 
To set this field, a user needs \"delete\" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned.", + "type": "boolean" + }, + "controller": { + "description": "If true, this reference points to the managing controller.", + "type": "boolean" + }, + "kind": { + "description": "Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "type": "string" + }, + "name": { + "description": "Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names", + "type": "string" + }, + "uid": { + "description": "UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids", + "type": "string" } }, - "io.k8s.apimachinery.pkg.apis.meta.v1.Time": { - "description": "Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.", - "type": "string", - "format": "date-time" + "x-kubernetes-map-type": "atomic" + }, + "io.k8s.apimachinery.pkg.apis.meta.v1.Patch": { + "description": "Patch is provided to give a concrete name and type to the Kubernetes PATCH request body.", + "type": "object" + }, + "io.k8s.apimachinery.pkg.apis.meta.v1.Preconditions": { + "description": "Preconditions must be fulfilled before an operation (update, delete, etc.) is carried out.", + "type": "object", + "properties": { + "resourceVersion": { + "description": "Specifies the target ResourceVersion", + "type": "string" + }, + "uid": { + "description": "Specifies the target UID.", + "type": "string" + } } }, - "securityDefinitions": { - "BearerToken": { - "description": "Bearer Token authentication", - "type": "apiKey", - "name": "authorization", - "in": "header" + "io.k8s.apimachinery.pkg.apis.meta.v1.Status": { + "description": "Status is a return value for calls that don't return other objects.", + "type": "object", + "properties": { + "apiVersion": { + "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", + "type": "string" + }, + "code": { + "description": "Suggested HTTP return code for this status, 0 if not set.", + "type": "integer", + "format": "int32" + }, + "details": { + "description": "Extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type.", + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.StatusDetails" + }, + "kind": { + "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "type": "string" + }, + "message": { + "description": "A human-readable description of the status of this operation.", + "type": "string" + }, + "metadata": { + "description": "Standard list metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta" + }, + "reason": { + "description": "A machine-readable description of why this operation is in the \"Failure\" status. If this value is empty there is no information available. A Reason clarifies an HTTP status code but does not override it.", + "type": "string" + }, + "status": { + "description": "Status of the operation. One of: \"Success\" or \"Failure\". More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status", + "type": "string" + } + }, + "x-kubernetes-group-version-kind": [ + { + "group": "", + "kind": "Status", + "version": "v1" + }, + { + "group": "resource.k8s.io", + "kind": "Status", + "version": "v1alpha2" + } + ] + }, + "io.k8s.apimachinery.pkg.apis.meta.v1.StatusCause": { + "description": "StatusCause provides more information about an api.Status failure, including cases when multiple errors are encountered.", + "type": "object", + "properties": { + "field": { + "description": "The field of the resource that has caused this error, as named by its JSON serialization. May include dot and postfix notation for nested attributes. Arrays are zero-indexed. Fields may appear more than once in an array of causes due to fields having multiple errors. Optional.\n\nExamples:\n \"name\" - the field \"name\" on the current resource\n \"items[0].name\" - the field \"name\" on the first array entry in \"items\"", + "type": "string" + }, + "message": { + "description": "A human-readable description of the cause of the error. This field may be presented as-is to a reader.", + "type": "string" + }, + "reason": { + "description": "A machine-readable description of the cause of the error. If this value is empty there is no information available.", + "type": "string" + } } }, - "security": [ - { - "BearerToken": [] + "io.k8s.apimachinery.pkg.apis.meta.v1.StatusDetails": { + "description": "StatusDetails is a set of additional properties that MAY be set by the server to provide additional information about a response. The Reason field of a Status object defines what attributes will be set. Clients must ignore fields that do not match the defined type of each attribute, and should assume that any attribute may be empty, invalid, or under defined.", + "type": "object", + "properties": { + "causes": { + "description": "The Causes array includes more details associated with the StatusReason failure. Not all StatusReasons may provide detailed causes.", + "type": "array", + "items": { + "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.StatusCause" + } + }, + "group": { + "description": "The group attribute of the resource associated with the status StatusReason.", + "type": "string" + }, + "kind": { + "description": "The kind attribute of the resource associated with the status StatusReason. On some operations may differ from the requested resource Kind. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "type": "string" + }, + "name": { + "description": "The name attribute of the resource associated with the status StatusReason (when there is a single name which can be described).", + "type": "string" + }, + "retryAfterSeconds": { + "description": "If specified, the time in seconds before the operation should be retried. 
Some errors may indicate the client must take an alternate action - for those errors this field may indicate how long to wait before taking the alternate action.", + "type": "integer", + "format": "int32" + }, + "uid": { + "description": "UID of the resource. (when there is a single resource which can be described). More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids", + "type": "string" + } } - ] - } \ No newline at end of file + }, + "io.k8s.apimachinery.pkg.apis.meta.v1.Time": { + "description": "Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.", + "type": "string", + "format": "date-time" + } + }, + "securityDefinitions": { + "BearerToken": { + "description": "Bearer Token authentication", + "type": "apiKey", + "name": "authorization", + "in": "header" + } + }, + "security": [ + { + "BearerToken": [] + } + ] +} \ No newline at end of file From 3cd88f90287d77ad4e96ad1d1970705774950c1e Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Mon, 20 Nov 2023 18:27:34 -0800 Subject: [PATCH 36/65] Fix etcd.backup_config.retention description --- .../back-up-rancher-launched-kubernetes-clusters.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher-launched-kubernetes-clusters.md b/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher-launched-kubernetes-clusters.md index 6f799ddaa798..dd5e55cefe92 100644 --- a/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher-launched-kubernetes-clusters.md +++ b/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher-launched-kubernetes-clusters.md @@ -67,7 +67,7 @@ The steps to enable recurring snapshots differ based on the version of RKE. backup_config: enabled: true # enables recurring etcd snapshots interval_hours: 6 # time increment between snapshots - retention: 60 # time in days before snapshot purge + retention: 6 # number of snapshots to retain before rotation # Optional S3 s3backupconfig: access_key: "myaccesskey" From 6fc757d01b95f25e4561312d65fe0a35a26aa2d4 Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Tue, 21 Nov 2023 13:22:47 -0800 Subject: [PATCH 37/65] Add Projects API workflow example --- .../version-2.8/api/workflows/projects.md | 109 ++++++++++++++++++ 1 file changed, 109 insertions(+) create mode 100644 versioned_docs/version-2.8/api/workflows/projects.md diff --git a/versioned_docs/version-2.8/api/workflows/projects.md b/versioned_docs/version-2.8/api/workflows/projects.md new file mode 100644 index 000000000000..9af2f25cbe97 --- /dev/null +++ b/versioned_docs/version-2.8/api/workflows/projects.md @@ -0,0 +1,109 @@ +--- +title: Projects +--- + +## Creating a Project + +Project resources may only be created on the management cluster. See below for [creating namespaces under projects in a managed cluster](#creating-a-namespace-in-a-project). + +### Creating a Basic Project + +```bash +kubectl create -f - <:`. 
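+
+To make the creation workflow concrete, the following is a minimal sketch of a Project manifest. It assumes the `management.cattle.io/v3` API group and a `spec.displayName` field, and reuses the illustrative IDs from this page (`c-m-abcde` for the cluster, `p-` as the generated project prefix); treat it as an example rather than a definitive manifest.
+
+```bash
+# Sketch: create a Project in the cluster's namespace on the management cluster.
+kubectl create -f - <<EOF
+apiVersion: management.cattle.io/v3
+kind: Project
+metadata:
+  generateName: p-        # the server appends a unique suffix to form the project ID
+  namespace: c-m-abcde    # the ID of the cluster the project belongs to
+spec:
+  clusterName: c-m-abcde  # set to the same cluster ID as metadata.namespace
+  displayName: example-project
+EOF
+```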
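+
+Similarly, a namespace is placed into a project through an annotation on the namespace in the downstream cluster. A minimal sketch, assuming the `field.cattle.io/projectId` annotation key, whose value pairs the cluster ID and project ID in the `<cluster ID>:<project ID>` format noted above:
+
+```bash
+# Sketch: assign a namespace to project p-vwxyz on the downstream cluster.
+kubectl create -f - <<EOF
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: example-namespace
+  annotations:
+    field.cattle.io/projectId: c-m-abcde:p-vwxyz
+EOF
+```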
+ +## Deleting a Project + +Look up the project to delete in the cluster namespace since it generated using `metadata.generateName`: + +```bash +kubectl --namespace c-m-abcde get projects +``` + +Delete the project under the cluster namespace: + +```bash +kubectl --namespace c-m-abcde delete project p-vwxyz +``` From f28cb43e678d4b5ff7c4d9d42ab1f10b2d06adf8 Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Tue, 21 Nov 2023 13:27:43 -0800 Subject: [PATCH 38/65] Add project workflow to sidebar; remove duplicate API category --- versioned_sidebars/version-2.8-sidebars.json | 18 ++++++++---------- 1 file changed, 8 insertions(+), 10 deletions(-) diff --git a/versioned_sidebars/version-2.8-sidebars.json b/versioned_sidebars/version-2.8-sidebars.json index 0e0f3c923c83..d29e846ed9a0 100644 --- a/versioned_sidebars/version-2.8-sidebars.json +++ b/versioned_sidebars/version-2.8-sidebars.json @@ -1311,17 +1311,15 @@ "items": [ "api/quickstart", { - - } - ] - }, - "contribute-to-rancher", - { - "type": "category", - "label": "API", - "items": [ + "type": "category", + "label": "Example Workflows", + "items": [ + "api/workflows/projects" + ] + }, "api/api-reference" ] - } + }, + "contribute-to-rancher" ] } From 284b6ddb9068fb693ebaa937c2fa3f3cdd7ae5d3 Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Tue, 21 Nov 2023 14:27:41 -0800 Subject: [PATCH 39/65] Update versioned_docs/version-2.8/api/workflows/projects.md Co-authored-by: Marty Hernandez Avedon --- versioned_docs/version-2.8/api/workflows/projects.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/versioned_docs/version-2.8/api/workflows/projects.md b/versioned_docs/version-2.8/api/workflows/projects.md index 9af2f25cbe97..3fce9dfa7534 100644 --- a/versioned_docs/version-2.8/api/workflows/projects.md +++ b/versioned_docs/version-2.8/api/workflows/projects.md @@ -25,7 +25,7 @@ Use `metadata.generateName` to ensure a unique project ID, but note that `kubect Set `metadata.namespace` and `spec.clusterName` to the ID for the cluster the project belongs to. -### Creating a Project With Resource Quota +### Creating a Project With a Resource Quota Refer to [Kubernetes Resource Quota](https://kubernetes.io/docs/concepts/policy/resource-quotas/). From b447e915359d7cfacb0f8892e53bef092496515d Mon Sep 17 00:00:00 2001 From: Marty Hernandez Avedon Date: Wed, 22 Nov 2023 11:07:46 -0500 Subject: [PATCH 40/65] #963 clarify documentation for read only permissions in monitoring UI (#964) * 963 - Clarify documentation around read-only permissions for monitoring. * fixed random Capitalization of Nouns * moved changes to v2.8 dir and revised capitalization again * Update docs/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md * copyedits, added workaround * copyedits * versioning --- .../monitoring-and-alerting/rbac-for-monitoring.md | 4 ++-- .../monitoring-and-alerting/rbac-for-monitoring.md | 4 ++-- .../monitoring-and-alerting/rbac-for-monitoring.md | 4 ++-- 3 files changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md b/docs/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md index 7993c1040c6d..61c9165a54dd 100644 --- a/docs/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md +++ b/docs/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md @@ -197,7 +197,7 @@ The relationship between the default roles deployed by Rancher (i.e. 
cluster-own | project-owner | admin | monitoring-admin | RoleBinding within Project namespace | | project-member | edit | monitoring-edit | RoleBinding within Project namespace | -In addition to these default Roles, the following additional Rancher project roles can be applied to members of your Cluster to provide additional access to Monitoring. These Rancher Roles will be tied to ClusterRoles deployed by the Monitoring chart: +In addition to these default roles, the following Rancher project roles can be applied to members of your cluster to provide access to monitoring. These Rancher roles are tied to ClusterRoles deployed by the monitoring chart:
Non-default Rancher Permissions and Corresponding Kubernetes ClusterRoles
@@ -205,7 +205,7 @@ In addition to these default Roles, the following additional Rancher project rol |--------------------------|-------------------------------|-------|------| | View Monitoring* | [monitoring-ui-view](#monitoring-ui-view) | 2.4.8+ | 9.4.204+ | -\* A User bound to the **View Monitoring** Rancher Role only has permissions to access external Monitoring UIs if provided links to those UIs. In order to access the Monitoring Pane to get those links, the User must be a Project Member of at least one Project. +\* A user bound to the **View Monitoring** Rancher role and read-only project permissions can't view links in the monitoring UI. They can still access external monitoring UIs if provided links to those UIs. If you wish to grant access to users with the **View Monitoring** role and read-only project permissions, move the `cattle-monitoring-system` namespace into the project. ### Differences in 2.5.x diff --git a/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md b/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md index 7993c1040c6d..61c9165a54dd 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md @@ -197,7 +197,7 @@ The relationship between the default roles deployed by Rancher (i.e. cluster-own | project-owner | admin | monitoring-admin | RoleBinding within Project namespace | | project-member | edit | monitoring-edit | RoleBinding within Project namespace | -In addition to these default Roles, the following additional Rancher project roles can be applied to members of your Cluster to provide additional access to Monitoring. These Rancher Roles will be tied to ClusterRoles deployed by the Monitoring chart: +In addition to these default roles, the following Rancher project roles can be applied to members of your cluster to provide access to monitoring. These Rancher roles are tied to ClusterRoles deployed by the monitoring chart:
Non-default Rancher Permissions and Corresponding Kubernetes ClusterRoles
@@ -205,7 +205,7 @@ In addition to these default Roles, the following additional Rancher project rol |--------------------------|-------------------------------|-------|------| | View Monitoring* | [monitoring-ui-view](#monitoring-ui-view) | 2.4.8+ | 9.4.204+ | -\* A User bound to the **View Monitoring** Rancher Role only has permissions to access external Monitoring UIs if provided links to those UIs. In order to access the Monitoring Pane to get those links, the User must be a Project Member of at least one Project. +\* A user bound to the **View Monitoring** Rancher role and read-only project permissions can't view links in the monitoring UI. They can still access external monitoring UIs if provided links to those UIs. If you wish to grant access to users with the **View Monitoring** role and read-only project permissions, move the `cattle-monitoring-system` namespace into the project. ### Differences in 2.5.x diff --git a/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md b/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md index 7993c1040c6d..8a24fbd277d7 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md @@ -197,7 +197,7 @@ The relationship between the default roles deployed by Rancher (i.e. cluster-own | project-owner | admin | monitoring-admin | RoleBinding within Project namespace | | project-member | edit | monitoring-edit | RoleBinding within Project namespace | -In addition to these default Roles, the following additional Rancher project roles can be applied to members of your Cluster to provide additional access to Monitoring. These Rancher Roles will be tied to ClusterRoles deployed by the Monitoring chart: +In addition to these default roles, the following Rancher project roles can be applied to members of your cluster to provide access to monitoring. These Rancher roles are tied to ClusterRoles deployed by the monitoring chart:
Non-default Rancher Permissions and Corresponding Kubernetes ClusterRoles
@@ -205,7 +205,7 @@ In addition to these default Roles, the following additional Rancher project rol |--------------------------|-------------------------------|-------|------| | View Monitoring* | [monitoring-ui-view](#monitoring-ui-view) | 2.4.8+ | 9.4.204+ | -\* A User bound to the **View Monitoring** Rancher Role only has permissions to access external Monitoring UIs if provided links to those UIs. In order to access the Monitoring Pane to get those links, the User must be a Project Member of at least one Project. +\* A user bound to the **View Monitoring** Rancher role and read-only project permissions can't view links in the monitoring UI. They can still access external monitoring UIs if provided links to those UIs. If you wish to grant access to users with the **View Monitoring** role and read-only project permissions, move the `cattle-monitoring-system` namespace into the project. ### Differences in 2.5.x From e76ed17b4da8140dee40b921f1b866b0cee850ab Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Wed, 22 Nov 2023 11:55:24 -0800 Subject: [PATCH 41/65] Update versioned_docs/version-2.8/api/workflows/projects.md --- versioned_docs/version-2.8/api/workflows/projects.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/versioned_docs/version-2.8/api/workflows/projects.md b/versioned_docs/version-2.8/api/workflows/projects.md index 3fce9dfa7534..ddc2f8c5aae7 100644 --- a/versioned_docs/version-2.8/api/workflows/projects.md +++ b/versioned_docs/version-2.8/api/workflows/projects.md @@ -96,7 +96,7 @@ Note the format, `:`. ## Deleting a Project -Look up the project to delete in the cluster namespace since it generated using `metadata.generateName`: +Look up the project to delete in the cluster namespace: ```bash kubectl --namespace c-m-abcde get projects From 0b3fceefdce5f0f2a611d622b99f480371845c71 Mon Sep 17 00:00:00 2001 From: Denise Date: Tue, 28 Nov 2023 10:51:30 -0800 Subject: [PATCH 42/65] Update migrate-rancher-to-new-cluster.md (#1007) * Update migrate-rancher-to-new-cluster.md * Update docs/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md * Update docs/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md * Update docs/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md * 2.7, 2.8 versioning * restoring bulleted links to rke/k3s * 2.5 to 2.8 synced * rm'd info about rke2/k3s from inappropriate version * reverted last commit -- done by mistake --------- Co-authored-by: Marty Hernandez Avedon --- .../migrate-rancher-to-new-cluster.md | 8 ++++++-- .../migrate-rancher-to-new-cluster.md | 4 +++- .../migrate-rancher-to-new-cluster.md | 4 +++- .../migrate-rancher-to-new-cluster.md | 8 ++++++-- .../migrate-rancher-to-new-cluster.md | 8 ++++++-- 5 files changed, 24 insertions(+), 8 deletions(-) diff --git a/docs/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md b/docs/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md index 777f7e6f34f7..978d15a6ea4e 100644 --- a/docs/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md +++ b/docs/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md @@ -8,9 +8,10 @@ title: Migrating Rancher to a New Cluster If you are migrating Rancher to a new Kubernetes cluster, you don't need to install Rancher on 
the new cluster first. If Rancher is restored to a new cluster with Rancher already installed, it can cause problems. + ### Prerequisites -These instructions assume you have [created a backup](back-up-rancher.md) and you have already installed a new Kubernetes cluster where Rancher will be deployed. +These instructions assume that you have [created a backup](back-up-rancher.md) and already installed a new Kubernetes cluster where Rancher will be deployed. The backup is specific to the Rancher application and can only migrate the Rancher application. :::caution @@ -25,6 +26,9 @@ Rancher can be installed on any Kubernetes cluster, including hosted Kubernetes - [RKE Kubernetes installation docs](https://rancher.com/docs/rke/latest/en/installation/) - [K3s Kubernetes installation docs](https://rancher.com/docs/k3s/latest/en/installation/) +Since Rancher can be installed on any Kubernetes cluster, you can use this backup and restore method to migrate Rancher from one Kubernetes cluster to any other Kubernetes cluster. This method *only* migrates Rancher-related resources and won't affect other applications on the cluster. Refer to the [support matrix](https://www.suse.com/lifecycle/) to identify which Kubernetes cluster types and versions are supported for your Rancher version. + + ### 1. Install the rancher-backup Helm chart Install the [rancher-backup chart](https://github.com/rancher/backup-restore-operator/tags), using a version in the 2.x.x major version range: @@ -185,4 +189,4 @@ These values can be reused using the `rancher-values.yaml` file. Be sure to swit helm install rancher rancher-latest/rancher -n cattle-system -f rancher-values.yaml --version x.y.z ``` -::: \ No newline at end of file +::: diff --git a/versioned_docs/version-2.5/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md b/versioned_docs/version-2.5/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md index 29203a983d6b..5439d1e1fcbe 100644 --- a/versioned_docs/version-2.5/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md +++ b/versioned_docs/version-2.5/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md @@ -10,7 +10,7 @@ If you are migrating Rancher to a new Kubernetes cluster, you don't need to inst ### Prerequisites -These instructions assume you have [created a backup](back-up-rancher.md) and you have already installed a new Kubernetes cluster where Rancher will be deployed. +These instructions assume that you have [created a backup](back-up-rancher.md) and already installed a new Kubernetes cluster where Rancher will be deployed. The backup is specific to the Rancher application and can only migrate the Rancher application. It is required to use the same hostname that was set as the server URL in the first cluster. @@ -21,6 +21,8 @@ Rancher can be installed on any Kubernetes cluster, including hosted Kubernetes - [RKE Kubernetes installation docs](https://rancher.com/docs/rke/latest/en/installation/) - [K3s Kubernetes installation docs](https://rancher.com/docs/k3s/latest/en/installation/) +Since Rancher can be installed on any Kubernetes cluster, you can use this backup and restore method to migrate Rancher from one Kubernetes cluster to any other Kubernetes cluster. This method *only* migrates Rancher-related resources and won't affect other applications on the cluster. 
Refer to the [support matrix](https://www.suse.com/lifecycle/) to identify which Kubernetes cluster types and versions are supported for your Rancher version. + ### 1. Install the rancher-backup Helm chart Install version 1.x.x of the rancher-backup chart. The following assumes a connected environment with access to DockerHub: diff --git a/versioned_docs/version-2.6/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md b/versioned_docs/version-2.6/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md index f9ab1e5e9723..72b345cec03c 100644 --- a/versioned_docs/version-2.6/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md +++ b/versioned_docs/version-2.6/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md @@ -10,7 +10,7 @@ If you are migrating Rancher to a new Kubernetes cluster, you don't need to inst ### Prerequisites -These instructions assume you have [created a backup](back-up-rancher.md) and you have already installed a new Kubernetes cluster where Rancher will be deployed. +These instructions assume that you have [created a backup](back-up-rancher.md) and already installed a new Kubernetes cluster where Rancher will be deployed. The backup is specific to the Rancher application and can only migrate the Rancher application. :::caution @@ -25,6 +25,8 @@ Rancher can be installed on any Kubernetes cluster, including hosted Kubernetes - [RKE Kubernetes installation docs](https://rancher.com/docs/rke/latest/en/installation/) - [K3s Kubernetes installation docs](https://rancher.com/docs/k3s/latest/en/installation/) +Since Rancher can be installed on any Kubernetes cluster, you can use this backup and restore method to migrate Rancher from one Kubernetes cluster to any other Kubernetes cluster. This method *only* migrates Rancher-related resources and won't affect other applications on the cluster. Refer to the [support matrix](https://www.suse.com/lifecycle/) to identify which Kubernetes cluster types and versions are supported for your Rancher version. + ### 1. Install the rancher-backup Helm chart Install the [rancher-backup chart](https://github.com/rancher/backup-restore-operator/tags), using a version in the 2.x.x major version range: diff --git a/versioned_docs/version-2.7/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md b/versioned_docs/version-2.7/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md index 777f7e6f34f7..978d15a6ea4e 100644 --- a/versioned_docs/version-2.7/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md +++ b/versioned_docs/version-2.7/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md @@ -8,9 +8,10 @@ title: Migrating Rancher to a New Cluster If you are migrating Rancher to a new Kubernetes cluster, you don't need to install Rancher on the new cluster first. If Rancher is restored to a new cluster with Rancher already installed, it can cause problems. + ### Prerequisites -These instructions assume you have [created a backup](back-up-rancher.md) and you have already installed a new Kubernetes cluster where Rancher will be deployed. 
+These instructions assume that you have [created a backup](back-up-rancher.md) and already installed a new Kubernetes cluster where Rancher will be deployed. The backup is specific to the Rancher application and can only migrate the Rancher application. :::caution @@ -25,6 +26,9 @@ Rancher can be installed on any Kubernetes cluster, including hosted Kubernetes - [RKE Kubernetes installation docs](https://rancher.com/docs/rke/latest/en/installation/) - [K3s Kubernetes installation docs](https://rancher.com/docs/k3s/latest/en/installation/) +Since Rancher can be installed on any Kubernetes cluster, you can use this backup and restore method to migrate Rancher from one Kubernetes cluster to any other Kubernetes cluster. This method *only* migrates Rancher-related resources and won't affect other applications on the cluster. Refer to the [support matrix](https://www.suse.com/lifecycle/) to identify which Kubernetes cluster types and versions are supported for your Rancher version. + + ### 1. Install the rancher-backup Helm chart Install the [rancher-backup chart](https://github.com/rancher/backup-restore-operator/tags), using a version in the 2.x.x major version range: @@ -185,4 +189,4 @@ These values can be reused using the `rancher-values.yaml` file. Be sure to swit helm install rancher rancher-latest/rancher -n cattle-system -f rancher-values.yaml --version x.y.z ``` -::: \ No newline at end of file +::: diff --git a/versioned_docs/version-2.8/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md b/versioned_docs/version-2.8/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md index 777f7e6f34f7..978d15a6ea4e 100644 --- a/versioned_docs/version-2.8/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md +++ b/versioned_docs/version-2.8/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md @@ -8,9 +8,10 @@ title: Migrating Rancher to a New Cluster If you are migrating Rancher to a new Kubernetes cluster, you don't need to install Rancher on the new cluster first. If Rancher is restored to a new cluster with Rancher already installed, it can cause problems. + ### Prerequisites -These instructions assume you have [created a backup](back-up-rancher.md) and you have already installed a new Kubernetes cluster where Rancher will be deployed. +These instructions assume that you have [created a backup](back-up-rancher.md) and already installed a new Kubernetes cluster where Rancher will be deployed. The backup is specific to the Rancher application and can only migrate the Rancher application. :::caution @@ -25,6 +26,9 @@ Rancher can be installed on any Kubernetes cluster, including hosted Kubernetes - [RKE Kubernetes installation docs](https://rancher.com/docs/rke/latest/en/installation/) - [K3s Kubernetes installation docs](https://rancher.com/docs/k3s/latest/en/installation/) +Since Rancher can be installed on any Kubernetes cluster, you can use this backup and restore method to migrate Rancher from one Kubernetes cluster to any other Kubernetes cluster. This method *only* migrates Rancher-related resources and won't affect other applications on the cluster. Refer to the [support matrix](https://www.suse.com/lifecycle/) to identify which Kubernetes cluster types and versions are supported for your Rancher version. + + ### 1. 
Install the rancher-backup Helm chart Install the [rancher-backup chart](https://github.com/rancher/backup-restore-operator/tags), using a version in the 2.x.x major version range: @@ -185,4 +189,4 @@ These values can be reused using the `rancher-values.yaml` file. Be sure to swit helm install rancher rancher-latest/rancher -n cattle-system -f rancher-values.yaml --version x.y.z ``` -::: \ No newline at end of file +::: From 4b2f8125bb130d70dc0e42799f53d4e662d7488a Mon Sep 17 00:00:00 2001 From: "[yzeng25]" <[yzeng25@wisc.edu]> Date: Wed, 29 Nov 2023 19:53:44 +0800 Subject: [PATCH 43/65] fix: update v2.7 cn sidebar --- i18n/zh/docusaurus-plugin-content-docs/version-2.7.json | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.7.json b/i18n/zh/docusaurus-plugin-content-docs/version-2.7.json index be3c0a57b11d..0e8a20ddd226 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.7.json +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.7.json @@ -15,9 +15,9 @@ "message": "部署 Rancher", "description": "The label for category Deploy Rancher in sidebar tutorialSidebar" }, - "sidebar.tutorialSidebar.category.Deploy Rancher Workloads": { + "sidebar.tutorialSidebar.category.Deploy Workloads": { "message": "部署 Rancher 工作负载", - "description": "The label for category Deploy Rancher Workloads in sidebar tutorialSidebar" + "description": "The label for category Deploy Workloads in sidebar tutorialSidebar" }, "sidebar.tutorialSidebar.category.Installation and Upgrade": { "message": "安装和升级", From 07c03a2bd05529b2b6df2422d3cb6a7ce952c150 Mon Sep 17 00:00:00 2001 From: Marty Hernandez Avedon Date: Wed, 29 Nov 2023 16:03:37 -0500 Subject: [PATCH 44/65] fixed typo in command (#1010) --- .../rancher-security/rancher-webhook-hardening.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/versioned_docs/version-2.8/reference-guides/rancher-security/rancher-webhook-hardening.md b/versioned_docs/version-2.8/reference-guides/rancher-security/rancher-webhook-hardening.md index 1f771dfdfe5d..0362deecc5cd 100644 --- a/versioned_docs/version-2.8/reference-guides/rancher-security/rancher-webhook-hardening.md +++ b/versioned_docs/version-2.8/reference-guides/rancher-security/rancher-webhook-hardening.md @@ -127,7 +127,7 @@ The webhook should only accept requests from the Kubernetes API server. By defau 6. Create a configmap in the `cattle-system` namespace on the provisioned cluster with these values: ``` - kubectl --namespace cattle-system create configmap --from-file=rancher-webhook=values.yaml + kubectl --namespace cattle-system create configmap rancher-config --from-file=rancher-webhook=values.yaml ``` The webhook will restart with these values. From e4b5db6fe4ae16d56550604af8a1944e15fe2f46 Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Wed, 29 Nov 2023 13:29:35 -0800 Subject: [PATCH 45/65] Apply 4b2f8125 (sidebar label update) to other versions. 
Originally changed in #926 --- i18n/zh/docusaurus-plugin-content-docs/current.json | 4 ++-- i18n/zh/docusaurus-plugin-content-docs/version-2.0-2.4.json | 6 +++--- i18n/zh/docusaurus-plugin-content-docs/version-2.5.json | 6 +++--- i18n/zh/docusaurus-plugin-content-docs/version-2.6.json | 4 ++-- versioned_sidebars/version-2.0-2.4-sidebars.json | 2 +- versioned_sidebars/version-2.5-sidebars.json | 2 +- versioned_sidebars/version-2.6-sidebars.json | 2 +- versioned_sidebars/version-2.8-sidebars.json | 2 +- 8 files changed, 14 insertions(+), 14 deletions(-) diff --git a/i18n/zh/docusaurus-plugin-content-docs/current.json b/i18n/zh/docusaurus-plugin-content-docs/current.json index eda6706d575c..4995d0e0fecf 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current.json +++ b/i18n/zh/docusaurus-plugin-content-docs/current.json @@ -15,9 +15,9 @@ "message": "部署 Rancher", "description": "The label for category Deploy Rancher in sidebar tutorialSidebar" }, - "sidebar.tutorialSidebar.category.Deploy Rancher Workloads": { + "sidebar.tutorialSidebar.category.Deploy Workloads": { "message": "部署 Rancher 工作负载", - "description": "The label for category Deploy Rancher Workloads in sidebar tutorialSidebar" + "description": "The label for category Deploy Workloads in sidebar tutorialSidebar" }, "sidebar.tutorialSidebar.category.Installation and Upgrade": { "message": "安装和升级", diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.0-2.4.json b/i18n/zh/docusaurus-plugin-content-docs/version-2.0-2.4.json index bb5510705eb3..d43eacb7979c 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.0-2.4.json +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.0-2.4.json @@ -19,9 +19,9 @@ "message": "Deploy Rancher", "description": "The label for category Deploy Rancher in sidebar tutorialSidebar" }, - "sidebar.tutorialSidebar.category.Deploy Rancher Workloads": { - "message": "Deploy Rancher Workloads", - "description": "The label for category Deploy Rancher Workloads in sidebar tutorialSidebar" + "sidebar.tutorialSidebar.category.Deploy Workloads": { + "message": "Deploy Workloads", + "description": "The label for category Deploy Workloads in sidebar tutorialSidebar" }, "sidebar.tutorialSidebar.category.Installation and Upgrade": { "message": "Installation and Upgrade", diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.5.json b/i18n/zh/docusaurus-plugin-content-docs/version-2.5.json index e5408e6ab91e..403c10a755e1 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.5.json +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.5.json @@ -19,9 +19,9 @@ "message": "Deploy Rancher", "description": "The label for category Deploy Rancher in sidebar tutorialSidebar" }, - "sidebar.tutorialSidebar.category.Deploy Rancher Workloads": { - "message": "Deploy Rancher Workloads", - "description": "The label for category Deploy Rancher Workloads in sidebar tutorialSidebar" + "sidebar.tutorialSidebar.category.Deploy Workloads": { + "message": "Deploy Workloads", + "description": "The label for category Deploy Workloads in sidebar tutorialSidebar" }, "sidebar.tutorialSidebar.category.Installation and Upgrade": { "message": "Installation and Upgrade", diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.6.json b/i18n/zh/docusaurus-plugin-content-docs/version-2.6.json index 9ca3c8292298..7d78c7898f4d 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.6.json +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.6.json @@ -15,9 +15,9 @@ "message": "部署 Rancher", "description": 
"The label for category Deploy Rancher in sidebar tutorialSidebar" }, - "sidebar.tutorialSidebar.category.Deploy Rancher Workloads": { + "sidebar.tutorialSidebar.category.Deploy Workloads": { "message": "部署 Rancher 工作负载", - "description": "The label for category Deploy Rancher Workloads in sidebar tutorialSidebar" + "description": "The label for category Deploy Workloads in sidebar tutorialSidebar" }, "sidebar.tutorialSidebar.category.Installation and Upgrade": { "message": "安装和升级", diff --git a/versioned_sidebars/version-2.0-2.4-sidebars.json b/versioned_sidebars/version-2.0-2.4-sidebars.json index e4cd68fb8155..b9f17a2a94dc 100644 --- a/versioned_sidebars/version-2.0-2.4-sidebars.json +++ b/versioned_sidebars/version-2.0-2.4-sidebars.json @@ -48,7 +48,7 @@ }, { "type": "category", - "label": "Deploy Rancher Workloads", + "label": "Deploy Workloads", "link": { "type": "doc", "id": "pages-for-subheaders/deploy-rancher-workloads" diff --git a/versioned_sidebars/version-2.5-sidebars.json b/versioned_sidebars/version-2.5-sidebars.json index 4579266caa8c..e4ac306ebffd 100644 --- a/versioned_sidebars/version-2.5-sidebars.json +++ b/versioned_sidebars/version-2.5-sidebars.json @@ -47,7 +47,7 @@ }, { "type": "category", - "label": "Deploy Rancher Workloads", + "label": "Deploy Workloads", "link": { "type": "doc", "id": "pages-for-subheaders/deploy-rancher-workloads" diff --git a/versioned_sidebars/version-2.6-sidebars.json b/versioned_sidebars/version-2.6-sidebars.json index 366b1dc5f296..e813a1ac7277 100644 --- a/versioned_sidebars/version-2.6-sidebars.json +++ b/versioned_sidebars/version-2.6-sidebars.json @@ -36,7 +36,7 @@ }, { "type": "category", - "label": "Deploy Rancher Workloads", + "label": "Deploy Workloads", "link": { "type": "doc", "id": "pages-for-subheaders/deploy-rancher-workloads" diff --git a/versioned_sidebars/version-2.8-sidebars.json b/versioned_sidebars/version-2.8-sidebars.json index d29e846ed9a0..c75433939841 100644 --- a/versioned_sidebars/version-2.8-sidebars.json +++ b/versioned_sidebars/version-2.8-sidebars.json @@ -37,7 +37,7 @@ "getting-started/quick-start-guides/deploy-rancher-manager/prime", { "type": "category", - "label": "Deploy Rancher Workloads", + "label": "Deploy Workloads", "link": { "type": "doc", "id": "pages-for-subheaders/deploy-rancher-workloads" From 5ae50afe6821317a7bdea66a36e7e55abae2db72 Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Wed, 29 Nov 2023 13:35:19 -0800 Subject: [PATCH 46/65] Remove term from translated sidebar label to match English label --- i18n/zh/docusaurus-plugin-content-docs/current.json | 2 +- i18n/zh/docusaurus-plugin-content-docs/version-2.6.json | 2 +- i18n/zh/docusaurus-plugin-content-docs/version-2.7.json | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/i18n/zh/docusaurus-plugin-content-docs/current.json b/i18n/zh/docusaurus-plugin-content-docs/current.json index 4995d0e0fecf..dbdfd9aa227e 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current.json +++ b/i18n/zh/docusaurus-plugin-content-docs/current.json @@ -16,7 +16,7 @@ "description": "The label for category Deploy Rancher in sidebar tutorialSidebar" }, "sidebar.tutorialSidebar.category.Deploy Workloads": { - "message": "部署 Rancher 工作负载", + "message": "部署工作负载", "description": "The label for category Deploy Workloads in sidebar tutorialSidebar" }, "sidebar.tutorialSidebar.category.Installation and Upgrade": { diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.6.json b/i18n/zh/docusaurus-plugin-content-docs/version-2.6.json index 
7d78c7898f4d..9fc315cd1c8a 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.6.json +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.6.json @@ -16,7 +16,7 @@ "description": "The label for category Deploy Rancher in sidebar tutorialSidebar" }, "sidebar.tutorialSidebar.category.Deploy Workloads": { - "message": "部署 Rancher 工作负载", + "message": "部署工作负载", "description": "The label for category Deploy Workloads in sidebar tutorialSidebar" }, "sidebar.tutorialSidebar.category.Installation and Upgrade": { diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.7.json b/i18n/zh/docusaurus-plugin-content-docs/version-2.7.json index 0e8a20ddd226..0a8cf301cf7a 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.7.json +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.7.json @@ -16,7 +16,7 @@ "description": "The label for category Deploy Rancher in sidebar tutorialSidebar" }, "sidebar.tutorialSidebar.category.Deploy Workloads": { - "message": "部署 Rancher 工作负载", + "message": "部署工作负载", "description": "The label for category Deploy Workloads in sidebar tutorialSidebar" }, "sidebar.tutorialSidebar.category.Installation and Upgrade": { From 662afc641b536bbcaa107a669180a7d65333281e Mon Sep 17 00:00:00 2001 From: Marty Hernandez Avedon Date: Thu, 30 Nov 2023 10:10:34 -0500 Subject: [PATCH 47/65] #420 Completes canonical links task (#1011) * Completes canonical links task * spacing --- .../upgrade-a-hardened-cluster-to-k8s-v1-25.md | 4 ++++ .../installation-requirements/dockershim.md | 4 ++++ .../air-gapped-helm-cli-install/docker-install-commands.md | 4 ++++ .../infrastructure-private-registry.md | 4 ++++ .../quick-start-guides/deploy-rancher-manager/prime.md | 4 ++++ .../cis-scans/configuration-reference.md | 4 ++++ docs/integrations-in-rancher/cis-scans/custom-benchmark.md | 4 ++++ docs/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md | 4 ++++ .../cis-scans/skipped-and-not-applicable-tests.md | 4 ++++ .../aws-cloud-marketplace/adapter-requirements.md | 4 ++++ .../cloud-marketplace/aws-cloud-marketplace/common-issues.md | 4 ++++ .../aws-cloud-marketplace/uninstall-adapter.md | 4 ++++ .../cloud-marketplace/supportconfig.md | 4 ++++ .../fleet-gitops-at-scale/architecture.md | 4 ++++ .../fleet-gitops-at-scale/use-fleet-behind-a-proxy.md | 4 ++++ .../fleet-gitops-at-scale/windows-support.md | 4 ++++ docs/integrations-in-rancher/harvester.md | 4 ++++ .../configuration-options/install-istio-on-rke2-cluster.md | 4 ++++ .../istio/configuration-options/pod-security-policies.md | 4 ++++ .../istio/configuration-options/project-network-isolation.md | 4 ++++ .../selectors-and-scrape-configurations.md | 4 ++++ .../istio/cpu-and-memory-allocations.md | 4 ++++ docs/integrations-in-rancher/istio/disable-istio.md | 4 ++++ docs/integrations-in-rancher/istio/rbac-for-istio.md | 4 ++++ .../custom-resource-configuration/flows-and-clusterflows.md | 4 ++++ .../outputs-and-clusteroutputs.md | 4 ++++ docs/integrations-in-rancher/logging/logging-architecture.md | 4 ++++ .../logging/logging-helm-chart-options.md | 4 ++++ docs/integrations-in-rancher/logging/rbac-for-logging.md | 4 ++++ .../logging/taints-and-tolerations.md | 4 ++++ docs/integrations-in-rancher/longhorn.md | 4 ++++ .../monitoring-and-alerting/built-in-dashboards.md | 4 ++++ .../monitoring-and-alerting/how-monitoring-works.md | 4 ++++ .../monitoring-and-alerting/promql-expressions.md | 4 ++++ .../monitoring-and-alerting/rbac-for-monitoring.md | 5 +++++ .../monitoring-and-alerting/windows-support.md | 4 ++++ 
docs/integrations-in-rancher/neuvector.md | 4 ++++ docs/integrations-in-rancher/opa-gatekeeper.md | 4 ++++ docs/integrations-in-rancher/rancher-extensions.md | 4 ++++ docs/reference-guides/kubernetes-concepts.md | 4 ++++ docs/reference-guides/rancher-cluster-tools.md | 4 ++++ docs/reference-guides/rancher-project-tools.md | 4 ++++ docs/reference-guides/rke1-template-example-yaml.md | 4 ++++ docs/reference-guides/system-tools.md | 4 ++++ .../cis-scans/skipped-and-not-applicable-tests.md | 4 ++++ .../istio/cpu-and-memory-allocations.md | 4 ++++ .../integrations-in-rancher/istio/disable-istio.md | 4 ++++ .../integrations-in-rancher/istio/rbac-for-istio.md | 4 ++++ .../explanations/integrations-in-rancher/opa-gatekeeper.md | 4 ++++ .../infrastructure-private-registry.md | 4 ++++ .../version-2.0-2.4/reference-guides/kubernetes-concepts.md | 4 ++++ .../reference-guides/rancher-cluster-tools.md | 4 ++++ .../reference-guides/rke1-template-example-yaml.md | 4 ++++ .../version-2.0-2.4/reference-guides/system-tools.md | 4 ++++ .../cis-scans/configuration-reference.md | 4 ++++ .../integrations-in-rancher/cis-scans/custom-benchmark.md | 4 ++++ .../integrations-in-rancher/cis-scans/rbac-for-cis-scans.md | 4 ++++ .../cis-scans/skipped-and-not-applicable-tests.md | 4 ++++ .../fleet-gitops-at-scale/architecture.md | 4 ++++ .../fleet-gitops-at-scale/use-fleet-behind-a-proxy.md | 4 ++++ .../fleet-gitops-at-scale/windows-support.md | 4 ++++ .../configuration-options/install-istio-on-rke2-cluster.md | 4 ++++ .../istio/configuration-options/pod-security-policies.md | 4 ++++ .../istio/configuration-options/project-network-isolation.md | 4 ++++ .../selectors-and-scrape-configurations.md | 4 ++++ .../istio/cpu-and-memory-allocations.md | 4 ++++ .../integrations-in-rancher/istio/disable-istio.md | 4 ++++ .../integrations-in-rancher/istio/rbac-for-istio.md | 4 ++++ .../custom-resource-configuration/flows-and-clusterflows.md | 4 ++++ .../outputs-and-clusteroutputs.md | 4 ++++ .../integrations-in-rancher/logging/logging-architecture.md | 4 ++++ .../logging/logging-helm-chart-options.md | 4 ++++ .../integrations-in-rancher/logging/rbac-for-logging.md | 4 ++++ .../logging/taints-and-tolerations.md | 4 ++++ .../monitoring-and-alerting/built-in-dashboards.md | 4 ++++ .../monitoring-and-alerting/how-monitoring-works.md | 4 ++++ .../monitoring-and-alerting/promql-expressions.md | 4 ++++ .../monitoring-and-alerting/rbac-for-monitoring.md | 4 ++++ .../monitoring-and-alerting/windows-support.md | 4 ++++ .../explanations/integrations-in-rancher/opa-gatekeeper.md | 4 ++++ .../infrastructure-private-registry.md | 4 ++++ .../version-2.5/reference-guides/kubernetes-concepts.md | 4 ++++ .../version-2.5/reference-guides/rancher-cluster-tools.md | 4 ++++ .../version-2.5/reference-guides/rancher-project-tools.md | 4 ++++ .../reference-guides/rke1-template-example-yaml.md | 4 ++++ versioned_docs/version-2.5/reference-guides/system-tools.md | 4 ++++ .../installation-requirements/dockershim.md | 4 ++++ .../infrastructure-private-registry.md | 4 ++++ .../cis-scans/configuration-reference.md | 4 ++++ .../integrations-in-rancher/cis-scans/custom-benchmark.md | 4 ++++ .../integrations-in-rancher/cis-scans/rbac-for-cis-scans.md | 4 ++++ .../cis-scans/skipped-and-not-applicable-tests.md | 4 ++++ .../aws-cloud-marketplace/adapter-requirements.md | 4 ++++ .../cloud-marketplace/aws-cloud-marketplace/common-issues.md | 4 ++++ .../aws-cloud-marketplace/uninstall-adapter.md | 4 ++++ .../cloud-marketplace/supportconfig.md | 4 ++++ 
.../fleet-gitops-at-scale/architecture.md | 4 ++++ .../fleet-gitops-at-scale/use-fleet-behind-a-proxy.md | 4 ++++ .../fleet-gitops-at-scale/windows-support.md | 4 ++++ .../version-2.6/integrations-in-rancher/harvester.md | 4 ++++ .../configuration-options/install-istio-on-rke2-cluster.md | 4 ++++ .../istio/configuration-options/pod-security-policies.md | 4 ++++ .../istio/configuration-options/project-network-isolation.md | 4 ++++ .../selectors-and-scrape-configurations.md | 4 ++++ .../istio/cpu-and-memory-allocations.md | 4 ++++ .../integrations-in-rancher/istio/disable-istio.md | 4 ++++ .../integrations-in-rancher/istio/rbac-for-istio.md | 4 ++++ .../custom-resource-configuration/flows-and-clusterflows.md | 4 ++++ .../outputs-and-clusteroutputs.md | 4 ++++ .../integrations-in-rancher/logging/logging-architecture.md | 4 ++++ .../logging/logging-helm-chart-options.md | 4 ++++ .../integrations-in-rancher/logging/rbac-for-logging.md | 4 ++++ .../logging/taints-and-tolerations.md | 4 ++++ .../monitoring-and-alerting/built-in-dashboards.md | 4 ++++ .../monitoring-and-alerting/how-monitoring-works.md | 4 ++++ .../monitoring-and-alerting/promql-expressions.md | 4 ++++ .../monitoring-and-alerting/rbac-for-monitoring.md | 5 +++++ .../monitoring-and-alerting/windows-support.md | 4 ++++ .../version-2.6/integrations-in-rancher/neuvector.md | 4 ++++ .../version-2.6/integrations-in-rancher/opa-gatekeeper.md | 4 ++++ .../version-2.6/reference-guides/kubernetes-concepts.md | 4 ++++ .../version-2.6/reference-guides/rancher-cluster-tools.md | 4 ++++ .../version-2.6/reference-guides/rancher-project-tools.md | 4 ++++ .../reference-guides/rke1-template-example-yaml.md | 4 ++++ versioned_docs/version-2.6/reference-guides/system-tools.md | 4 ++++ .../upgrade-a-hardened-cluster-to-k8s-v1-25.md | 4 ++++ .../installation-requirements/dockershim.md | 4 ++++ .../air-gapped-helm-cli-install/docker-install-commands.md | 4 ++++ .../infrastructure-private-registry.md | 4 ++++ .../quick-start-guides/deploy-rancher-manager/prime.md | 4 ++++ .../cis-scans/configuration-reference.md | 4 ++++ .../integrations-in-rancher/cis-scans/custom-benchmark.md | 4 ++++ .../integrations-in-rancher/cis-scans/rbac-for-cis-scans.md | 4 ++++ .../cis-scans/skipped-and-not-applicable-tests.md | 4 ++++ .../aws-cloud-marketplace/adapter-requirements.md | 4 ++++ .../cloud-marketplace/aws-cloud-marketplace/common-issues.md | 4 ++++ .../aws-cloud-marketplace/uninstall-adapter.md | 4 ++++ .../cloud-marketplace/supportconfig.md | 4 ++++ .../fleet-gitops-at-scale/architecture.md | 4 ++++ .../fleet-gitops-at-scale/use-fleet-behind-a-proxy.md | 4 ++++ .../fleet-gitops-at-scale/windows-support.md | 4 ++++ .../version-2.7/integrations-in-rancher/harvester.md | 4 ++++ .../configuration-options/install-istio-on-rke2-cluster.md | 4 ++++ .../istio/configuration-options/pod-security-policies.md | 4 ++++ .../istio/configuration-options/project-network-isolation.md | 4 ++++ .../selectors-and-scrape-configurations.md | 4 ++++ .../istio/cpu-and-memory-allocations.md | 4 ++++ .../integrations-in-rancher/istio/disable-istio.md | 4 ++++ .../integrations-in-rancher/istio/rbac-for-istio.md | 4 ++++ .../custom-resource-configuration/flows-and-clusterflows.md | 4 ++++ .../outputs-and-clusteroutputs.md | 4 ++++ .../integrations-in-rancher/logging/logging-architecture.md | 4 ++++ .../logging/logging-helm-chart-options.md | 4 ++++ .../integrations-in-rancher/logging/rbac-for-logging.md | 4 ++++ .../logging/taints-and-tolerations.md | 4 ++++ 
.../monitoring-and-alerting/built-in-dashboards.md | 4 ++++ .../monitoring-and-alerting/how-monitoring-works.md | 4 ++++ .../monitoring-and-alerting/promql-expressions.md | 4 ++++ .../monitoring-and-alerting/rbac-for-monitoring.md | 5 +++++ .../monitoring-and-alerting/windows-support.md | 4 ++++ .../version-2.7/integrations-in-rancher/neuvector.md | 4 ++++ .../version-2.7/integrations-in-rancher/opa-gatekeeper.md | 4 ++++ .../integrations-in-rancher/rancher-extensions.md | 4 ++++ .../version-2.7/reference-guides/kubernetes-concepts.md | 4 ++++ .../version-2.7/reference-guides/rancher-cluster-tools.md | 4 ++++ .../version-2.7/reference-guides/rancher-project-tools.md | 4 ++++ .../reference-guides/rke1-template-example-yaml.md | 4 ++++ versioned_docs/version-2.7/reference-guides/system-tools.md | 4 ++++ .../upgrade-a-hardened-cluster-to-k8s-v1-25.md | 4 ++++ .../installation-requirements/dockershim.md | 4 ++++ .../air-gapped-helm-cli-install/docker-install-commands.md | 4 ++++ .../infrastructure-private-registry.md | 4 ++++ .../quick-start-guides/deploy-rancher-manager/prime.md | 4 ++++ .../cis-scans/configuration-reference.md | 4 ++++ .../integrations-in-rancher/cis-scans/custom-benchmark.md | 4 ++++ .../integrations-in-rancher/cis-scans/rbac-for-cis-scans.md | 4 ++++ .../cis-scans/skipped-and-not-applicable-tests.md | 4 ++++ .../aws-cloud-marketplace/adapter-requirements.md | 4 ++++ .../cloud-marketplace/aws-cloud-marketplace/common-issues.md | 4 ++++ .../aws-cloud-marketplace/uninstall-adapter.md | 4 ++++ .../cloud-marketplace/supportconfig.md | 4 ++++ .../configuration-options/install-istio-on-rke2-cluster.md | 4 ++++ .../istio/configuration-options/pod-security-policies.md | 4 ++++ .../istio/configuration-options/project-network-isolation.md | 4 ++++ .../selectors-and-scrape-configurations.md | 4 ++++ .../istio/cpu-and-memory-allocations.md | 4 ++++ .../integrations-in-rancher/istio/disable-istio.md | 4 ++++ .../integrations-in-rancher/istio/rbac-for-istio.md | 4 ++++ .../custom-resource-configuration/flows-and-clusterflows.md | 4 ++++ .../outputs-and-clusteroutputs.md | 4 ++++ .../integrations-in-rancher/logging/logging-architecture.md | 4 ++++ .../logging/logging-helm-chart-options.md | 4 ++++ .../integrations-in-rancher/logging/rbac-for-logging.md | 4 ++++ .../logging/taints-and-tolerations.md | 4 ++++ .../monitoring-and-alerting/built-in-dashboards.md | 4 ++++ .../monitoring-and-alerting/how-monitoring-works.md | 4 ++++ .../monitoring-and-alerting/promql-expressions.md | 4 ++++ .../monitoring-and-alerting/rbac-for-monitoring.md | 5 +++++ .../monitoring-and-alerting/windows-support.md | 4 ++++ .../version-2.8/integrations-in-rancher/opa-gatekeeper.md | 4 ++++ .../integrations-in-rancher/rancher-extensions.md | 4 ++++ .../version-2.8/reference-guides/kubernetes-concepts.md | 4 ++++ .../version-2.8/reference-guides/rancher-cluster-tools.md | 4 ++++ .../version-2.8/reference-guides/rancher-project-tools.md | 4 ++++ .../reference-guides/rke1-template-example-yaml.md | 4 ++++ versioned_docs/version-2.8/reference-guides/system-tools.md | 4 ++++ 206 files changed, 828 insertions(+) diff --git a/docs/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrade-a-hardened-cluster-to-k8s-v1-25.md b/docs/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrade-a-hardened-cluster-to-k8s-v1-25.md index 4d276be3e238..4fead7e7330d 100644 --- 
a/docs/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrade-a-hardened-cluster-to-k8s-v1-25.md +++ b/docs/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrade-a-hardened-cluster-to-k8s-v1-25.md @@ -2,6 +2,10 @@ title: Upgrade a Hardened Custom/Imported Cluster to Kubernetes v1.25 --- + + + + Kubernetes v1.25 changes how clusters describe and implement security policies. From this version forward, [Pod Security Policies (PSPs)](https://kubernetes.io/docs/concepts/security/pod-security-policy/) are no longer available. Kubernetes v1.25 replaces them with new security objects: [Pod Security Standards (PSS)](https://kubernetes.io/docs/concepts/security/pod-security-standards/), and [Pod Security Admissions (PSAs)](https://kubernetes.io/docs/concepts/security/pod-security-admission/). If you have custom or imported hardened clusters, you must take special preparations to ensure that the upgrade from an earlier version of Kubernetes to v1.25 or later goes smoothly. diff --git a/docs/getting-started/installation-and-upgrade/installation-requirements/dockershim.md b/docs/getting-started/installation-and-upgrade/installation-requirements/dockershim.md index e215e0cfc2e6..211141cb7044 100644 --- a/docs/getting-started/installation-and-upgrade/installation-requirements/dockershim.md +++ b/docs/getting-started/installation-and-upgrade/installation-requirements/dockershim.md @@ -2,6 +2,10 @@ title: Dockershim --- + + + + The Dockershim is the CRI compliant layer between the Kubelet and the Docker daemon. As part of the Kubernetes 1.20 release, the [deprecation of the in-tree Dockershim was announced](https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/). For more information on the deprecation and its timelines, see the [Kubernetes Dockershim Deprecation FAQ](https://kubernetes.io/blog/2020/12/02/dockershim-faq/#when-will-dockershim-be-removed). RKE clusters now support the external Dockershim to continue leveraging Docker as the CRI runtime. We now implement the upstream open source community external Dockershim announced by [Mirantis and Docker](https://www.mirantis.com/blog/mirantis-to-take-over-support-of-kubernetes-dockershim-2/) to ensure RKE clusters can continue to leverage Docker. diff --git a/docs/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/docker-install-commands.md b/docs/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/docker-install-commands.md index 3c7c49004432..53bbdc4e9cc7 100644 --- a/docs/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/docker-install-commands.md +++ b/docs/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/docker-install-commands.md @@ -2,6 +2,10 @@ title: Docker Install Commands --- + + + + The Docker installation is for Rancher users who want to test out Rancher. Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server. 
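For the `docker-install-commands.md` page patched above, the single-node installation it describes reduces to one `docker run` invocation. A minimal sketch, assuming the generic `rancher/rancher:latest` image; an air-gapped install would substitute its private registry and a pinned version tag:

```bash
# Test-only, single-node Rancher server. All state lives in this one
# container, so losing the node means losing the installation.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:latest
```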
diff --git a/docs/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry.md b/docs/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry.md index f90665820049..981223575fec 100644 --- a/docs/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry.md +++ b/docs/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry.md @@ -2,6 +2,10 @@ title: '1. Set up Infrastructure and Private Registry' --- + + + + In this section, you will provision the underlying infrastructure for your Rancher management server in an air gapped environment. You will also set up the private container image registry that must be available to your Rancher node(s). An air gapped environment is an environment where the Rancher server is installed offline or behind a firewall. diff --git a/docs/getting-started/quick-start-guides/deploy-rancher-manager/prime.md b/docs/getting-started/quick-start-guides/deploy-rancher-manager/prime.md index 6177d9e5fbd5..26700be8cf43 100644 --- a/docs/getting-started/quick-start-guides/deploy-rancher-manager/prime.md +++ b/docs/getting-started/quick-start-guides/deploy-rancher-manager/prime.md @@ -2,6 +2,10 @@ title: Rancher Prime --- + + + + Rancher v2.7 introduces Rancher Prime, an evolution of the Rancher enterprise offering. Rancher Prime is a new edition of the commercial, enterprise offering built on the same source code. Rancher’s product will therefore continue to be 100% open source with additional value coming in from security assurances, extended lifecycles, access to focused architectures and Kubernetes advisories. Rancher Prime will also offer options to get production support for innovative Rancher projects. With Rancher Prime, installation assets are hosted on a trusted registry owned and managed by Rancher. To get started with Rancher Prime, [go to this page](https://www.rancher.com/quick-start) and fill out the form. diff --git a/docs/integrations-in-rancher/cis-scans/configuration-reference.md b/docs/integrations-in-rancher/cis-scans/configuration-reference.md index fa9012e6ed80..0403956be56b 100644 --- a/docs/integrations-in-rancher/cis-scans/configuration-reference.md +++ b/docs/integrations-in-rancher/cis-scans/configuration-reference.md @@ -2,6 +2,10 @@ title: Configuration --- + + + + This configuration reference is intended to help you manage the custom resources created by the `rancher-cis-benchmark` application. These resources are used for performing CIS scans on a cluster, skipping tests, setting the test profile that will be used during a scan, and other customization. To configure the custom resources, go to the **Cluster Dashboard**. To configure the CIS scans, diff --git a/docs/integrations-in-rancher/cis-scans/custom-benchmark.md b/docs/integrations-in-rancher/cis-scans/custom-benchmark.md index 8ba63bbe8f7b..47853e45c147 100644 --- a/docs/integrations-in-rancher/cis-scans/custom-benchmark.md +++ b/docs/integrations-in-rancher/cis-scans/custom-benchmark.md @@ -2,6 +2,10 @@ title: Creating a Custom Benchmark Version for Running a Cluster Scan --- + + + + Each Benchmark Version defines a set of test configuration files that define the CIS tests to be run by the
kube-bench tool. The `rancher-cis-benchmark` application installs a few default Benchmark Versions which are listed under the CIS Benchmark application menu. diff --git a/docs/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md b/docs/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md index 8a88240d963f..795e64cef29b 100644 --- a/docs/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md +++ b/docs/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md @@ -2,6 +2,10 @@ title: Roles-based Access Control --- + + + + This section describes the permissions required to use the rancher-cis-benchmark App. The rancher-cis-benchmark is a cluster-admin only feature by default. diff --git a/docs/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md b/docs/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md index b11887c76c1c..3920a1588c51 100644 --- a/docs/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md +++ b/docs/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md @@ -2,6 +2,10 @@ title: Skipped and Not Applicable Tests --- + + + + This section lists the tests that are skipped in the permissive test profile for RKE. > All the tests that are skipped and not applicable on this page will be counted as Not Applicable in the v2.5 generated report. The skipped test count will only mention the user-defined skipped tests. This allows user-skipped tests to be distinguished from the tests that are skipped by default in the RKE permissive test profile. diff --git a/docs/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/adapter-requirements.md b/docs/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/adapter-requirements.md index 42937ea22622..0116f4d9f630 100644 --- a/docs/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/adapter-requirements.md +++ b/docs/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/adapter-requirements.md @@ -2,6 +2,10 @@ title: Prerequisites --- + + + + ### 1. Setting Up License Manager and Purchasing Support First, complete the [first step](https://docs.aws.amazon.com/license-manager/latest/userguide/getting-started.html) of the license manager one-time setup. diff --git a/docs/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/common-issues.md b/docs/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/common-issues.md index 02e6bb86340a..c8f8e51cce76 100644 --- a/docs/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/common-issues.md +++ b/docs/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/common-issues.md @@ -2,6 +2,10 @@ title: Common Issues --- + + + + **After installing the adapter, a banner message appears in Rancher that says "AWS Marketplace Adapter: Unable to run the adapter, please check the adapter logs"** This error indicates that while the adapter was installed into the cluster, an error has occurred which prevents it from properly checking-in/checking-out licenses. diff --git a/docs/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/uninstall-adapter.md b/docs/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/uninstall-adapter.md index 16e0ac3443ef..51e11fe03606 100644 --- a/docs/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/uninstall-adapter.md +++ b/docs/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/uninstall-adapter.md @@ -2,6 +2,10 @@ title: Uninstalling The Adapter --- + + + + ### 1.
Uninstall the adapter chart using helm. ```bash diff --git a/docs/integrations-in-rancher/cloud-marketplace/supportconfig.md b/docs/integrations-in-rancher/cloud-marketplace/supportconfig.md index 9d87830a82eb..6eecac1132a7 100644 --- a/docs/integrations-in-rancher/cloud-marketplace/supportconfig.md +++ b/docs/integrations-in-rancher/cloud-marketplace/supportconfig.md @@ -2,6 +2,10 @@ title: Supportconfig bundle --- + + + + After installing the CSP adapter, you will have the ability to generate a supportconfig bundle. This bundle is a tar file which can be used to quickly provide information to support. These bundles can be created through Rancher or through direct access to the cluster that Rancher is installed on. Note that accessing through Rancher is preferred. diff --git a/docs/integrations-in-rancher/fleet-gitops-at-scale/architecture.md b/docs/integrations-in-rancher/fleet-gitops-at-scale/architecture.md index f012a3a9921c..9d64e38de41b 100644 --- a/docs/integrations-in-rancher/fleet-gitops-at-scale/architecture.md +++ b/docs/integrations-in-rancher/fleet-gitops-at-scale/architecture.md @@ -2,6 +2,10 @@ title: Architecture --- + + + + Fleet can manage deployments from git of raw Kubernetes YAML, Helm charts, or Kustomize or any combination of the three. Regardless of the source, all resources are dynamically turned into Helm charts, and Helm is used as the engine to deploy everything in the cluster. This gives you a high degree of control, consistency, and auditability. Fleet focuses not only on the ability to scale, but also on giving you a high degree of control and visibility into exactly what is installed on the cluster. ![Architecture](/img/fleet-architecture.svg) diff --git a/docs/integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy.md b/docs/integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy.md index e6a3f8cf9618..6160a19672a3 100644 --- a/docs/integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy.md +++ b/docs/integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy.md @@ -2,6 +2,10 @@ title: Using Fleet Behind a Proxy --- + + + + In this section, you'll learn how to enable Fleet in a setup that has a Rancher server with a public IP and a Kubernetes cluster that has no public IP, but is configured to use a proxy. Rancher does not establish connections with registered downstream clusters. The Rancher agent deployed on the downstream cluster must be able to establish the connection with Rancher. diff --git a/docs/integrations-in-rancher/fleet-gitops-at-scale/windows-support.md b/docs/integrations-in-rancher/fleet-gitops-at-scale/windows-support.md index aea98b74dbc0..f7bf04055f98 100644 --- a/docs/integrations-in-rancher/fleet-gitops-at-scale/windows-support.md +++ b/docs/integrations-in-rancher/fleet-gitops-at-scale/windows-support.md @@ -2,6 +2,10 @@ title: Windows Support --- + + + + Prior to Rancher v2.5.6, the `agent` did not have native Windows manifests on downstream clusters with Windows nodes. This would result in a failing `agent` pod for the cluster.
If you are upgrading from an older version of Rancher to v2.5.6+, you can deploy a working `agent` with the following workflow *in the downstream cluster*: diff --git a/docs/integrations-in-rancher/harvester.md b/docs/integrations-in-rancher/harvester.md index 66fc9631970c..300a5826e162 100644 --- a/docs/integrations-in-rancher/harvester.md +++ b/docs/integrations-in-rancher/harvester.md @@ -2,6 +2,10 @@ title: Harvester Integration --- + + + + Introduced in Rancher v2.6.1, [Harvester](https://docs.harvesterhci.io/) is an open-source hyper-converged infrastructure (HCI) software built on Kubernetes. Harvester installs on bare metal servers and provides integrated virtualization and distributed storage capabilities. Although Harvester operates using Kubernetes, it does not require users to know Kubernetes concepts, making it a more user-friendly application. ### Feature Flag diff --git a/docs/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md b/docs/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md index 0c229f288b4a..5bec3737edf1 100644 --- a/docs/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md +++ b/docs/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md @@ -2,6 +2,10 @@ title: Additional Steps for Installing Istio on RKE2 and K3s Clusters --- + + + + When installing or upgrading the Istio Helm chart through **Apps,** 1. If you are installing the chart, click **Customize Helm options before install** and click **Next**. diff --git a/docs/integrations-in-rancher/istio/configuration-options/pod-security-policies.md b/docs/integrations-in-rancher/istio/configuration-options/pod-security-policies.md index e774b97bf8cd..b157cb46fd43 100644 --- a/docs/integrations-in-rancher/istio/configuration-options/pod-security-policies.md +++ b/docs/integrations-in-rancher/istio/configuration-options/pod-security-policies.md @@ -2,6 +2,10 @@ title: Enable Istio with Pod Security Policies --- + + + + If you have restrictive Pod Security Policies enabled, then Istio may not be able to function correctly, because it needs certain permissions in order to install itself and manage pod infrastructure. In this section, we will configure a cluster with PSPs enabled for an Istio install, and also set up the Istio CNI plugin. The Istio CNI plugin removes the need for each application pod to have a privileged `NET_ADMIN` container. For further information, see the [Istio CNI Plugin docs](https://istio.io/docs/setup/additional-setup/cni). Please note that the [Istio CNI Plugin is in alpha](https://istio.io/about/feature-stages/). 
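The RKE2/K3s steps above continue past this hunk with chart value customizations. As a rough sketch of the kind of override involved, the Istio CNI component is pointed at the distribution's CNI directories; the paths below are assumptions based on common K3s defaults, not values taken from this patch:

```bash
# Assumed K3s CNI locations; RKE2 typically uses /opt/cni/bin and
# /etc/cni/net.d instead. Verify the paths on your own nodes first.
cat > istio-cni-overrides.yaml <<'EOF'
cni:
  cniBinDir: /var/lib/rancher/k3s/data/current/bin
  cniConfDir: /var/lib/rancher/k3s/agent/etc/cni/net.d
EOF
```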
diff --git a/docs/integrations-in-rancher/istio/configuration-options/project-network-isolation.md b/docs/integrations-in-rancher/istio/configuration-options/project-network-isolation.md index 16fde314183b..f51a033ce3e3 100644 --- a/docs/integrations-in-rancher/istio/configuration-options/project-network-isolation.md +++ b/docs/integrations-in-rancher/istio/configuration-options/project-network-isolation.md @@ -2,6 +2,10 @@ title: Additional Steps for Project Network Isolation --- + + + + In clusters where: - You are using the Canal network plugin with Rancher before v2.5.8, or you are using Rancher v2.5.8+ with any RKE network plug-in that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin diff --git a/docs/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md b/docs/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md index 10907f4718cf..29b51149c972 100644 --- a/docs/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md +++ b/docs/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md @@ -2,6 +2,10 @@ title: Selectors and Scrape Configs --- + + + + The Monitoring app sets `prometheus.prometheusSpec.ignoreNamespaceSelectors=false`, which enables monitoring across all namespaces by default. This ensures you can view traffic, metrics and graphs for resources deployed in a namespace with the `istio-injection=enabled` label. diff --git a/docs/integrations-in-rancher/istio/cpu-and-memory-allocations.md b/docs/integrations-in-rancher/istio/cpu-and-memory-allocations.md index 37472e328c79..10fe77c9ec4b 100644 --- a/docs/integrations-in-rancher/istio/cpu-and-memory-allocations.md +++ b/docs/integrations-in-rancher/istio/cpu-and-memory-allocations.md @@ -2,6 +2,10 @@ title: CPU and Memory Allocations --- + + + + This section describes the minimum recommended computing resources for the Istio components in a cluster. The CPU and memory allocations for each component are [configurable.](#configuring-resource-allocations) diff --git a/docs/integrations-in-rancher/istio/disable-istio.md b/docs/integrations-in-rancher/istio/disable-istio.md index 052122c4891f..c5f0ae6ce003 100644 --- a/docs/integrations-in-rancher/istio/disable-istio.md +++ b/docs/integrations-in-rancher/istio/disable-istio.md @@ -2,6 +2,10 @@ title: Disabling Istio --- + + + + This section describes how to uninstall Istio in a cluster, or disable it in a namespace or workload. ## Uninstall Istio in a Cluster diff --git a/docs/integrations-in-rancher/istio/rbac-for-istio.md b/docs/integrations-in-rancher/istio/rbac-for-istio.md index e33bdb725403..b92096b6c9b5 100644 --- a/docs/integrations-in-rancher/istio/rbac-for-istio.md +++ b/docs/integrations-in-rancher/istio/rbac-for-istio.md @@ -2,6 +2,10 @@ title: Role-based Access Control --- + + + + This section describes the permissions required to access Istio features.
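Several of the Istio pages above key off the `istio-injection=enabled` namespace label. A quick sketch, assuming a namespace named `demo`:

```bash
# Opt the namespace in to automatic Envoy sidecar injection so its
# workloads show up in Istio traffic metrics and graphs.
kubectl label namespace demo istio-injection=enabled --overwrite

# Confirm which namespaces are opted in.
kubectl get namespaces -L istio-injection
```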
The rancher istio chart installs three `ClusterRoles` diff --git a/docs/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md b/docs/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md index e4020351ab15..d6d2ccd67e27 100644 --- a/docs/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md +++ b/docs/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md @@ -2,6 +2,10 @@ title: Flows and ClusterFlows --- + + + + See the [Logging operator documentation](https://kube-logging.github.io/docs/configuration/flow/) for the full details on how to configure `Flows` and `ClusterFlows`. See [Rancher Integration with Logging Services: Troubleshooting](../../../pages-for-subheaders/logging.md#The-Logging-Buffer-Overloads-Pods) for how to resolve memory problems with the logging buffer. diff --git a/docs/integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs.md b/docs/integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs.md index aa558385bf02..3ae66c9145a5 100644 --- a/docs/integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs.md +++ b/docs/integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs.md @@ -2,6 +2,10 @@ title: Outputs and ClusterOutputs --- + + + + See the [Logging operator documentation](https://kube-logging.github.io/docs/configuration/flow/) for the full details on how to configure `Flows` and `ClusterFlows`. See [Rancher Integration with Logging Services: Troubleshooting](../../../pages-for-subheaders/logging.md#The-Logging-Buffer-Overloads-Pods) for how to resolve memory problems with the logging buffer. diff --git a/docs/integrations-in-rancher/logging/logging-architecture.md b/docs/integrations-in-rancher/logging/logging-architecture.md index 560235f4f7c1..958bc5d30695 100644 --- a/docs/integrations-in-rancher/logging/logging-architecture.md +++ b/docs/integrations-in-rancher/logging/logging-architecture.md @@ -2,6 +2,10 @@ title: Architecture --- + + + + This section summarizes the architecture of the Rancher logging application. For more details about how the Logging operator works, see the [official documentation.](https://kube-logging.github.io/docs/#architecture) diff --git a/docs/integrations-in-rancher/logging/logging-helm-chart-options.md b/docs/integrations-in-rancher/logging/logging-helm-chart-options.md index fd6299abc4d2..643114f6d7cf 100644 --- a/docs/integrations-in-rancher/logging/logging-helm-chart-options.md +++ b/docs/integrations-in-rancher/logging/logging-helm-chart-options.md @@ -2,6 +2,10 @@ title: rancher-logging Helm Chart Options --- + + + + ### Enable/Disable Windows Node Logging You can enable or disable Windows node logging by setting `global.cattle.windows.enabled` to either `true` or `false` in the `values.yaml`. diff --git a/docs/integrations-in-rancher/logging/rbac-for-logging.md b/docs/integrations-in-rancher/logging/rbac-for-logging.md index 627e3533c2cc..e718dce1887d 100644 --- a/docs/integrations-in-rancher/logging/rbac-for-logging.md +++ b/docs/integrations-in-rancher/logging/rbac-for-logging.md @@ -2,6 +2,10 @@ title: Role-based Access Control for Logging --- + + + + Rancher logging has two roles, `logging-admin` and `logging-view`. 
- `logging-admin` gives users full access to namespaced `Flows` and `Outputs` diff --git a/docs/integrations-in-rancher/logging/taints-and-tolerations.md b/docs/integrations-in-rancher/logging/taints-and-tolerations.md index c5cf0e355783..327cf554fdaa 100644 --- a/docs/integrations-in-rancher/logging/taints-and-tolerations.md +++ b/docs/integrations-in-rancher/logging/taints-and-tolerations.md @@ -2,6 +2,10 @@ title: Working with Taints and Tolerations --- + + + + "Tainting" a Kubernetes node causes pods to repel running on that node. Unless the pods have a `toleration` for that node's taint, they will run on other nodes in the cluster. diff --git a/docs/integrations-in-rancher/longhorn.md b/docs/integrations-in-rancher/longhorn.md index a8870b4e3a8e..d9daf5421a5a 100644 --- a/docs/integrations-in-rancher/longhorn.md +++ b/docs/integrations-in-rancher/longhorn.md @@ -2,6 +2,10 @@ title: Longhorn - Cloud native distributed block storage for Kubernetes --- + + + + [Longhorn](https://longhorn.io/) is a lightweight, reliable, and easy-to-use distributed block storage system for Kubernetes. Longhorn is free, open source software. Originally developed by Rancher Labs, it is now being developed as a sandbox project of the Cloud Native Computing Foundation. It can be installed on any Kubernetes cluster with Helm, with kubectl, or with the Rancher UI. You can learn more about its architecture [here.](https://longhorn.io/docs/latest/concepts/) diff --git a/docs/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md b/docs/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md index 9a464ba3ede2..0719a2be2362 100644 --- a/docs/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md +++ b/docs/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md @@ -2,6 +2,10 @@ title: Built-in Dashboards --- + + + + ## Grafana UI [Grafana](https://grafana.com/grafana/) allows you to query, visualize, alert on and understand your metrics no matter where they are stored. Create, explore, and share dashboards with your team and foster a data driven culture. diff --git a/docs/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md b/docs/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md index de110cd3d805..bea67b1dc8f3 100644 --- a/docs/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md +++ b/docs/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md @@ -2,6 +2,10 @@ title: How Monitoring Works --- + + + + ## 1. Architecture Overview _**The following sections describe how data flows through the Monitoring V2 application:**_ diff --git a/docs/integrations-in-rancher/monitoring-and-alerting/promql-expressions.md b/docs/integrations-in-rancher/monitoring-and-alerting/promql-expressions.md index e301620d92ac..0ea6b134300e 100644 --- a/docs/integrations-in-rancher/monitoring-and-alerting/promql-expressions.md +++ b/docs/integrations-in-rancher/monitoring-and-alerting/promql-expressions.md @@ -2,6 +2,10 @@ title: PromQL Expression Reference --- + + + + The PromQL expressions in this doc can be used to configure alerts. 
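As a hedged illustration of the alerting expressions the PromQL reference pages collect, the query below computes the fraction of allocatable CPU requested on each node; the Prometheus URL is a placeholder, and the metric names assume kube-state-metrics v2:

```bash
# Requested CPU as a share of allocatable CPU, per node.
QUERY='sum(kube_pod_container_resource_requests{resource="cpu"}) by (node)
  / sum(kube_node_status_allocatable{resource="cpu"}) by (node)'

# Run the query through the Prometheus HTTP API.
curl -G "http://prometheus.example.com:9090/api/v1/query" \
  --data-urlencode "query=$QUERY"
```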
For more information about querying the Prometheus time series database, refer to the official [Prometheus documentation.](https://prometheus.io/docs/prometheus/latest/querying/basics/) diff --git a/docs/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md b/docs/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md index 61c9165a54dd..40014bcba442 100644 --- a/docs/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md +++ b/docs/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md @@ -1,6 +1,11 @@ --- title: Role-based Access Control --- + + + + + This section describes the expectations for RBAC for Rancher Monitoring. diff --git a/docs/integrations-in-rancher/monitoring-and-alerting/windows-support.md b/docs/integrations-in-rancher/monitoring-and-alerting/windows-support.md index 6fa7e1d84d01..8869e2cefe52 100644 --- a/docs/integrations-in-rancher/monitoring-and-alerting/windows-support.md +++ b/docs/integrations-in-rancher/monitoring-and-alerting/windows-support.md @@ -2,6 +2,10 @@ title: Windows Cluster Support for Monitoring V2 --- + + + + _Available as of v2.5.8_ Starting at Monitoring V2 14.5.100 (used by default in Rancher 2.5.8), Monitoring V2 can now be deployed on a Windows cluster and will scrape metrics from Windows nodes using [prometheus-community/windows_exporter](https://github.com/prometheus-community/windows_exporter) (previously named `wmi_exporter`). diff --git a/docs/integrations-in-rancher/neuvector.md b/docs/integrations-in-rancher/neuvector.md index bac4c2c0849d..fbc5eccec1c5 100644 --- a/docs/integrations-in-rancher/neuvector.md +++ b/docs/integrations-in-rancher/neuvector.md @@ -2,6 +2,10 @@ title: NeuVector Integration --- + + + + ### NeuVector Integration in Rancher [NeuVector 5.x](https://open-docs.neuvector.com/) is an open-source container-centric security platform that is integrated with Rancher. NeuVector offers real-time compliance, visibility, and protection for critical applications and data during runtime. NeuVector provides a firewall, container process/file system monitoring, security auditing with CIS benchmarks, and vulnerability scanning. For more information on Rancher security, please see the [security documentation](../pages-for-subheaders/rancher-security.md). diff --git a/docs/integrations-in-rancher/opa-gatekeeper.md b/docs/integrations-in-rancher/opa-gatekeeper.md index 26b49d6f8919..f2185dff6855 100644 --- a/docs/integrations-in-rancher/opa-gatekeeper.md +++ b/docs/integrations-in-rancher/opa-gatekeeper.md @@ -2,6 +2,10 @@ title: OPA Gatekeeper --- + + + + To ensure consistency and compliance, every organization needs the ability to define and enforce policies in its environment in an automated way. [OPA (Open Policy Agent)](https://www.openpolicyagent.org/) is a policy engine that facilitates policy-based control for cloud native environments. Rancher provides the ability to enable OPA Gatekeeper in Kubernetes clusters, and also installs a couple of built-in policy definitions, which are also called constraint templates. OPA provides a high-level declarative language that lets you specify policy as code and ability to extend simple APIs to offload policy decision-making. 
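To make the constraint templates mentioned in the OPA Gatekeeper pages concrete, here is the stock required-labels example from the upstream Gatekeeper documentation; the resource names are illustrative:

```bash
# Define a reusable policy (ConstraintTemplate), then instantiate it
# as a constraint requiring an "owner" label on every namespace.
kubectl apply -f - <<'EOF'
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
EOF

kubectl apply -f - <<'EOF'
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["owner"]
EOF
```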
diff --git a/docs/integrations-in-rancher/rancher-extensions.md b/docs/integrations-in-rancher/rancher-extensions.md index 0a86dad7d2da..34929c4bf113 100644 --- a/docs/integrations-in-rancher/rancher-extensions.md +++ b/docs/integrations-in-rancher/rancher-extensions.md @@ -2,6 +2,10 @@ title: Rancher Extensions --- + + + + New in Rancher v2.7.0, Rancher introduces **extensions**. Extensions allow users, developers, partners, and customers to extend and enhance the Rancher UI. In addition, users can make changes and create enhancements to their UI functionality independent of Rancher releases. Extensions will enable users to build on top of Rancher to better tailor it to their respective environments. Note that users will also have the ability to update to new versions as well as roll back to a previous version. Extensions are Helm charts that can only be installed once into a cluster; therefore, these charts have been simplified and separated from the general Helm charts listed under **Apps**. diff --git a/docs/reference-guides/kubernetes-concepts.md b/docs/reference-guides/kubernetes-concepts.md index 631b7cfd118e..707fb8e1c514 100644 --- a/docs/reference-guides/kubernetes-concepts.md +++ b/docs/reference-guides/kubernetes-concepts.md @@ -2,6 +2,10 @@ title: Kubernetes Concepts --- + + + + This page explains concepts related to Kubernetes that are important for understanding how Rancher works. The descriptions below provide a simplified overview of Kubernetes components. For more details, refer to the [official documentation on Kubernetes components.](https://kubernetes.io/docs/concepts/overview/components/) ## About Docker diff --git a/docs/reference-guides/rancher-cluster-tools.md b/docs/reference-guides/rancher-cluster-tools.md index 41305db554b6..63d8490bc412 100644 --- a/docs/reference-guides/rancher-cluster-tools.md +++ b/docs/reference-guides/rancher-cluster-tools.md @@ -2,6 +2,10 @@ title: Cluster Tools for Logging, Monitoring, and Visibility --- + + + + Rancher contains a variety of tools that aren't included in Kubernetes to assist in your DevOps operations. Rancher can integrate with external services to help your clusters run more efficiently. Tools are divided into the following categories: diff --git a/docs/reference-guides/rancher-project-tools.md b/docs/reference-guides/rancher-project-tools.md index 567cc5cf408c..f199d246d2c3 100644 --- a/docs/reference-guides/rancher-project-tools.md +++ b/docs/reference-guides/rancher-project-tools.md @@ -2,6 +2,10 @@ title: Project Tools for Logging, Monitoring, and Visibility --- + + + + Rancher contains a variety of tools that aren't included in Kubernetes to assist in your DevOps operations. Rancher can integrate with external services to help your clusters run more efficiently. diff --git a/docs/reference-guides/rke1-template-example-yaml.md b/docs/reference-guides/rke1-template-example-yaml.md index d14e3863ff97..5827dc55fccf 100644 --- a/docs/reference-guides/rke1-template-example-yaml.md +++ b/docs/reference-guides/rke1-template-example-yaml.md @@ -2,6 +2,10 @@ title: RKE1 Example YAML --- + + + + Below is an example RKE template configuration file for reference. The YAML in the RKE template uses the same customization that is used when you create an RKE cluster. However, since the YAML is within the context of a Rancher provisioned RKE cluster, the customization from the RKE docs needs to be nested under the `rancher_kubernetes_engine` directive.
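For the RKE1 example-YAML pages above, a fragment illustrating the nesting they describe. In RKE template YAML the top-level key is normally spelled `rancher_kubernetes_engine_config`, and the values below are placeholders rather than recommendations:

```bash
# Options that the RKE docs show at the top level of cluster.yml are
# nested one level down inside a Rancher-provisioned RKE template.
cat > rke-template-fragment.yaml <<'EOF'
rancher_kubernetes_engine_config:
  network:
    plugin: canal
  services:
    etcd:
      backup_config:
        enabled: true
        interval_hours: 12
EOF
```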
diff --git a/docs/reference-guides/system-tools.md b/docs/reference-guides/system-tools.md index d9480976757f..8c5aa2e441cd 100644 --- a/docs/reference-guides/system-tools.md +++ b/docs/reference-guides/system-tools.md @@ -2,6 +2,10 @@ title: System Tools --- + + + + :::note System Tools has been deprecated since June 2022. diff --git a/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md b/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md index 86585268cf81..7e7efea32f58 100644 --- a/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md +++ b/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md @@ -2,6 +2,10 @@ title: Skipped and Not Applicable Tests --- + + + + This section lists the tests that are skipped in the permissive test profile for RKE. All the tests that are skipped and not applicable on this page will be counted as Not Applicable in the generated report. The skipped test count will only mention the user-defined skipped tests. This allows user-skipped tests to be distinguished from the tests that are skipped by default in the RKE permissive test profile. diff --git a/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/istio/cpu-and-memory-allocations.md b/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/istio/cpu-and-memory-allocations.md index 01545b281bc5..21388afa8712 100644 --- a/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/istio/cpu-and-memory-allocations.md +++ b/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/istio/cpu-and-memory-allocations.md @@ -2,6 +2,10 @@ title: CPU and Memory Allocations --- + + + + _Available as of v2.3.0_ This section describes the minimum recommended computing resources for the Istio components in a cluster. diff --git a/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/istio/disable-istio.md b/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/istio/disable-istio.md index 6baf169687a9..a56cb88822c9 100644 --- a/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/istio/disable-istio.md +++ b/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/istio/disable-istio.md @@ -2,6 +2,10 @@ title: Disabling Istio --- + + + + This section describes how to disable Istio in a cluster, namespace, or workload. ## Disable Istio in a Cluster diff --git a/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/istio/rbac-for-istio.md b/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/istio/rbac-for-istio.md index 0fccf48ad80c..d1a2034f07cd 100644 --- a/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/istio/rbac-for-istio.md +++ b/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/istio/rbac-for-istio.md @@ -2,6 +2,10 @@ title: Role-based Access Control --- + + + + This section describes the permissions required to access Istio features and how to configure access to the Kiali and Jaeger visualizations. 
## Cluster-level Access diff --git a/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/opa-gatekeeper.md b/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/opa-gatekeeper.md index 80b6ca1249a2..2bf382a5451e 100644 --- a/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/opa-gatekeeper.md +++ b/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/opa-gatekeeper.md @@ -2,6 +2,10 @@ title: OPA Gatekeeper --- + + + + _Available as of v2.4.0_ To ensure consistency and compliance, every organization needs the ability to define and enforce policies in its environment in an automated way. [OPA (Open Policy Agent)](https://www.openpolicyagent.org/) is a policy engine that facilitates policy-based control for cloud native environments. Rancher provides the ability to enable OPA Gatekeeper in Kubernetes clusters, and also installs a couple of built-in policy definitions, which are also called constraint templates. diff --git a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry.md b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry.md index 66c14f380a42..788a7723d19e 100644 --- a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry.md +++ b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry.md @@ -2,6 +2,10 @@ title: '1. Set up Infrastructure and Private Registry' --- + + + + In this section, you will provision the underlying infrastructure for your Rancher management server in an air gapped environment. You will also set up the private Docker registry that must be available to your Rancher node(s). An air gapped environment is an environment where the Rancher server is installed offline or behind a firewall. diff --git a/versioned_docs/version-2.0-2.4/reference-guides/kubernetes-concepts.md b/versioned_docs/version-2.0-2.4/reference-guides/kubernetes-concepts.md index c2918cc6de77..08f0b39de3cd 100644 --- a/versioned_docs/version-2.0-2.4/reference-guides/kubernetes-concepts.md +++ b/versioned_docs/version-2.0-2.4/reference-guides/kubernetes-concepts.md @@ -2,6 +2,10 @@ title: Kubernetes Concepts --- + + + + This page explains concepts related to Kubernetes that are important for understanding how Rancher works. The descriptions below provide a simplified overview of Kubernetes components. For more details, refer to the [official documentation on Kubernetes components.](https://kubernetes.io/docs/concepts/overview/components/) diff --git a/versioned_docs/version-2.0-2.4/reference-guides/rancher-cluster-tools.md b/versioned_docs/version-2.0-2.4/reference-guides/rancher-cluster-tools.md index 22150f89f8e3..42a8ea82b608 100644 --- a/versioned_docs/version-2.0-2.4/reference-guides/rancher-cluster-tools.md +++ b/versioned_docs/version-2.0-2.4/reference-guides/rancher-cluster-tools.md @@ -2,6 +2,10 @@ title: Tools for Logging, Monitoring, and More --- + + + + Rancher contains a variety of tools that aren't included in Kubernetes to assist in your DevOps operations. Rancher can integrate with external services to help your clusters run more efficiently.
diff --git a/versioned_docs/version-2.0-2.4/reference-guides/rke1-template-example-yaml.md b/versioned_docs/version-2.0-2.4/reference-guides/rke1-template-example-yaml.md index d14e3863ff97..5827dc55fccf 100644 --- a/versioned_docs/version-2.0-2.4/reference-guides/rke1-template-example-yaml.md +++ b/versioned_docs/version-2.0-2.4/reference-guides/rke1-template-example-yaml.md @@ -2,6 +2,10 @@ title: RKE1 Example YAML --- + + + + Below is an example RKE template configuration file for reference. The YAML in the RKE template uses the same customization that is used when you create an RKE cluster. However, since the YAML is within the context of a Rancher provisioned RKE cluster, the customization from the RKE docs needs to be nested under the `rancher_kubernetes_engine` directive. diff --git a/versioned_docs/version-2.0-2.4/reference-guides/system-tools.md b/versioned_docs/version-2.0-2.4/reference-guides/system-tools.md index ddde9d731a62..7027331038a9 100644 --- a/versioned_docs/version-2.0-2.4/reference-guides/system-tools.md +++ b/versioned_docs/version-2.0-2.4/reference-guides/system-tools.md @@ -2,6 +2,10 @@ title: System Tools --- + + + + System Tools is a tool to perform operational tasks on [Rancher Launched Kubernetes](../pages-for-subheaders/launch-kubernetes-with-rancher.md) clusters or [installations of Rancher on an RKE cluster.](../pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster.md) The tasks include: * Collect logging and system metrics from nodes. diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/cis-scans/configuration-reference.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/cis-scans/configuration-reference.md index df1ca5c2d007..74d88127011f 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/cis-scans/configuration-reference.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/cis-scans/configuration-reference.md @@ -2,6 +2,10 @@ title: Configuration --- + + + + This configuration reference is intended to help you manage the custom resources created by the `rancher-cis-benchmark` application. These resources are used for performing CIS scans on a cluster, skipping tests, setting the test profile that will be used during a scan, and other customization. To configure the custom resources, go to the **Cluster Explorer** in the Rancher UI. In the dropdown menu in the top left corner, click **Cluster Explorer > CIS Benchmark.** diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/cis-scans/custom-benchmark.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/cis-scans/custom-benchmark.md index 97a39d1a611b..212d7b8d2c79 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/cis-scans/custom-benchmark.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/cis-scans/custom-benchmark.md @@ -2,6 +2,10 @@ title: Creating a Custom Benchmark Version for Running a Cluster Scan --- + + + + _Available as of v2.5.4_ Each Benchmark Version defines a set of test configuration files that define the CIS tests to be run by the kube-bench tool.
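Among the CIS Benchmark custom resources these pages manage is a `ClusterScan`, which triggers a scan against a chosen profile. A minimal sketch; the API group matches the rancher-cis-benchmark chart, and the profile name is an example that must exist in your cluster:

```bash
# Start a CIS scan using a built-in profile. List available profiles
# with: kubectl get clusterscanprofiles
kubectl apply -f - <<'EOF'
apiVersion: cis.cattle.io/v1
kind: ClusterScan
metadata:
  name: example-scan
spec:
  scanProfileName: rke-profile-permissive
EOF
```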
diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md index 8a88240d963f..795e64cef29b 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md @@ -2,6 +2,10 @@ title: Roles-based Access Control --- + + + + This section describes the permissions required to use the rancher-cis-benchmark App. The rancher-cis-benchmark is a cluster-admin only feature by default. diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md index b11887c76c1c..3920a1588c51 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md @@ -2,6 +2,10 @@ title: Skipped and Not Applicable Tests --- + + + + This section lists the tests that are skipped in the permissive test profile for RKE. > All the tests that are skipped and not applicable on this page will be counted as Not Applicable in the v2.5 generated report. The skipped test count will only mention the user-defined skipped tests. This allows user-skipped tests to be distinguished from the tests that are skipped by default in the RKE permissive test profile. diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/fleet-gitops-at-scale/architecture.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/fleet-gitops-at-scale/architecture.md index f012a3a9921c..9d64e38de41b 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/fleet-gitops-at-scale/architecture.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/fleet-gitops-at-scale/architecture.md @@ -2,6 +2,10 @@ title: Architecture --- + + + + Fleet can manage deployments from git of raw Kubernetes YAML, Helm charts, or Kustomize or any combination of the three. Regardless of the source, all resources are dynamically turned into Helm charts, and Helm is used as the engine to deploy everything in the cluster. This gives you a high degree of control, consistency, and auditability. Fleet focuses not only on the ability to scale, but also on giving you a high degree of control and visibility into exactly what is installed on the cluster. ![Architecture](/img/fleet-architecture.svg) diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy.md index cf50340b7c7e..3d4d2734b30f 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy.md @@ -2,6 +2,10 @@ title: Using Fleet Behind a Proxy --- + + + + _Available as of v2.5.8_ In this section, you'll learn how to enable Fleet in a setup that has a Rancher server with a public IP and a Kubernetes cluster that has no public IP, but is configured to use a proxy.
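Because the agent on the downstream cluster initiates the connection to Rancher, the proxy settings described on this page land on the agent side. A minimal sketch, assuming kubectl access to the downstream cluster; the proxy address is a placeholder:

```bash
# Hypothetical proxy endpoint; replace with your own.
PROXY="http://proxy.example.com:8080"

# Point the cattle-cluster-agent at the proxy. NO_PROXY keeps
# in-cluster and node-local traffic off the proxy.
kubectl -n cattle-system set env deployment/cattle-cluster-agent \
  HTTP_PROXY="$PROXY" \
  HTTPS_PROXY="$PROXY" \
  NO_PROXY="127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local"
```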
diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/fleet-gitops-at-scale/windows-support.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/fleet-gitops-at-scale/windows-support.md index aea98b74dbc0..f7bf04055f98 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/fleet-gitops-at-scale/windows-support.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/fleet-gitops-at-scale/windows-support.md @@ -2,6 +2,10 @@ title: Windows Support --- + + + + Prior to Rancher v2.5.6, the `agent` did not have native Windows manifests on downstream clusters with Windows nodes. This would result in a failing `agent` pod for the cluster. If you are upgrading from an older version of Rancher to v2.5.6+, you can deploy a working `agent` with the following workflow *in the downstream cluster*: diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md index 24b9be34781d..b1e1387c3938 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md @@ -2,6 +2,10 @@ title: Additional Steps for Installing Istio on an RKE2 Cluster --- + + + + Through the **Cluster Explorer,** when installing or upgrading Istio through **Apps & Marketplace,** 1. Click **Components.** diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/configuration-options/pod-security-policies.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/configuration-options/pod-security-policies.md index 99cf8714c406..f9a2ad7ca5c1 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/configuration-options/pod-security-policies.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/configuration-options/pod-security-policies.md @@ -2,6 +2,10 @@ title: Enable Istio with Pod Security Policies --- + + + + If you have restrictive Pod Security Policies enabled, then Istio may not be able to function correctly, because it needs certain permissions in order to install itself and manage pod infrastructure. In this section, we will configure a cluster with PSPs enabled for an Istio install, and also set up the Istio CNI plugin. The Istio CNI plugin removes the need for each application pod to have a privileged `NET_ADMIN` container. For further information, see the [Istio CNI Plugin docs](https://istio.io/docs/setup/additional-setup/cni). Please note that the [Istio CNI Plugin is in alpha](https://istio.io/about/feature-stages/). 
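A minimal sketch of enabling the CNI plugin alongside the chart, assuming the `istio_cni.enabled` values key and the `rancher-charts` repository alias; verify both against the chart's `values.yaml`:

```bash
# Sketch: enable the Istio CNI plugin via a Helm values override
# (values key and chart location are assumptions to verify).
cat > istio-values.yaml <<'EOF'
istio_cni:
  enabled: true
EOF

helm upgrade --install rancher-istio rancher-charts/rancher-istio \
  --namespace istio-system -f istio-values.yaml
```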
diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/configuration-options/project-network-isolation.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/configuration-options/project-network-isolation.md index 16fde314183b..f51a033ce3e3 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/configuration-options/project-network-isolation.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/configuration-options/project-network-isolation.md @@ -2,6 +2,10 @@ title: Additional Steps for Project Network Isolation --- + + + + In clusters where: - You are using the Canal network plugin with Rancher before v2.5.8, or you are using Rancher v2.5.8+ with any RKE network plug-in that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md index c4ee390f09a2..449d1a336dda 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md @@ -2,6 +2,10 @@ title: Selectors and Scrape Configs --- + + + + The Monitoring app sets `prometheus.prometheusSpec.ignoreNamespaceSelectors=false`, which enables monitoring across all namespaces by default. This ensures you can view traffic, metrics and graphs for resources deployed in a namespace with the `istio-injection=enabled` label. diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/cpu-and-memory-allocations.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/cpu-and-memory-allocations.md index 5c23cb4d36c5..d032d192b488 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/cpu-and-memory-allocations.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/cpu-and-memory-allocations.md @@ -2,6 +2,10 @@ title: CPU and Memory Allocations --- + + + + This section describes the minimum recommended computing resources for the Istio components in a cluster. The CPU and memory allocations for each component are [configurable.](#configuring-resource-allocations) diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/disable-istio.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/disable-istio.md index 3e8839400701..dc699980d672 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/disable-istio.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/disable-istio.md @@ -2,6 +2,10 @@ title: Disabling Istio --- + + + + This section describes how to uninstall Istio in a cluster, or disable it for a namespace or workload.
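For the per-namespace and per-workload cases, a sketch using standard Istio mechanisms (`my-app` and `my-workload` are placeholder names):

```bash
# Stop automatic sidecar injection for a namespace by removing its label.
kubectl label namespace my-app istio-injection-

# Opt a single workload out of injection with a pod-template annotation.
kubectl patch deployment my-workload -n my-app --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"sidecar.istio.io/inject":"false"}}}}}'
```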
## Uninstall Istio in a Cluster diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/rbac-for-istio.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/rbac-for-istio.md index e33bdb725403..b92096b6c9b5 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/rbac-for-istio.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/rbac-for-istio.md @@ -2,6 +2,10 @@ title: Role-based Access Control --- + + + + This section describes the permissions required to access Istio features. The rancher istio chart installs three `ClusterRoles` diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md index f67c358978a9..446662f54212 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md @@ -2,6 +2,10 @@ title: Flows and ClusterFlows --- + + + + See the [Logging operator documentation](https://kube-logging.github.io/docs/configuration/flow/) for the full details on how to configure `Flows` and `ClusterFlows`. See [Rancher Integration with Logging Services: Troubleshooting](../../../../pages-for-subheaders/logging.md#The-Logging-Buffer-Overloads-Pods) for how to resolve memory problems with the logging buffer. diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs.md index b2245c017a5e..933705bc0ee8 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs.md @@ -2,6 +2,10 @@ title: Outputs and ClusterOutputs --- + + + + See the [Logging operator documentation](https://kube-logging.github.io/docs/configuration/output/) for the full details on how to configure `Outputs` and `ClusterOutputs`. See [Rancher Integration with Logging Services: Troubleshooting](../../../../pages-for-subheaders/logging.md#The-Logging-Buffer-Overloads-Pods) for how to resolve memory problems with the logging buffer. diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/logging-architecture.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/logging-architecture.md index e8cd6aef7896..5c333557df57 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/logging-architecture.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/logging-architecture.md @@ -2,6 +2,10 @@ title: Architecture --- + + + + This section summarizes the architecture of the Rancher logging application.
For more details about how the Logging operator works, see the [official documentation.](https://kube-logging.github.io/docs/#architecture) diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/logging-helm-chart-options.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/logging-helm-chart-options.md index 4a1f496a82b4..17b63428227a 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/logging-helm-chart-options.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/logging-helm-chart-options.md @@ -2,6 +2,10 @@ title: rancher-logging Helm Chart Options --- + + + + ### Enable/Disable Windows Node Logging _Available as of v2.5.8_ diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/rbac-for-logging.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/rbac-for-logging.md index 53bad86389f2..c82aabe4a2ed 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/rbac-for-logging.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/rbac-for-logging.md @@ -2,6 +2,10 @@ title: Role-based Access Control for Logging --- + + + + Rancher logging has two roles, `logging-admin` and `logging-view`. - `logging-admin` gives users full access to namespaced `Flows` and `Outputs` diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/taints-and-tolerations.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/taints-and-tolerations.md index c15ebd53d114..89633fa68d66 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/taints-and-tolerations.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/taints-and-tolerations.md @@ -2,6 +2,10 @@ title: Working with Taints and Tolerations --- + + + + "Tainting" a Kubernetes node repels pods from running on that node. Unless the pods have a `toleration` for that node's taint, they will run on other nodes in the cluster. diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md index 70393d7b08ac..0f6e660a9550 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md @@ -2,6 +2,10 @@ title: Built-in Dashboards --- + + + + ## Grafana UI [Grafana](https://grafana.com/grafana/) allows you to query, visualize, alert on and understand your metrics no matter where they are stored. Create, explore, and share dashboards with your team and foster a data driven culture. diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md index 045e7cbcc7e5..e87edfc7a982 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md @@ -2,6 +2,10 @@ title: How Monitoring Works --- + + + + ## 1.
Architecture Overview _**The following sections describe how data flows through the Monitoring V2 application:**_ diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/monitoring-and-alerting/promql-expressions.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/monitoring-and-alerting/promql-expressions.md index e301620d92ac..0ea6b134300e 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/monitoring-and-alerting/promql-expressions.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/monitoring-and-alerting/promql-expressions.md @@ -2,6 +2,10 @@ title: PromQL Expression Reference --- + + + + The PromQL expressions in this doc can be used to configure alerts. For more information about querying the Prometheus time series database, refer to the official [Prometheus documentation.](https://prometheus.io/docs/prometheus/latest/querying/basics/) diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md index 6d7b14a4d3b4..5534f56cb2ba 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md @@ -2,6 +2,10 @@ title: Role-based Access Control --- + + + + This section describes the expectations for RBAC for Rancher Monitoring. ## Cluster Admins diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/monitoring-and-alerting/windows-support.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/monitoring-and-alerting/windows-support.md index 2a76dc8cf233..8f4088cb08cf 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/monitoring-and-alerting/windows-support.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/monitoring-and-alerting/windows-support.md @@ -2,6 +2,10 @@ title: Windows Cluster Support for Monitoring V2 --- + + + + _Available as of v2.5.8_ Starting at Monitoring V2 14.5.100 (used by default in Rancher 2.5.8), Monitoring V2 can now be deployed on a Windows cluster and will scrape metrics from Windows nodes using [prometheus-community/windows_exporter](https://github.com/prometheus-community/windows_exporter) (previously named `wmi_exporter`). diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/opa-gatekeeper.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/opa-gatekeeper.md index f952b80bf3ef..422fdbbbd458 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/opa-gatekeeper.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/opa-gatekeeper.md @@ -2,6 +2,10 @@ title: OPA Gatekeeper --- + + + + To ensure consistency and compliance, every organization needs the ability to define and enforce policies in its environment in an automated way. [OPA (Open Policy Agent)](https://www.openpolicyagent.org/) is a policy engine that facilitates policy-based control for cloud native environments. Rancher provides the ability to enable OPA Gatekeeper in Kubernetes clusters, and also installs a couple of built-in policy definitions, which are also called constraint templates. 
OPA provides a high-level declarative language that lets you specify policy as code and simple APIs to offload policy decision-making. diff --git a/versioned_docs/version-2.5/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry.md b/versioned_docs/version-2.5/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry.md index b8985effb415..cca465cee503 100644 --- a/versioned_docs/version-2.5/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry.md +++ b/versioned_docs/version-2.5/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry.md @@ -2,6 +2,10 @@ title: '1. Set up Infrastructure and Private Registry' --- + + + + In this section, you will provision the underlying infrastructure for your Rancher management server in an air gapped environment. You will also set up the private Docker registry that must be available to your Rancher node(s). An air gapped environment is an environment where the Rancher server is installed offline or behind a firewall. diff --git a/versioned_docs/version-2.5/reference-guides/kubernetes-concepts.md b/versioned_docs/version-2.5/reference-guides/kubernetes-concepts.md index c2918cc6de77..08f0b39de3cd 100644 --- a/versioned_docs/version-2.5/reference-guides/kubernetes-concepts.md +++ b/versioned_docs/version-2.5/reference-guides/kubernetes-concepts.md @@ -2,6 +2,10 @@ title: Kubernetes Concepts --- + + + + This page explains concepts related to Kubernetes that are important for understanding how Rancher works. The descriptions below provide a simplified overview of Kubernetes components. For more details, refer to the [official documentation on Kubernetes components.](https://kubernetes.io/docs/concepts/overview/components/) diff --git a/versioned_docs/version-2.5/reference-guides/rancher-cluster-tools.md b/versioned_docs/version-2.5/reference-guides/rancher-cluster-tools.md index 25743e44b37d..a68d7aeb4dcb 100644 --- a/versioned_docs/version-2.5/reference-guides/rancher-cluster-tools.md +++ b/versioned_docs/version-2.5/reference-guides/rancher-cluster-tools.md @@ -2,6 +2,10 @@ title: Tools for Logging, Monitoring, and Visibility --- + + + + Rancher contains a variety of tools that aren't included in Kubernetes to assist in your DevOps operations. Rancher can integrate with external services to help your clusters run more efficiently. ## Logging diff --git a/versioned_docs/version-2.5/reference-guides/rancher-project-tools.md b/versioned_docs/version-2.5/reference-guides/rancher-project-tools.md index 3a16e45b0b93..9b5cdb91f5de 100644 --- a/versioned_docs/version-2.5/reference-guides/rancher-project-tools.md +++ b/versioned_docs/version-2.5/reference-guides/rancher-project-tools.md @@ -2,6 +2,10 @@ title: Tools for Logging, Monitoring, and Visibility --- + + + + Rancher contains a variety of tools that aren't included in Kubernetes to assist in your DevOps operations. Rancher can integrate with external services to help your clusters run more efficiently.
## Notifiers and Alerts diff --git a/versioned_docs/version-2.5/reference-guides/rke1-template-example-yaml.md b/versioned_docs/version-2.5/reference-guides/rke1-template-example-yaml.md index d14e3863ff97..5827dc55fccf 100644 --- a/versioned_docs/version-2.5/reference-guides/rke1-template-example-yaml.md +++ b/versioned_docs/version-2.5/reference-guides/rke1-template-example-yaml.md @@ -2,6 +2,10 @@ title: RKE1 Example YAML --- + + + + Below is an example RKE template configuration file for reference. The YAML in the RKE template uses the same customization that is used when you create an RKE cluster. However, since the YAML is within the context of a Rancher provisioned RKE cluster, the customization from the RKE docs needs to be nested under the `rancher_kubernetes_engine` directive. diff --git a/versioned_docs/version-2.5/reference-guides/system-tools.md b/versioned_docs/version-2.5/reference-guides/system-tools.md index eae035fb6016..73d75818ab3d 100644 --- a/versioned_docs/version-2.5/reference-guides/system-tools.md +++ b/versioned_docs/version-2.5/reference-guides/system-tools.md @@ -2,6 +2,10 @@ title: System Tools --- + + + + System Tools is a tool to perform operational tasks on [Rancher Launched Kubernetes](../pages-for-subheaders/launch-kubernetes-with-rancher.md) clusters or [installations of Rancher on an RKE cluster.](../pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster.md) The tasks include: * Collect logging and system metrics from nodes. diff --git a/versioned_docs/version-2.6/getting-started/installation-and-upgrade/installation-requirements/dockershim.md b/versioned_docs/version-2.6/getting-started/installation-and-upgrade/installation-requirements/dockershim.md index e215e0cfc2e6..211141cb7044 100644 --- a/versioned_docs/version-2.6/getting-started/installation-and-upgrade/installation-requirements/dockershim.md +++ b/versioned_docs/version-2.6/getting-started/installation-and-upgrade/installation-requirements/dockershim.md @@ -2,6 +2,10 @@ title: Dockershim --- + + + + The Dockershim is the CRI compliant layer between the Kubelet and the Docker daemon. As part of the Kubernetes 1.20 release, the [deprecation of the in-tree Dockershim was announced](https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/). For more information on the deprecation and its timelines, see the [Kubernetes Dockershim Deprecation FAQ](https://kubernetes.io/blog/2020/12/02/dockershim-faq/#when-will-dockershim-be-removed). RKE clusters now support the external Dockershim to continue leveraging Docker as the CRI runtime. We now implement the upstream open source community external Dockershim announced by [Mirantis and Docker](https://www.mirantis.com/blog/mirantis-to-take-over-support-of-kubernetes-dockershim-2/) to ensure RKE clusters can continue to leverage Docker. 
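A minimal sketch of opting an RKE cluster into the external Dockershim, using RKE's `enable_cri_dockerd` flag; the surrounding `cluster.yml` contents are assumed:

```bash
# Sketch: enable the external Dockershim for an RKE cluster, then reconcile.
cat >> cluster.yml <<'EOF'
enable_cri_dockerd: true
EOF

rke up --config cluster.yml
```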
diff --git a/versioned_docs/version-2.6/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry.md b/versioned_docs/version-2.6/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry.md index c11353542c64..a04034398815 100644 --- a/versioned_docs/version-2.6/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry.md +++ b/versioned_docs/version-2.6/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry.md @@ -2,6 +2,10 @@ title: '1. Set up Infrastructure and Private Registry' --- + + + + In this section, you will provision the underlying infrastructure for your Rancher management server in an air gapped environment. You will also set up the private Docker registry that must be available to your Rancher node(s). An air gapped environment is an environment where the Rancher server is installed offline or behind a firewall. diff --git a/versioned_docs/version-2.6/integrations-in-rancher/cis-scans/configuration-reference.md b/versioned_docs/version-2.6/integrations-in-rancher/cis-scans/configuration-reference.md index fa9012e6ed80..0403956be56b 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/cis-scans/configuration-reference.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/cis-scans/configuration-reference.md @@ -2,6 +2,10 @@ title: Configuration --- + + + + This configuration reference is intended to help you manage the custom resources created by the `rancher-cis-benchmark` application. These resources are used for performing CIS scans on a cluster, skipping tests, setting the test profile that will be used during a scan, and other customization. To configure the custom resources, go to the **Cluster Dashboard**. To configure the CIS scans, diff --git a/versioned_docs/version-2.6/integrations-in-rancher/cis-scans/custom-benchmark.md b/versioned_docs/version-2.6/integrations-in-rancher/cis-scans/custom-benchmark.md index 8ba63bbe8f7b..47853e45c147 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/cis-scans/custom-benchmark.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/cis-scans/custom-benchmark.md @@ -2,6 +2,10 @@ title: Creating a Custom Benchmark Version for Running a Cluster Scan --- + + + + Each Benchmark Version defines a set of test configuration files that define the CIS tests to be run by the kube-bench tool. The `rancher-cis-benchmark` application installs a few default Benchmark Versions which are listed under the CIS Benchmark application menu. diff --git a/versioned_docs/version-2.6/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md b/versioned_docs/version-2.6/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md index 8a88240d963f..795e64cef29b 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md @@ -2,6 +2,10 @@ title: Roles-based Access Control --- + + + + This section describes the permissions required to use the rancher-cis-benchmark App. The rancher-cis-benchmark is a cluster-admin only feature by default.
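Since the feature is cluster-admin only by default, a hedged example of granting that role to a user who needs to run scans (`jdoe` is a placeholder user name):

```bash
# Hypothetical example: bind a user to cluster-admin so they can use
# the rancher-cis-benchmark application.
kubectl create clusterrolebinding cis-scan-admin \
  --clusterrole=cluster-admin --user=jdoe
```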
diff --git a/versioned_docs/version-2.6/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md b/versioned_docs/version-2.6/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md index b11887c76c1c..3920a1588c51 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md @@ -2,6 +2,10 @@ title: Skipped and Not Applicable Tests --- + + + + This section lists the tests that are skipped in the permissive test profile for RKE. > All the tests that are skipped and not applicable on this page will be counted as Not Applicable in the v2.5 generated report. The skipped test count will only mention the user-defined skipped tests. This allows user-skipped tests to be distinguished from the tests that are skipped by default in the RKE permissive test profile. diff --git a/versioned_docs/version-2.6/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/adapter-requirements.md b/versioned_docs/version-2.6/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/adapter-requirements.md index 42937ea22622..0116f4d9f630 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/adapter-requirements.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/adapter-requirements.md @@ -2,6 +2,10 @@ title: Prerequisites --- + + + + ### 1. Setting Up License Manager and Purchasing Support First, complete the [first step](https://docs.aws.amazon.com/license-manager/latest/userguide/getting-started.html) of the license manager one-time setup. diff --git a/versioned_docs/version-2.6/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/common-issues.md b/versioned_docs/version-2.6/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/common-issues.md index 02e6bb86340a..c8f8e51cce76 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/common-issues.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/common-issues.md @@ -2,6 +2,10 @@ title: Common Issues --- + + + + **After installing the adapter, a banner message appears in Rancher that says "AWS Marketplace Adapter: Unable to run the adapter, please check the adapter logs"** This error indicates that while the adapter was installed into the cluster, an error has occurred which prevents it from properly checking-in/checking-out licenses. diff --git a/versioned_docs/version-2.6/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/uninstall-adapter.md b/versioned_docs/version-2.6/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/uninstall-adapter.md index 16e0ac3443ef..51e11fe03606 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/uninstall-adapter.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/uninstall-adapter.md @@ -2,6 +2,10 @@ title: Uninstalling The Adapter --- + + + + ### 1. Uninstall the adapter chart using helm. 
```bash diff --git a/versioned_docs/version-2.6/integrations-in-rancher/cloud-marketplace/supportconfig.md b/versioned_docs/version-2.6/integrations-in-rancher/cloud-marketplace/supportconfig.md index 9d87830a82eb..6eecac1132a7 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/cloud-marketplace/supportconfig.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/cloud-marketplace/supportconfig.md @@ -2,6 +2,10 @@ title: Supportconfig bundle --- + + + + After installing the CSP adapter, you will have the ability to generate a supportconfig bundle. This bundle is a tar file which can be used to quickly provide information to support. These bundles can be created through Rancher or through direct access to the cluster that Rancher is installed on. Note that accessing through Rancher is preferred. diff --git a/versioned_docs/version-2.6/integrations-in-rancher/fleet-gitops-at-scale/architecture.md b/versioned_docs/version-2.6/integrations-in-rancher/fleet-gitops-at-scale/architecture.md index f012a3a9921c..9d64e38de41b 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/fleet-gitops-at-scale/architecture.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/fleet-gitops-at-scale/architecture.md @@ -2,6 +2,10 @@ title: Architecture --- + + + + Fleet can manage deployments from git of raw Kubernetes YAML, Helm charts, Kustomize, or any combination of the three. Regardless of the source, all resources are dynamically turned into Helm charts, and Helm is used as the engine to deploy everything in the cluster. This gives you a high degree of control, consistency, and auditability. Fleet focuses not only on the ability to scale, but also on giving you a high degree of control and visibility into exactly what is installed on the cluster. ![Architecture](/img/fleet-architecture.svg) diff --git a/versioned_docs/version-2.6/integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy.md b/versioned_docs/version-2.6/integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy.md index e6a3f8cf9618..6160a19672a3 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy.md @@ -2,6 +2,10 @@ title: Using Fleet Behind a Proxy --- + + + + In this section, you'll learn how to enable Fleet in a setup that has a Rancher server with a public IP and a Kubernetes cluster that has no public IP, but is configured to use a proxy. Rancher does not establish connections with registered downstream clusters. The Rancher agent deployed on the downstream cluster must be able to establish the connection with Rancher. diff --git a/versioned_docs/version-2.6/integrations-in-rancher/fleet-gitops-at-scale/windows-support.md b/versioned_docs/version-2.6/integrations-in-rancher/fleet-gitops-at-scale/windows-support.md index aea98b74dbc0..f7bf04055f98 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/fleet-gitops-at-scale/windows-support.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/fleet-gitops-at-scale/windows-support.md @@ -2,6 +2,10 @@ title: Windows Support --- + + + + Prior to Rancher v2.5.6, the `agent` did not have native Windows manifests on downstream clusters with Windows nodes. This would result in a failing `agent` pod for the cluster.
If you are upgrading from an older version of Rancher to v2.5.6+, you can deploy a working `agent` with the following workflow *in the downstream cluster*: diff --git a/versioned_docs/version-2.6/integrations-in-rancher/harvester.md b/versioned_docs/version-2.6/integrations-in-rancher/harvester.md index 9cad5c128b71..c5ec813db874 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/harvester.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/harvester.md @@ -2,6 +2,10 @@ title: Harvester Integration --- + + + + Introduced in Rancher v2.6.1, [Harvester](https://docs.harvesterhci.io/) is an open-source hyper-converged infrastructure (HCI) software built on Kubernetes. Harvester installs on bare metal servers and provides integrated virtualization and distributed storage capabilities. Although Harvester operates using Kubernetes, it does not require users to know Kubernetes concepts, making it a more user-friendly application. --- diff --git a/versioned_docs/version-2.6/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md b/versioned_docs/version-2.6/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md index cbc8d15d1b8e..f9df9d472310 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md @@ -2,6 +2,10 @@ title: Additional Steps for Installing Istio on an RKE2 Cluster --- + + + + When installing or upgrading the Istio Helm chart through **Apps & Marketplace** (Rancher before v2.6.5) or **Apps** (Rancher v2.6.5+), 1. If you are installing the chart, click **Customize Helm options before install** and click **Next**. diff --git a/versioned_docs/version-2.6/integrations-in-rancher/istio/configuration-options/pod-security-policies.md b/versioned_docs/version-2.6/integrations-in-rancher/istio/configuration-options/pod-security-policies.md index 7580d11dec0f..902b200433c3 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/istio/configuration-options/pod-security-policies.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/istio/configuration-options/pod-security-policies.md @@ -2,6 +2,10 @@ title: Enable Istio with Pod Security Policies --- + + + + If you have restrictive Pod Security Policies enabled, then Istio may not be able to function correctly, because it needs certain permissions in order to install itself and manage pod infrastructure. In this section, we will configure a cluster with PSPs enabled for an Istio install, and also set up the Istio CNI plugin. The Istio CNI plugin removes the need for each application pod to have a privileged `NET_ADMIN` container. For further information, see the [Istio CNI Plugin docs](https://istio.io/docs/setup/additional-setup/cni). Please note that the [Istio CNI Plugin is in alpha](https://istio.io/about/feature-stages/). 
diff --git a/versioned_docs/version-2.6/integrations-in-rancher/istio/configuration-options/project-network-isolation.md b/versioned_docs/version-2.6/integrations-in-rancher/istio/configuration-options/project-network-isolation.md index 16fde314183b..f51a033ce3e3 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/istio/configuration-options/project-network-isolation.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/istio/configuration-options/project-network-isolation.md @@ -2,6 +2,10 @@ title: Additional Steps for Project Network Isolation --- + + + + In clusters where: - You are using the Canal network plugin with Rancher before v2.5.8, or you are using Rancher v2.5.8+ with any RKE network plug-in that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin diff --git a/versioned_docs/version-2.6/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md b/versioned_docs/version-2.6/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md index 10907f4718cf..29b51149c972 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md @@ -2,6 +2,10 @@ title: Selectors and Scrape Configs --- + + + + The Monitoring app sets `prometheus.prometheusSpec.ignoreNamespaceSelectors=false`, which enables monitoring across all namespaces by default. This ensures you can view traffic, metrics and graphs for resources deployed in a namespace with the `istio-injection=enabled` label. diff --git a/versioned_docs/version-2.6/integrations-in-rancher/istio/cpu-and-memory-allocations.md b/versioned_docs/version-2.6/integrations-in-rancher/istio/cpu-and-memory-allocations.md index 9ccf8b6c701b..fd6baf3a4a35 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/istio/cpu-and-memory-allocations.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/istio/cpu-and-memory-allocations.md @@ -2,6 +2,10 @@ title: CPU and Memory Allocations --- + + + + This section describes the minimum recommended computing resources for the Istio components in a cluster. The CPU and memory allocations for each component are [configurable.](#configuring-resource-allocations) diff --git a/versioned_docs/version-2.6/integrations-in-rancher/istio/disable-istio.md b/versioned_docs/version-2.6/integrations-in-rancher/istio/disable-istio.md index 387bbcbe4df3..91bffb878c52 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/istio/disable-istio.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/istio/disable-istio.md @@ -2,6 +2,10 @@ title: Disabling Istio --- + + + + This section describes how to uninstall Istio in a cluster, or disable it for a namespace or workload. ## Uninstall Istio in a Cluster diff --git a/versioned_docs/version-2.6/integrations-in-rancher/istio/rbac-for-istio.md b/versioned_docs/version-2.6/integrations-in-rancher/istio/rbac-for-istio.md index e33bdb725403..b92096b6c9b5 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/istio/rbac-for-istio.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/istio/rbac-for-istio.md @@ -2,6 +2,10 @@ title: Role-based Access Control --- + + + + This section describes the permissions required to access Istio features.
The rancher istio chart installs three `ClusterRoles` diff --git a/versioned_docs/version-2.6/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md b/versioned_docs/version-2.6/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md index e4020351ab15..d6d2ccd67e27 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md @@ -2,6 +2,10 @@ title: Flows and ClusterFlows --- + + + + See the [Logging operator documentation](https://kube-logging.github.io/docs/configuration/flow/) for the full details on how to configure `Flows` and `ClusterFlows`. See [Rancher Integration with Logging Services: Troubleshooting](../../../pages-for-subheaders/logging.md#The-Logging-Buffer-Overloads-Pods) for how to resolve memory problems with the logging buffer. diff --git a/versioned_docs/version-2.6/integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs.md b/versioned_docs/version-2.6/integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs.md index da6ed1cef3da..095539712508 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs.md @@ -2,6 +2,10 @@ title: Outputs and ClusterOutputs --- + + + + See the [Logging operator documentation](https://kube-logging.github.io/docs/configuration/output/) for the full details on how to configure `Outputs` and `ClusterOutputs`. See [Rancher Integration with Logging Services: Troubleshooting](../../../pages-for-subheaders/logging.md#The-Logging-Buffer-Overloads-Pods) for how to resolve memory problems with the logging buffer. diff --git a/versioned_docs/version-2.6/integrations-in-rancher/logging/logging-architecture.md b/versioned_docs/version-2.6/integrations-in-rancher/logging/logging-architecture.md index 9a5d52eb0f8b..f6c71cdc27a4 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/logging/logging-architecture.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/logging/logging-architecture.md @@ -2,6 +2,10 @@ title: Architecture --- + + + + This section summarizes the architecture of the Rancher logging application. For more details about how the Logging operator works, see the [official documentation.](https://kube-logging.github.io/docs/#architecture) diff --git a/versioned_docs/version-2.6/integrations-in-rancher/logging/logging-helm-chart-options.md b/versioned_docs/version-2.6/integrations-in-rancher/logging/logging-helm-chart-options.md index 8bc12fcbd068..02adb890d741 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/logging/logging-helm-chart-options.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/logging/logging-helm-chart-options.md @@ -2,6 +2,10 @@ title: rancher-logging Helm Chart Options --- + + + + ### Enable/Disable Windows Node Logging You can enable or disable Windows node logging by setting `global.cattle.windows.enabled` to either `true` or `false` in the `values.yaml`.
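A minimal sketch of setting that value and rolling it out, assuming the `rancher-logging` release name and `cattle-logging-system` namespace used by default installs:

```bash
# Sketch: enable Windows node logging through the documented chart value,
# then upgrade the release (release/namespace names assumed).
cat > logging-values.yaml <<'EOF'
global:
  cattle:
    windows:
      enabled: true
EOF

helm upgrade rancher-logging rancher-charts/rancher-logging \
  --namespace cattle-logging-system -f logging-values.yaml
```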
diff --git a/versioned_docs/version-2.6/integrations-in-rancher/logging/rbac-for-logging.md b/versioned_docs/version-2.6/integrations-in-rancher/logging/rbac-for-logging.md index 627e3533c2cc..e718dce1887d 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/logging/rbac-for-logging.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/logging/rbac-for-logging.md @@ -2,6 +2,10 @@ title: Role-based Access Control for Logging --- + + + + Rancher logging has two roles, `logging-admin` and `logging-view`. - `logging-admin` gives users full access to namespaced `Flows` and `Outputs` diff --git a/versioned_docs/version-2.6/integrations-in-rancher/logging/taints-and-tolerations.md b/versioned_docs/version-2.6/integrations-in-rancher/logging/taints-and-tolerations.md index c5cf0e355783..327cf554fdaa 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/logging/taints-and-tolerations.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/logging/taints-and-tolerations.md @@ -2,6 +2,10 @@ title: Working with Taints and Tolerations --- + + + + "Tainting" a Kubernetes node repels pods from running on that node. Unless the pods have a `toleration` for that node's taint, they will run on other nodes in the cluster. diff --git a/versioned_docs/version-2.6/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md b/versioned_docs/version-2.6/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md index 5acbd726024c..d6e7df25263a 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md @@ -2,6 +2,10 @@ title: Built-in Dashboards --- + + + + ## Grafana UI [Grafana](https://grafana.com/grafana/) allows you to query, visualize, alert on and understand your metrics no matter where they are stored. Create, explore, and share dashboards with your team and foster a data driven culture. diff --git a/versioned_docs/version-2.6/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md b/versioned_docs/version-2.6/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md index de110cd3d805..bea67b1dc8f3 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md @@ -2,6 +2,10 @@ title: How Monitoring Works --- + + + + ## 1. Architecture Overview _**The following sections describe how data flows through the Monitoring V2 application:**_ diff --git a/versioned_docs/version-2.6/integrations-in-rancher/monitoring-and-alerting/promql-expressions.md b/versioned_docs/version-2.6/integrations-in-rancher/monitoring-and-alerting/promql-expressions.md index e301620d92ac..0ea6b134300e 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/monitoring-and-alerting/promql-expressions.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/monitoring-and-alerting/promql-expressions.md @@ -2,6 +2,10 @@ title: PromQL Expression Reference --- + + + + The PromQL expressions in this doc can be used to configure alerts.
For more information about querying the Prometheus time series database, refer to the official [Prometheus documentation.](https://prometheus.io/docs/prometheus/latest/querying/basics/) diff --git a/versioned_docs/version-2.6/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md b/versioned_docs/version-2.6/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md index 9ef6362e49b6..0bb382fa95ea 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md @@ -1,6 +1,11 @@ --- title: Role-based Access Control --- + + + + + This section describes the expectations for RBAC for Rancher Monitoring. diff --git a/versioned_docs/version-2.6/integrations-in-rancher/monitoring-and-alerting/windows-support.md b/versioned_docs/version-2.6/integrations-in-rancher/monitoring-and-alerting/windows-support.md index d36e2e667683..9ed2d2b01241 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/monitoring-and-alerting/windows-support.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/monitoring-and-alerting/windows-support.md @@ -2,6 +2,10 @@ title: Windows Cluster Support for Monitoring V2 --- + + + + _Available as of v2.5.8_ Starting at Monitoring V2 14.5.100 (used by default in Rancher 2.5.8), Monitoring V2 can now be deployed on a Windows cluster and will scrape metrics from Windows nodes using [prometheus-community/windows_exporter](https://github.com/prometheus-community/windows_exporter) (previously named `wmi_exporter`). diff --git a/versioned_docs/version-2.6/integrations-in-rancher/neuvector.md b/versioned_docs/version-2.6/integrations-in-rancher/neuvector.md index c169b5c9bffe..6afe9c9a270c 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/neuvector.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/neuvector.md @@ -2,6 +2,10 @@ title: NeuVector Integration --- + + + + ### NeuVector Integration in Rancher New in Rancher v2.6.5, [NeuVector 5.x](https://open-docs.neuvector.com/) is an open-source container-centric security platform that is now integrated into Rancher. NeuVector offers real-time compliance, visibility, and protection for critical applications and data during runtime. NeuVector provides a firewall, container process/file system monitoring, security auditing with CIS benchmarks, and vulnerability scanning. For more information on Rancher security, please see the [security documentation](../pages-for-subheaders/rancher-security.md). diff --git a/versioned_docs/version-2.6/integrations-in-rancher/opa-gatekeeper.md b/versioned_docs/version-2.6/integrations-in-rancher/opa-gatekeeper.md index ee0a382a44a4..a75539895356 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/opa-gatekeeper.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/opa-gatekeeper.md @@ -2,6 +2,10 @@ title: OPA Gatekeeper --- + + + + To ensure consistency and compliance, every organization needs the ability to define and enforce policies in its environment in an automated way. [OPA (Open Policy Agent)](https://www.openpolicyagent.org/) is a policy engine that facilitates policy-based control for cloud native environments. Rancher provides the ability to enable OPA Gatekeeper in Kubernetes clusters, and also installs a couple of built-in policy definitions, which are also called constraint templates. 
OPA provides a high-level declarative language that lets you specify policy as code and simple APIs to offload policy decision-making. diff --git a/versioned_docs/version-2.6/reference-guides/kubernetes-concepts.md b/versioned_docs/version-2.6/reference-guides/kubernetes-concepts.md index 631b7cfd118e..707fb8e1c514 100644 --- a/versioned_docs/version-2.6/reference-guides/kubernetes-concepts.md +++ b/versioned_docs/version-2.6/reference-guides/kubernetes-concepts.md @@ -2,6 +2,10 @@ title: Kubernetes Concepts --- + + + + This page explains concepts related to Kubernetes that are important for understanding how Rancher works. The descriptions below provide a simplified overview of Kubernetes components. For more details, refer to the [official documentation on Kubernetes components.](https://kubernetes.io/docs/concepts/overview/components/) ## About Docker diff --git a/versioned_docs/version-2.6/reference-guides/rancher-cluster-tools.md b/versioned_docs/version-2.6/reference-guides/rancher-cluster-tools.md index 41305db554b6..63d8490bc412 100644 --- a/versioned_docs/version-2.6/reference-guides/rancher-cluster-tools.md +++ b/versioned_docs/version-2.6/reference-guides/rancher-cluster-tools.md @@ -2,6 +2,10 @@ title: Cluster Tools for Logging, Monitoring, and Visibility --- + + + + Rancher contains a variety of tools that aren't included in Kubernetes to assist in your DevOps operations. Rancher can integrate with external services to help your clusters run more efficiently. Tools are divided into the following categories: diff --git a/versioned_docs/version-2.6/reference-guides/rancher-project-tools.md b/versioned_docs/version-2.6/reference-guides/rancher-project-tools.md index 567cc5cf408c..f199d246d2c3 100644 --- a/versioned_docs/version-2.6/reference-guides/rancher-project-tools.md +++ b/versioned_docs/version-2.6/reference-guides/rancher-project-tools.md @@ -2,6 +2,10 @@ title: Project Tools for Logging, Monitoring, and Visibility --- + + + + Rancher contains a variety of tools that aren't included in Kubernetes to assist in your DevOps operations. Rancher can integrate with external services to help your clusters run more efficiently. diff --git a/versioned_docs/version-2.6/reference-guides/rke1-template-example-yaml.md b/versioned_docs/version-2.6/reference-guides/rke1-template-example-yaml.md index d14e3863ff97..5827dc55fccf 100644 --- a/versioned_docs/version-2.6/reference-guides/rke1-template-example-yaml.md +++ b/versioned_docs/version-2.6/reference-guides/rke1-template-example-yaml.md @@ -2,6 +2,10 @@ title: RKE1 Example YAML --- + + + + Below is an example RKE template configuration file for reference. The YAML in the RKE template uses the same customization that is used when you create an RKE cluster. However, since the YAML is within the context of a Rancher provisioned RKE cluster, the customization from the RKE docs needs to be nested under the `rancher_kubernetes_engine` directive. diff --git a/versioned_docs/version-2.6/reference-guides/system-tools.md b/versioned_docs/version-2.6/reference-guides/system-tools.md index d9480976757f..8c5aa2e441cd 100644 --- a/versioned_docs/version-2.6/reference-guides/system-tools.md +++ b/versioned_docs/version-2.6/reference-guides/system-tools.md @@ -2,6 +2,10 @@ title: System Tools --- + + + + :::note System Tools has been deprecated since June 2022.
diff --git a/versioned_docs/version-2.7/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrade-a-hardened-cluster-to-k8s-v1-25.md b/versioned_docs/version-2.7/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrade-a-hardened-cluster-to-k8s-v1-25.md index 4d276be3e238..4fead7e7330d 100644 --- a/versioned_docs/version-2.7/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrade-a-hardened-cluster-to-k8s-v1-25.md +++ b/versioned_docs/version-2.7/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrade-a-hardened-cluster-to-k8s-v1-25.md @@ -2,6 +2,10 @@ title: Upgrade a Hardened Custom/Imported Cluster to Kubernetes v1.25 --- + + + + Kubernetes v1.25 changes how clusters describe and implement security policies. From this version forward, [Pod Security Policies (PSPs)](https://kubernetes.io/docs/concepts/security/pod-security-policy/) are no longer available. Kubernetes v1.25 replaces them with new security objects: [Pod Security Standards (PSS)](https://kubernetes.io/docs/concepts/security/pod-security-standards/), and [Pod Security Admissions (PSAs)](https://kubernetes.io/docs/concepts/security/pod-security-admission/). If you have custom or imported hardened clusters, you must take special preparations to ensure that the upgrade from an earlier version of Kubernetes to v1.25 or later goes smoothly. diff --git a/versioned_docs/version-2.7/getting-started/installation-and-upgrade/installation-requirements/dockershim.md b/versioned_docs/version-2.7/getting-started/installation-and-upgrade/installation-requirements/dockershim.md index e215e0cfc2e6..211141cb7044 100644 --- a/versioned_docs/version-2.7/getting-started/installation-and-upgrade/installation-requirements/dockershim.md +++ b/versioned_docs/version-2.7/getting-started/installation-and-upgrade/installation-requirements/dockershim.md @@ -2,6 +2,10 @@ title: Dockershim --- + + + + The Dockershim is the CRI compliant layer between the Kubelet and the Docker daemon. As part of the Kubernetes 1.20 release, the [deprecation of the in-tree Dockershim was announced](https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/). For more information on the deprecation and its timelines, see the [Kubernetes Dockershim Deprecation FAQ](https://kubernetes.io/blog/2020/12/02/dockershim-faq/#when-will-dockershim-be-removed). RKE clusters now support the external Dockershim to continue leveraging Docker as the CRI runtime. We now implement the upstream open source community external Dockershim announced by [Mirantis and Docker](https://www.mirantis.com/blog/mirantis-to-take-over-support-of-kubernetes-dockershim-2/) to ensure RKE clusters can continue to leverage Docker. 
diff --git a/versioned_docs/version-2.7/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/docker-install-commands.md b/versioned_docs/version-2.7/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/docker-install-commands.md index 3c7c49004432..53bbdc4e9cc7 100644 --- a/versioned_docs/version-2.7/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/docker-install-commands.md +++ b/versioned_docs/version-2.7/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/docker-install-commands.md @@ -2,6 +2,10 @@ title: Docker Install Commands --- + + + + The Docker installation is for Rancher users who want to test out Rancher. Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server. diff --git a/versioned_docs/version-2.7/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry.md b/versioned_docs/version-2.7/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry.md index f90665820049..981223575fec 100644 --- a/versioned_docs/version-2.7/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry.md +++ b/versioned_docs/version-2.7/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry.md @@ -2,6 +2,10 @@ title: '1. Set up Infrastructure and Private Registry' --- + + + + In this section, you will provision the underlying infrastructure for your Rancher management server in an air gapped environment. You will also set up the private container image registry that must be available to your Rancher node(s). An air gapped environment is an environment where the Rancher server is installed offline or behind a firewall. diff --git a/versioned_docs/version-2.7/getting-started/quick-start-guides/deploy-rancher-manager/prime.md b/versioned_docs/version-2.7/getting-started/quick-start-guides/deploy-rancher-manager/prime.md index 6177d9e5fbd5..26700be8cf43 100644 --- a/versioned_docs/version-2.7/getting-started/quick-start-guides/deploy-rancher-manager/prime.md +++ b/versioned_docs/version-2.7/getting-started/quick-start-guides/deploy-rancher-manager/prime.md @@ -2,6 +2,10 @@ title: Rancher Prime --- + + + + Rancher v2.7 introduces Rancher Prime, an evolution of the Rancher enterprise offering. Rancher Prime is a new edition of the commercial, enterprise offering built on the same source code. Rancher’s product will therefore continue to be 100% open source with additional value coming in from security assurances, extended lifecycles, access to focused architectures and Kubernetes advisories. Rancher Prime will also offer options to get production support for innovative Rancher projects. With Rancher Prime, installation assets are hosted on a trusted registry owned and managed by Rancher. To get started with Rancher Prime, [go to this page](https://www.rancher.com/quick-start) and fill out the form.
diff --git a/versioned_docs/version-2.7/integrations-in-rancher/cis-scans/configuration-reference.md b/versioned_docs/version-2.7/integrations-in-rancher/cis-scans/configuration-reference.md index fa9012e6ed80..0403956be56b 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/cis-scans/configuration-reference.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/cis-scans/configuration-reference.md @@ -2,6 +2,10 @@ title: Configuration --- + + + + This configuration reference is intended to help you manage the custom resources created by the `rancher-cis-benchmark` application. These resources are used for performing CIS scans on a cluster, skipping tests, setting the test profile that will be used during a scan, and other customization. To configure the custom resources, go to the **Cluster Dashboard** To configure the CIS scans, diff --git a/versioned_docs/version-2.7/integrations-in-rancher/cis-scans/custom-benchmark.md b/versioned_docs/version-2.7/integrations-in-rancher/cis-scans/custom-benchmark.md index 8ba63bbe8f7b..47853e45c147 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/cis-scans/custom-benchmark.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/cis-scans/custom-benchmark.md @@ -2,6 +2,10 @@ title: Creating a Custom Benchmark Version for Running a Cluster Scan --- + + + + Each Benchmark Version defines a set of test configuration files that define the CIS tests to be run by the kube-bench tool. The `rancher-cis-benchmark` application installs a few default Benchmark Versions which are listed under CIS Benchmark application menu. diff --git a/versioned_docs/version-2.7/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md b/versioned_docs/version-2.7/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md index 8a88240d963f..795e64cef29b 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md @@ -2,6 +2,10 @@ title: Roles-based Access Control --- + + + + This section describes the permissions required to use the rancher-cis-benchmark App. The rancher-cis-benchmark is a cluster-admin only feature by default. diff --git a/versioned_docs/version-2.7/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md b/versioned_docs/version-2.7/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md index b11887c76c1c..3920a1588c51 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md @@ -2,6 +2,10 @@ title: Skipped and Not Applicable Tests --- + + + + This section lists the tests that are skipped in the permissive test profile for RKE. > All the tests that are skipped and not applicable on this page will be counted as Not Applicable in the v2.5 generated report. The skipped test count will only mention the user-defined skipped tests. This allows user-skipped tests to be distinguished from the tests that are skipped by default in the RKE permissive test profile. 
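Editor's note: the CIS pages above describe the custom resources that drive scans without showing one. A hedged example of starting a scan, assuming the `ClusterScan` kind and `scanProfileName` field from the `rancher-cis-benchmark` CRDs:

```yaml
apiVersion: cis.cattle.io/v1
kind: ClusterScan
metadata:
  name: example-scan
spec:
  # Profile name is an assumption; use one listed under the CIS Benchmark app menu
  scanProfileName: rke-profile-permissive
```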
diff --git a/versioned_docs/version-2.7/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/adapter-requirements.md b/versioned_docs/version-2.7/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/adapter-requirements.md index 42937ea22622..0116f4d9f630 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/adapter-requirements.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/adapter-requirements.md @@ -2,6 +2,10 @@ title: Prerequisites --- + + + + ### 1. Setting Up License Manager and Purchasing Support First, complete the [first step](https://docs.aws.amazon.com/license-manager/latest/userguide/getting-started.html) of the license manager one-time setup. diff --git a/versioned_docs/version-2.7/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/common-issues.md b/versioned_docs/version-2.7/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/common-issues.md index 02e6bb86340a..c8f8e51cce76 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/common-issues.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/common-issues.md @@ -2,6 +2,10 @@ title: Common Issues --- + + + + **After installing the adapter, a banner message appears in Rancher that says "AWS Marketplace Adapter: Unable to run the adapter, please check the adapter logs"** This error indicates that while the adapter was installed into the cluster, an error has occurred which prevents it from properly checking-in/checking-out licenses. diff --git a/versioned_docs/version-2.7/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/uninstall-adapter.md b/versioned_docs/version-2.7/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/uninstall-adapter.md index 16e0ac3443ef..51e11fe03606 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/uninstall-adapter.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/uninstall-adapter.md @@ -2,6 +2,10 @@ title: Uninstalling The Adapter --- + + + + ### 1. Uninstall the adapter chart using helm. ```bash diff --git a/versioned_docs/version-2.7/integrations-in-rancher/cloud-marketplace/supportconfig.md b/versioned_docs/version-2.7/integrations-in-rancher/cloud-marketplace/supportconfig.md index 9d87830a82eb..6eecac1132a7 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/cloud-marketplace/supportconfig.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/cloud-marketplace/supportconfig.md @@ -2,6 +2,10 @@ title: Supportconfig bundle --- + + + + After installing the CSP adapter, you will have the ability to generate a supportconfig bundle. This bundle is a tar file which can be used to quickly provide information to support. These bundles can be created through Rancher or through direct access to the cluster that Rancher is installed on. Note that accessing through Rancher is preferred. 
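Editor's note: the common-issues page above says to "check the adapter logs" when the banner appears, but does not show the command. The deployment name and namespace below are taken from the `kubectl rollout restart` step that appears later in this series, so they should match the chart defaults:

```bash
# Inspect the CSP adapter logs for license check-in/check-out errors
kubectl logs -n cattle-csp-adapter-system deploy/rancher-csp-adapter --tail=100
```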
diff --git a/versioned_docs/version-2.7/integrations-in-rancher/fleet-gitops-at-scale/architecture.md b/versioned_docs/version-2.7/integrations-in-rancher/fleet-gitops-at-scale/architecture.md index f012a3a9921c..9d64e38de41b 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/fleet-gitops-at-scale/architecture.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/fleet-gitops-at-scale/architecture.md @@ -2,6 +2,10 @@ title: Architecture --- + + + + Fleet can manage deployments from git of raw Kubernetes YAML, Helm charts, or Kustomize or any combination of the three. Regardless of the source, all resources are dynamically turned into Helm charts, and Helm is used as the engine to deploy everything in the cluster. This gives you a high degree of control, consistency, and auditability. Fleet focuses not only on the ability to scale, but to give one a high degree of control and visibility to exactly what is installed on the cluster. ![Architecture](/img/fleet-architecture.svg) diff --git a/versioned_docs/version-2.7/integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy.md b/versioned_docs/version-2.7/integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy.md index e6a3f8cf9618..6160a19672a3 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy.md @@ -2,6 +2,10 @@ title: Using Fleet Behind a Proxy --- + + + + In this section, you'll learn how to enable Fleet in a setup that has a Rancher server with a public IP a Kubernetes cluster that has no public IP, but is configured to use a proxy. Rancher does not establish connections with registered downstream clusters. The Rancher agent deployed on the downstream cluster must be able to establish the connection with Rancher. diff --git a/versioned_docs/version-2.7/integrations-in-rancher/fleet-gitops-at-scale/windows-support.md b/versioned_docs/version-2.7/integrations-in-rancher/fleet-gitops-at-scale/windows-support.md index aea98b74dbc0..f7bf04055f98 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/fleet-gitops-at-scale/windows-support.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/fleet-gitops-at-scale/windows-support.md @@ -2,6 +2,10 @@ title: Windows Support --- + + + + Prior to Rancher v2.5.6, the `agent` did not have native Windows manifests on downstream clusters with Windows nodes. This would result in a failing `agent` pod for the cluster. If you are upgrading from an older version of Rancher to v2.5.6+, you can deploy a working `agent` with the following workflow *in the downstream cluster*: diff --git a/versioned_docs/version-2.7/integrations-in-rancher/harvester.md b/versioned_docs/version-2.7/integrations-in-rancher/harvester.md index 66fc9631970c..300a5826e162 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/harvester.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/harvester.md @@ -2,6 +2,10 @@ title: Harvester Integration --- + + + + Introduced in Rancher v2.6.1, [Harvester](https://docs.harvesterhci.io/) is an open-source hyper-converged infrastructure (HCI) software built on Kubernetes. Harvester installs on bare metal servers and provides integrated virtualization and distributed storage capabilities. Although Harvester operates using Kubernetes, it does not require users to know Kubernetes concepts, making it a more user-friendly application. 
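Editor's note: stepping back to the Fleet architecture page earlier in this hunk, a small manifest helps anchor the "deployments from git" claim. This `GitRepo` sketch points at the public `rancher/fleet-examples` repository; the `fleet-default` namespace is the usual workspace for downstream clusters, but treat both as assumptions:

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: simple-example
  namespace: fleet-default
spec:
  repo: https://github.com/rancher/fleet-examples
  paths:
    - simple   # raw YAML under this path is turned into a Helm chart and deployed
```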
### Feature Flag diff --git a/versioned_docs/version-2.7/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md b/versioned_docs/version-2.7/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md index 0c229f288b4a..5bec3737edf1 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md @@ -2,6 +2,10 @@ title: Additional Steps for Installing Istio on RKE2 and K3s Clusters --- + + + + When installing or upgrading the Istio Helm chart through **Apps,** 1. If you are installing the chart, click **Customize Helm options before install** and click **Next**. diff --git a/versioned_docs/version-2.7/integrations-in-rancher/istio/configuration-options/pod-security-policies.md b/versioned_docs/version-2.7/integrations-in-rancher/istio/configuration-options/pod-security-policies.md index e774b97bf8cd..b157cb46fd43 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/istio/configuration-options/pod-security-policies.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/istio/configuration-options/pod-security-policies.md @@ -2,6 +2,10 @@ title: Enable Istio with Pod Security Policies --- + + + + If you have restrictive Pod Security Policies enabled, then Istio may not be able to function correctly, because it needs certain permissions in order to install itself and manage pod infrastructure. In this section, we will configure a cluster with PSPs enabled for an Istio install, and also set up the Istio CNI plugin. The Istio CNI plugin removes the need for each application pod to have a privileged `NET_ADMIN` container. For further information, see the [Istio CNI Plugin docs](https://istio.io/docs/setup/additional-setup/cni). Please note that the [Istio CNI Plugin is in alpha](https://istio.io/about/feature-stages/). 
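Editor's note: the RKE2/K3s Istio page above has you customize Helm options, and the PSP page just introduced the Istio CNI plugin. A hedged overlay sketch combining the two for a K3s cluster; the key names and the K3s CNI paths are assumptions, so verify them against your rancher-istio chart version:

```yaml
# values overlay sketch for the rancher-istio chart on K3s
cni:
  enabled: true                                         # deploy the Istio CNI plugin
  cniBinDir: /var/lib/rancher/k3s/data/current/bin      # assumed K3s CNI binary dir
  cniConfDir: /var/lib/rancher/k3s/agent/etc/cni/net.d  # assumed K3s CNI config dir
```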
diff --git a/versioned_docs/version-2.7/integrations-in-rancher/istio/configuration-options/project-network-isolation.md b/versioned_docs/version-2.7/integrations-in-rancher/istio/configuration-options/project-network-isolation.md index 16fde314183b..f51a033ce3e3 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/istio/configuration-options/project-network-isolation.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/istio/configuration-options/project-network-isolation.md @@ -2,6 +2,10 @@ title: Additional Steps for Project Network Isolation --- + + + + In clusters where: - You are using the Canal network plugin with Rancher before v2.5.8, or you are using Rancher v2.5.8+ with an any RKE network plug-in that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin diff --git a/versioned_docs/version-2.7/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md b/versioned_docs/version-2.7/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md index 10907f4718cf..29b51149c972 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md @@ -2,6 +2,10 @@ title: Selectors and Scrape Configs --- + + + + The Monitoring app sets `prometheus.prometheusSpec.ignoreNamespaceSelectors=false`, which enables monitoring across all namespaces by default. This ensures you can view traffic, metrics and graphs for resources deployed in a namespace with `istio-injection=enabled` label. diff --git a/versioned_docs/version-2.7/integrations-in-rancher/istio/cpu-and-memory-allocations.md b/versioned_docs/version-2.7/integrations-in-rancher/istio/cpu-and-memory-allocations.md index 37472e328c79..10fe77c9ec4b 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/istio/cpu-and-memory-allocations.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/istio/cpu-and-memory-allocations.md @@ -2,6 +2,10 @@ title: CPU and Memory Allocations --- + + + + This section describes the minimum recommended computing resources for the Istio components in a cluster. The CPU and memory allocations for each component are [configurable.](#configuring-resource-allocations) diff --git a/versioned_docs/version-2.7/integrations-in-rancher/istio/disable-istio.md b/versioned_docs/version-2.7/integrations-in-rancher/istio/disable-istio.md index 052122c4891f..c5f0ae6ce003 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/istio/disable-istio.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/istio/disable-istio.md @@ -2,6 +2,10 @@ title: Disabling Istio --- + + + + This section describes how to uninstall Istio in a cluster or disable a namespace, or workload. ## Uninstall Istio in a Cluster diff --git a/versioned_docs/version-2.7/integrations-in-rancher/istio/rbac-for-istio.md b/versioned_docs/version-2.7/integrations-in-rancher/istio/rbac-for-istio.md index e33bdb725403..b92096b6c9b5 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/istio/rbac-for-istio.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/istio/rbac-for-istio.md @@ -2,6 +2,10 @@ title: Role-based Access Control --- + + + + This section describes the permissions required to access Istio features. 
The rancher istio chart installs three `ClusterRoles` diff --git a/versioned_docs/version-2.7/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md b/versioned_docs/version-2.7/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md index e4020351ab15..d6d2ccd67e27 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md @@ -2,6 +2,10 @@ title: Flows and ClusterFlows --- + + + + See the [Logging operator documentation](https://kube-logging.github.io/docs/configuration/flow/) for the full details on how to configure `Flows` and `ClusterFlows`. See [Rancher Integration with Logging Services: Troubleshooting](../../../pages-for-subheaders/logging.md#The-Logging-Buffer-Overloads-Pods) for how to resolve memory problems with the logging buffer. diff --git a/versioned_docs/version-2.7/integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs.md b/versioned_docs/version-2.7/integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs.md index aa558385bf02..3ae66c9145a5 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs.md @@ -2,6 +2,10 @@ title: Outputs and ClusterOutputs --- + + + + See the [Logging operator documentation](https://kube-logging.github.io/docs/configuration/flow/) for the full details on how to configure `Flows` and `ClusterFlows`. See [Rancher Integration with Logging Services: Troubleshooting](../../../pages-for-subheaders/logging.md#The-Logging-Buffer-Overloads-Pods) for how to resolve memory problems with the logging buffer. diff --git a/versioned_docs/version-2.7/integrations-in-rancher/logging/logging-architecture.md b/versioned_docs/version-2.7/integrations-in-rancher/logging/logging-architecture.md index 560235f4f7c1..958bc5d30695 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/logging/logging-architecture.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/logging/logging-architecture.md @@ -2,6 +2,10 @@ title: Architecture --- + + + + This section summarizes the architecture of the Rancher logging application. For more details about how the Logging operator works, see the [official documentation.](https://kube-logging.github.io/docs/#architecture) diff --git a/versioned_docs/version-2.7/integrations-in-rancher/logging/logging-helm-chart-options.md b/versioned_docs/version-2.7/integrations-in-rancher/logging/logging-helm-chart-options.md index fd6299abc4d2..643114f6d7cf 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/logging/logging-helm-chart-options.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/logging/logging-helm-chart-options.md @@ -2,6 +2,10 @@ title: rancher-logging Helm Chart Options --- + + + + ### Enable/Disable Windows Node Logging You can enable or disable Windows node logging by setting `global.cattle.windows.enabled` to either `true` or `false` in the `values.yaml`. 
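Editor's note: since the paragraph above names the exact key, the corresponding `values.yaml` fragment is unambiguous:

```yaml
# rancher-logging values.yaml: turn Windows node logging on
global:
  cattle:
    windows:
      enabled: true
```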
diff --git a/versioned_docs/version-2.7/integrations-in-rancher/logging/rbac-for-logging.md b/versioned_docs/version-2.7/integrations-in-rancher/logging/rbac-for-logging.md index 627e3533c2cc..e718dce1887d 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/logging/rbac-for-logging.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/logging/rbac-for-logging.md @@ -2,6 +2,10 @@ title: Role-based Access Control for Logging --- + + + + Rancher logging has two roles, `logging-admin` and `logging-view`. - `logging-admin` gives users full access to namespaced `Flows` and `Outputs` diff --git a/versioned_docs/version-2.7/integrations-in-rancher/logging/taints-and-tolerations.md b/versioned_docs/version-2.7/integrations-in-rancher/logging/taints-and-tolerations.md index c5cf0e355783..327cf554fdaa 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/logging/taints-and-tolerations.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/logging/taints-and-tolerations.md @@ -2,6 +2,10 @@ title: Working with Taints and Tolerations --- + + + + "Tainting" a Kubernetes node causes pods to repel running on that node. Unless the pods have a `toleration` for that node's taint, they will run on other nodes in the cluster. diff --git a/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md b/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md index 6186565900b1..8c3af531f140 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md @@ -2,6 +2,10 @@ title: Built-in Dashboards --- + + + + ## Grafana UI [Grafana](https://grafana.com/grafana/) allows you to query, visualize, alert on and understand your metrics no matter where they are stored. Create, explore, and share dashboards with your team and foster a data driven culture. diff --git a/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md b/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md index de110cd3d805..bea67b1dc8f3 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md @@ -2,6 +2,10 @@ title: How Monitoring Works --- + + + + ## 1. Architecture Overview _**The following sections describe how data flows through the Monitoring V2 application:**_ diff --git a/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/promql-expressions.md b/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/promql-expressions.md index e301620d92ac..0ea6b134300e 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/promql-expressions.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/promql-expressions.md @@ -2,6 +2,10 @@ title: PromQL Expression Reference --- + + + + The PromQL expressions in this doc can be used to configure alerts. 
For more information about querying the Prometheus time series database, refer to the official [Prometheus documentation.](https://prometheus.io/docs/prometheus/latest/querying/basics/) diff --git a/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md b/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md index 61c9165a54dd..40014bcba442 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md @@ -1,6 +1,11 @@ --- title: Role-based Access Control --- + + + + + This section describes the expectations for RBAC for Rancher Monitoring. diff --git a/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/windows-support.md b/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/windows-support.md index 6fa7e1d84d01..8869e2cefe52 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/windows-support.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/windows-support.md @@ -2,6 +2,10 @@ title: Windows Cluster Support for Monitoring V2 --- + + + + _Available as of v2.5.8_ Starting at Monitoring V2 14.5.100 (used by default in Rancher 2.5.8), Monitoring V2 can now be deployed on a Windows cluster and will scrape metrics from Windows nodes using [prometheus-community/windows_exporter](https://github.com/prometheus-community/windows_exporter) (previously named `wmi_exporter`). diff --git a/versioned_docs/version-2.7/integrations-in-rancher/neuvector.md b/versioned_docs/version-2.7/integrations-in-rancher/neuvector.md index bac4c2c0849d..fbc5eccec1c5 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/neuvector.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/neuvector.md @@ -2,6 +2,10 @@ title: NeuVector Integration --- + + + + ### NeuVector Integration in Rancher [NeuVector 5.x](https://open-docs.neuvector.com/) is an open-source container-centric security platform that is integrated with Rancher. NeuVector offers real-time compliance, visibility, and protection for critical applications and data during runtime. NeuVector provides a firewall, container process/file system monitoring, security auditing with CIS benchmarks, and vulnerability scanning. For more information on Rancher security, please see the [security documentation](../pages-for-subheaders/rancher-security.md). diff --git a/versioned_docs/version-2.7/integrations-in-rancher/opa-gatekeeper.md b/versioned_docs/version-2.7/integrations-in-rancher/opa-gatekeeper.md index 26b49d6f8919..f2185dff6855 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/opa-gatekeeper.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/opa-gatekeeper.md @@ -2,6 +2,10 @@ title: OPA Gatekeeper --- + + + + To ensure consistency and compliance, every organization needs the ability to define and enforce policies in its environment in an automated way. [OPA (Open Policy Agent)](https://www.openpolicyagent.org/) is a policy engine that facilitates policy-based control for cloud native environments. Rancher provides the ability to enable OPA Gatekeeper in Kubernetes clusters, and also installs a couple of built-in policy definitions, which are also called constraint templates. 
OPA provides a high-level declarative language that lets you specify policy as code and ability to extend simple APIs to offload policy decision-making. diff --git a/versioned_docs/version-2.7/integrations-in-rancher/rancher-extensions.md b/versioned_docs/version-2.7/integrations-in-rancher/rancher-extensions.md index 0a86dad7d2da..34929c4bf113 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/rancher-extensions.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/rancher-extensions.md @@ -2,6 +2,10 @@ title: Rancher Extensions --- + + + + New in Rancher v2.7.0, Rancher introduces **extensions**. Extensions allow users, developers, partners, and customers to extend and enhance the Rancher UI. In addition, users can make changes and create enhancements to their UI functionality independent of Rancher releases. Extensions will enable users to build on top of Rancher to better tailor it to their respective environments. Note that users will also have the ability to update to new versions as well as roll back to a previous version. Extensions are Helm charts that can only be installed once into a cluster; therefore, these charts have been simplified and separated from the general Helm charts listed under **Apps**. diff --git a/versioned_docs/version-2.7/reference-guides/kubernetes-concepts.md b/versioned_docs/version-2.7/reference-guides/kubernetes-concepts.md index 631b7cfd118e..707fb8e1c514 100644 --- a/versioned_docs/version-2.7/reference-guides/kubernetes-concepts.md +++ b/versioned_docs/version-2.7/reference-guides/kubernetes-concepts.md @@ -2,6 +2,10 @@ title: Kubernetes Concepts --- + + + + This page explains concepts related to Kubernetes that are important for understanding how Rancher works. The descriptions below provide a simplified overview of Kubernetes components. For more details, refer to the [official documentation on Kubernetes components.](https://kubernetes.io/docs/concepts/overview/components/) ## About Docker diff --git a/versioned_docs/version-2.7/reference-guides/rancher-cluster-tools.md b/versioned_docs/version-2.7/reference-guides/rancher-cluster-tools.md index 41305db554b6..63d8490bc412 100644 --- a/versioned_docs/version-2.7/reference-guides/rancher-cluster-tools.md +++ b/versioned_docs/version-2.7/reference-guides/rancher-cluster-tools.md @@ -2,6 +2,10 @@ title: Cluster Tools for Logging, Monitoring, and Visibility --- + + + + Rancher contains a variety of tools that aren't included in Kubernetes to assist in your DevOps operations. Rancher can integrate with external services to help your clusters run more efficiently. Tools are divided into following categories: diff --git a/versioned_docs/version-2.7/reference-guides/rancher-project-tools.md b/versioned_docs/version-2.7/reference-guides/rancher-project-tools.md index 567cc5cf408c..f199d246d2c3 100644 --- a/versioned_docs/version-2.7/reference-guides/rancher-project-tools.md +++ b/versioned_docs/version-2.7/reference-guides/rancher-project-tools.md @@ -2,6 +2,10 @@ title: Project Tools for Logging, Monitoring, and Visibility --- + + + + Rancher contains a variety of tools that aren't included in Kubernetes to assist in your DevOps operations. Rancher can integrate with external services to help your clusters run more efficiently. 
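Editor's note: circling back to the OPA Gatekeeper page above, a constraint makes the "constraint template" idea concrete. `K8sRequiredLabels` is a common Gatekeeper sample template; whether it is among the built-ins Rancher installs is an assumption here:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["owner"]   # every Namespace must carry an `owner` label
```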
diff --git a/versioned_docs/version-2.7/reference-guides/rke1-template-example-yaml.md b/versioned_docs/version-2.7/reference-guides/rke1-template-example-yaml.md index d14e3863ff97..5827dc55fccf 100644 --- a/versioned_docs/version-2.7/reference-guides/rke1-template-example-yaml.md +++ b/versioned_docs/version-2.7/reference-guides/rke1-template-example-yaml.md @@ -2,6 +2,10 @@ title: RKE1 Example YAML --- + + + + Below is an example RKE template configuration file for reference. The YAML in the RKE template uses the same customization that is used when you create an RKE cluster. However, since the YAML is within the context of a Rancher provisioned RKE cluster, the customization from the RKE docs needs to be nested under the `rancher_kubernetes_engine` directive. diff --git a/versioned_docs/version-2.7/reference-guides/system-tools.md b/versioned_docs/version-2.7/reference-guides/system-tools.md index d9480976757f..8c5aa2e441cd 100644 --- a/versioned_docs/version-2.7/reference-guides/system-tools.md +++ b/versioned_docs/version-2.7/reference-guides/system-tools.md @@ -2,6 +2,10 @@ title: System Tools --- + + + + :::note System Tools has been deprecated since June 2022. diff --git a/versioned_docs/version-2.8/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrade-a-hardened-cluster-to-k8s-v1-25.md b/versioned_docs/version-2.8/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrade-a-hardened-cluster-to-k8s-v1-25.md index 4d276be3e238..4fead7e7330d 100644 --- a/versioned_docs/version-2.8/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrade-a-hardened-cluster-to-k8s-v1-25.md +++ b/versioned_docs/version-2.8/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrade-a-hardened-cluster-to-k8s-v1-25.md @@ -2,6 +2,10 @@ title: Upgrade a Hardened Custom/Imported Cluster to Kubernetes v1.25 --- + + + + Kubernetes v1.25 changes how clusters describe and implement security policies. From this version forward, [Pod Security Policies (PSPs)](https://kubernetes.io/docs/concepts/security/pod-security-policy/) are no longer available. Kubernetes v1.25 replaces them with new security objects: [Pod Security Standards (PSS)](https://kubernetes.io/docs/concepts/security/pod-security-standards/), and [Pod Security Admissions (PSAs)](https://kubernetes.io/docs/concepts/security/pod-security-admission/). If you have custom or imported hardened clusters, you must take special preparations to ensure that the upgrade from an earlier version of Kubernetes to v1.25 or later goes smoothly. diff --git a/versioned_docs/version-2.8/getting-started/installation-and-upgrade/installation-requirements/dockershim.md b/versioned_docs/version-2.8/getting-started/installation-and-upgrade/installation-requirements/dockershim.md index e215e0cfc2e6..211141cb7044 100644 --- a/versioned_docs/version-2.8/getting-started/installation-and-upgrade/installation-requirements/dockershim.md +++ b/versioned_docs/version-2.8/getting-started/installation-and-upgrade/installation-requirements/dockershim.md @@ -2,6 +2,10 @@ title: Dockershim --- + + + + The Dockershim is the CRI compliant layer between the Kubelet and the Docker daemon. As part of the Kubernetes 1.20 release, the [deprecation of the in-tree Dockershim was announced](https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/). 
For more information on the deprecation and its timelines, see the [Kubernetes Dockershim Deprecation FAQ](https://kubernetes.io/blog/2020/12/02/dockershim-faq/#when-will-dockershim-be-removed). RKE clusters now support the external Dockershim to continue leveraging Docker as the CRI runtime. We now implement the upstream open source community external Dockershim announced by [Mirantis and Docker](https://www.mirantis.com/blog/mirantis-to-take-over-support-of-kubernetes-dockershim-2/) to ensure RKE clusters can continue to leverage Docker. diff --git a/versioned_docs/version-2.8/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/docker-install-commands.md b/versioned_docs/version-2.8/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/docker-install-commands.md index 3c7c49004432..53bbdc4e9cc7 100644 --- a/versioned_docs/version-2.8/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/docker-install-commands.md +++ b/versioned_docs/version-2.8/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/docker-install-commands.md @@ -2,6 +2,10 @@ title: Docker Install Commands --- + + + + The Docker installation is for Rancher users who want to test out Rancher. Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server. diff --git a/versioned_docs/version-2.8/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry.md b/versioned_docs/version-2.8/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry.md index f90665820049..981223575fec 100644 --- a/versioned_docs/version-2.8/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry.md +++ b/versioned_docs/version-2.8/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry.md @@ -2,6 +2,10 @@ title: '1. Set up Infrastructure and Private Registry' --- + + + + In this section, you will provision the underlying infrastructure for your Rancher management server in an air gapped environment. You will also set up the private container image registry that must be available to your Rancher node(s). An air gapped environment is an environment where the Rancher server is installed offline or behind a firewall. diff --git a/versioned_docs/version-2.8/getting-started/quick-start-guides/deploy-rancher-manager/prime.md b/versioned_docs/version-2.8/getting-started/quick-start-guides/deploy-rancher-manager/prime.md index fe009b05c96b..32759ea38650 100644 --- a/versioned_docs/version-2.8/getting-started/quick-start-guides/deploy-rancher-manager/prime.md +++ b/versioned_docs/version-2.8/getting-started/quick-start-guides/deploy-rancher-manager/prime.md @@ -2,6 +2,10 @@ title: Rancher Prime --- + + + + Prime is the Rancher ecosystem’s enterprise offering, with additional security, extended lifecycles, and access to Prime-exclusive documentation. Rancher Prime installation assets are hosted on a trusted SUSE registry, owned and managed by Rancher. 
The trusted Prime registry includes only stable releases that have been community-tested. Prime also offers options for production support, as well as add-ons to your subscription that tailor to your commercial needs. diff --git a/versioned_docs/version-2.8/integrations-in-rancher/cis-scans/configuration-reference.md b/versioned_docs/version-2.8/integrations-in-rancher/cis-scans/configuration-reference.md index fa9012e6ed80..0403956be56b 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/cis-scans/configuration-reference.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/cis-scans/configuration-reference.md @@ -2,6 +2,10 @@ title: Configuration --- + + + + This configuration reference is intended to help you manage the custom resources created by the `rancher-cis-benchmark` application. These resources are used for performing CIS scans on a cluster, skipping tests, setting the test profile that will be used during a scan, and other customization. To configure the custom resources, go to the **Cluster Dashboard** To configure the CIS scans, diff --git a/versioned_docs/version-2.8/integrations-in-rancher/cis-scans/custom-benchmark.md b/versioned_docs/version-2.8/integrations-in-rancher/cis-scans/custom-benchmark.md index 8ba63bbe8f7b..47853e45c147 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/cis-scans/custom-benchmark.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/cis-scans/custom-benchmark.md @@ -2,6 +2,10 @@ title: Creating a Custom Benchmark Version for Running a Cluster Scan --- + + + + Each Benchmark Version defines a set of test configuration files that define the CIS tests to be run by the kube-bench tool. The `rancher-cis-benchmark` application installs a few default Benchmark Versions which are listed under CIS Benchmark application menu. diff --git a/versioned_docs/version-2.8/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md b/versioned_docs/version-2.8/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md index 8a88240d963f..795e64cef29b 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md @@ -2,6 +2,10 @@ title: Roles-based Access Control --- + + + + This section describes the permissions required to use the rancher-cis-benchmark App. The rancher-cis-benchmark is a cluster-admin only feature by default. diff --git a/versioned_docs/version-2.8/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md b/versioned_docs/version-2.8/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md index b11887c76c1c..3920a1588c51 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md @@ -2,6 +2,10 @@ title: Skipped and Not Applicable Tests --- + + + + This section lists the tests that are skipped in the permissive test profile for RKE. > All the tests that are skipped and not applicable on this page will be counted as Not Applicable in the v2.5 generated report. The skipped test count will only mention the user-defined skipped tests. This allows user-skipped tests to be distinguished from the tests that are skipped by default in the RKE permissive test profile. 
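Editor's note: the skipped-tests page above distinguishes user-defined skips from the profile's default skips. A hedged sketch of a user-defined skip via a custom profile; the `ClusterScanProfile` kind and `skipTests` field are assumed from the `rancher-cis-benchmark` chart, and the benchmark and test IDs are placeholders:

```yaml
apiVersion: cis.cattle.io/v1
kind: ClusterScanProfile
metadata:
  name: rke-permissive-custom
spec:
  benchmarkVersion: rke-cis-1.6-permissive  # placeholder benchmark version
  skipTests:
    - "1.1.20"  # placeholder IDs; these count as user-defined skips in the report
    - "1.1.21"
```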
diff --git a/versioned_docs/version-2.8/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/adapter-requirements.md b/versioned_docs/version-2.8/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/adapter-requirements.md index 42937ea22622..0116f4d9f630 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/adapter-requirements.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/adapter-requirements.md @@ -2,6 +2,10 @@ title: Prerequisites --- + + + + ### 1. Setting Up License Manager and Purchasing Support First, complete the [first step](https://docs.aws.amazon.com/license-manager/latest/userguide/getting-started.html) of the license manager one-time setup. diff --git a/versioned_docs/version-2.8/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/common-issues.md b/versioned_docs/version-2.8/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/common-issues.md index 02e6bb86340a..c8f8e51cce76 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/common-issues.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/common-issues.md @@ -2,6 +2,10 @@ title: Common Issues --- + + + + **After installing the adapter, a banner message appears in Rancher that says "AWS Marketplace Adapter: Unable to run the adapter, please check the adapter logs"** This error indicates that while the adapter was installed into the cluster, an error has occurred which prevents it from properly checking-in/checking-out licenses. diff --git a/versioned_docs/version-2.8/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/uninstall-adapter.md b/versioned_docs/version-2.8/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/uninstall-adapter.md index 16e0ac3443ef..51e11fe03606 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/uninstall-adapter.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/uninstall-adapter.md @@ -2,6 +2,10 @@ title: Uninstalling The Adapter --- + + + + ### 1. Uninstall the adapter chart using helm. ```bash diff --git a/versioned_docs/version-2.8/integrations-in-rancher/cloud-marketplace/supportconfig.md b/versioned_docs/version-2.8/integrations-in-rancher/cloud-marketplace/supportconfig.md index 9d87830a82eb..6eecac1132a7 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/cloud-marketplace/supportconfig.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/cloud-marketplace/supportconfig.md @@ -2,6 +2,10 @@ title: Supportconfig bundle --- + + + + After installing the CSP adapter, you will have the ability to generate a supportconfig bundle. This bundle is a tar file which can be used to quickly provide information to support. These bundles can be created through Rancher or through direct access to the cluster that Rancher is installed on. Note that accessing through Rancher is preferred. 
diff --git a/versioned_docs/version-2.8/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md b/versioned_docs/version-2.8/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md index 0c229f288b4a..5bec3737edf1 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md @@ -2,6 +2,10 @@ title: Additional Steps for Installing Istio on RKE2 and K3s Clusters --- + + + + When installing or upgrading the Istio Helm chart through **Apps,** 1. If you are installing the chart, click **Customize Helm options before install** and click **Next**. diff --git a/versioned_docs/version-2.8/integrations-in-rancher/istio/configuration-options/pod-security-policies.md b/versioned_docs/version-2.8/integrations-in-rancher/istio/configuration-options/pod-security-policies.md index e774b97bf8cd..b157cb46fd43 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/istio/configuration-options/pod-security-policies.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/istio/configuration-options/pod-security-policies.md @@ -2,6 +2,10 @@ title: Enable Istio with Pod Security Policies --- + + + + If you have restrictive Pod Security Policies enabled, then Istio may not be able to function correctly, because it needs certain permissions in order to install itself and manage pod infrastructure. In this section, we will configure a cluster with PSPs enabled for an Istio install, and also set up the Istio CNI plugin. The Istio CNI plugin removes the need for each application pod to have a privileged `NET_ADMIN` container. For further information, see the [Istio CNI Plugin docs](https://istio.io/docs/setup/additional-setup/cni). Please note that the [Istio CNI Plugin is in alpha](https://istio.io/about/feature-stages/). 
diff --git a/versioned_docs/version-2.8/integrations-in-rancher/istio/configuration-options/project-network-isolation.md b/versioned_docs/version-2.8/integrations-in-rancher/istio/configuration-options/project-network-isolation.md index 16fde314183b..f51a033ce3e3 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/istio/configuration-options/project-network-isolation.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/istio/configuration-options/project-network-isolation.md @@ -2,6 +2,10 @@ title: Additional Steps for Project Network Isolation --- + + + + In clusters where: - You are using the Canal network plugin with Rancher before v2.5.8, or you are using Rancher v2.5.8+ with an any RKE network plug-in that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin diff --git a/versioned_docs/version-2.8/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md b/versioned_docs/version-2.8/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md index 10907f4718cf..29b51149c972 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md @@ -2,6 +2,10 @@ title: Selectors and Scrape Configs --- + + + + The Monitoring app sets `prometheus.prometheusSpec.ignoreNamespaceSelectors=false`, which enables monitoring across all namespaces by default. This ensures you can view traffic, metrics and graphs for resources deployed in a namespace with `istio-injection=enabled` label. diff --git a/versioned_docs/version-2.8/integrations-in-rancher/istio/cpu-and-memory-allocations.md b/versioned_docs/version-2.8/integrations-in-rancher/istio/cpu-and-memory-allocations.md index 37472e328c79..10fe77c9ec4b 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/istio/cpu-and-memory-allocations.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/istio/cpu-and-memory-allocations.md @@ -2,6 +2,10 @@ title: CPU and Memory Allocations --- + + + + This section describes the minimum recommended computing resources for the Istio components in a cluster. The CPU and memory allocations for each component are [configurable.](#configuring-resource-allocations) diff --git a/versioned_docs/version-2.8/integrations-in-rancher/istio/disable-istio.md b/versioned_docs/version-2.8/integrations-in-rancher/istio/disable-istio.md index 052122c4891f..c5f0ae6ce003 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/istio/disable-istio.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/istio/disable-istio.md @@ -2,6 +2,10 @@ title: Disabling Istio --- + + + + This section describes how to uninstall Istio in a cluster or disable a namespace, or workload. ## Uninstall Istio in a Cluster diff --git a/versioned_docs/version-2.8/integrations-in-rancher/istio/rbac-for-istio.md b/versioned_docs/version-2.8/integrations-in-rancher/istio/rbac-for-istio.md index e33bdb725403..b92096b6c9b5 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/istio/rbac-for-istio.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/istio/rbac-for-istio.md @@ -2,6 +2,10 @@ title: Role-based Access Control --- + + + + This section describes the permissions required to access Istio features. 
The rancher istio chart installs three `ClusterRoles` diff --git a/versioned_docs/version-2.8/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md b/versioned_docs/version-2.8/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md index e4020351ab15..d6d2ccd67e27 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md @@ -2,6 +2,10 @@ title: Flows and ClusterFlows --- + + + + See the [Logging operator documentation](https://kube-logging.github.io/docs/configuration/flow/) for the full details on how to configure `Flows` and `ClusterFlows`. See [Rancher Integration with Logging Services: Troubleshooting](../../../pages-for-subheaders/logging.md#The-Logging-Buffer-Overloads-Pods) for how to resolve memory problems with the logging buffer. diff --git a/versioned_docs/version-2.8/integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs.md b/versioned_docs/version-2.8/integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs.md index aa558385bf02..3ae66c9145a5 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs.md @@ -2,6 +2,10 @@ title: Outputs and ClusterOutputs --- + + + + See the [Logging operator documentation](https://kube-logging.github.io/docs/configuration/flow/) for the full details on how to configure `Flows` and `ClusterFlows`. See [Rancher Integration with Logging Services: Troubleshooting](../../../pages-for-subheaders/logging.md#The-Logging-Buffer-Overloads-Pods) for how to resolve memory problems with the logging buffer. diff --git a/versioned_docs/version-2.8/integrations-in-rancher/logging/logging-architecture.md b/versioned_docs/version-2.8/integrations-in-rancher/logging/logging-architecture.md index 560235f4f7c1..958bc5d30695 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/logging/logging-architecture.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/logging/logging-architecture.md @@ -2,6 +2,10 @@ title: Architecture --- + + + + This section summarizes the architecture of the Rancher logging application. For more details about how the Logging operator works, see the [official documentation.](https://kube-logging.github.io/docs/#architecture) diff --git a/versioned_docs/version-2.8/integrations-in-rancher/logging/logging-helm-chart-options.md b/versioned_docs/version-2.8/integrations-in-rancher/logging/logging-helm-chart-options.md index fd6299abc4d2..643114f6d7cf 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/logging/logging-helm-chart-options.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/logging/logging-helm-chart-options.md @@ -2,6 +2,10 @@ title: rancher-logging Helm Chart Options --- + + + + ### Enable/Disable Windows Node Logging You can enable or disable Windows node logging by setting `global.cattle.windows.enabled` to either `true` or `false` in the `values.yaml`. 
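Editor's note: the Flow and Output pages in this 2.8 hunk defer entirely to the upstream Logging operator docs. A minimal paired sketch may save the reader a lookup; the API group and fields follow the Logging operator, while the namespace, labels, and sink endpoint are placeholders:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: app-flow
  namespace: my-app        # placeholder namespace
spec:
  match:
    - select:
        labels:
          app: my-app      # route only this app's container logs
  localOutputRefs:
    - app-output           # must name an Output in the same namespace
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: app-output
  namespace: my-app
spec:
  # Placeholder sink; any Logging operator output plugin works here
  http:
    endpoint: https://logs.example.com
    buffer:
      timekey: 1m
```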
diff --git a/versioned_docs/version-2.8/integrations-in-rancher/logging/rbac-for-logging.md b/versioned_docs/version-2.8/integrations-in-rancher/logging/rbac-for-logging.md index 627e3533c2cc..e718dce1887d 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/logging/rbac-for-logging.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/logging/rbac-for-logging.md @@ -2,6 +2,10 @@ title: Role-based Access Control for Logging --- + + + + Rancher logging has two roles, `logging-admin` and `logging-view`. - `logging-admin` gives users full access to namespaced `Flows` and `Outputs` diff --git a/versioned_docs/version-2.8/integrations-in-rancher/logging/taints-and-tolerations.md b/versioned_docs/version-2.8/integrations-in-rancher/logging/taints-and-tolerations.md index c5cf0e355783..327cf554fdaa 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/logging/taints-and-tolerations.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/logging/taints-and-tolerations.md @@ -2,6 +2,10 @@ title: Working with Taints and Tolerations --- + + + + "Tainting" a Kubernetes node causes pods to repel running on that node. Unless the pods have a `toleration` for that node's taint, they will run on other nodes in the cluster. diff --git a/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md b/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md index 63df3f53f3d7..9125e8536b18 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md @@ -2,6 +2,10 @@ title: Built-in Dashboards --- + + + + ## Grafana UI [Grafana](https://grafana.com/grafana/) allows you to query, visualize, alert on and understand your metrics no matter where they are stored. Create, explore, and share dashboards with your team and foster a data driven culture. diff --git a/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md b/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md index de110cd3d805..bea67b1dc8f3 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md @@ -2,6 +2,10 @@ title: How Monitoring Works --- + + + + ## 1. Architecture Overview _**The following sections describe how data flows through the Monitoring V2 application:**_ diff --git a/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/promql-expressions.md b/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/promql-expressions.md index e301620d92ac..0ea6b134300e 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/promql-expressions.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/promql-expressions.md @@ -2,6 +2,10 @@ title: PromQL Expression Reference --- + + + + The PromQL expressions in this doc can be used to configure alerts. 
For more information about querying the Prometheus time series database, refer to the official [Prometheus documentation.](https://prometheus.io/docs/prometheus/latest/querying/basics/) diff --git a/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md b/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md index 8a24fbd277d7..7c09d89a87f2 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md @@ -1,6 +1,11 @@ --- title: Role-based Access Control --- + + + + + This section describes the expectations for RBAC for Rancher Monitoring. diff --git a/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/windows-support.md b/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/windows-support.md index 6fa7e1d84d01..8869e2cefe52 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/windows-support.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/windows-support.md @@ -2,6 +2,10 @@ title: Windows Cluster Support for Monitoring V2 --- + + + + _Available as of v2.5.8_ Starting at Monitoring V2 14.5.100 (used by default in Rancher 2.5.8), Monitoring V2 can now be deployed on a Windows cluster and will scrape metrics from Windows nodes using [prometheus-community/windows_exporter](https://github.com/prometheus-community/windows_exporter) (previously named `wmi_exporter`). diff --git a/versioned_docs/version-2.8/integrations-in-rancher/opa-gatekeeper.md b/versioned_docs/version-2.8/integrations-in-rancher/opa-gatekeeper.md index 26b49d6f8919..f2185dff6855 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/opa-gatekeeper.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/opa-gatekeeper.md @@ -2,6 +2,10 @@ title: OPA Gatekeeper --- + + + + To ensure consistency and compliance, every organization needs the ability to define and enforce policies in its environment in an automated way. [OPA (Open Policy Agent)](https://www.openpolicyagent.org/) is a policy engine that facilitates policy-based control for cloud native environments. Rancher provides the ability to enable OPA Gatekeeper in Kubernetes clusters, and also installs a couple of built-in policy definitions, which are also called constraint templates. OPA provides a high-level declarative language that lets you specify policy as code and ability to extend simple APIs to offload policy decision-making. diff --git a/versioned_docs/version-2.8/integrations-in-rancher/rancher-extensions.md b/versioned_docs/version-2.8/integrations-in-rancher/rancher-extensions.md index 0a86dad7d2da..34929c4bf113 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/rancher-extensions.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/rancher-extensions.md @@ -2,6 +2,10 @@ title: Rancher Extensions --- + + + + New in Rancher v2.7.0, Rancher introduces **extensions**. Extensions allow users, developers, partners, and customers to extend and enhance the Rancher UI. In addition, users can make changes and create enhancements to their UI functionality independent of Rancher releases. Extensions will enable users to build on top of Rancher to better tailor it to their respective environments. 
Users will also have the ability to update extensions to new versions, as well as roll back to a previous version. Extensions are Helm charts that can only be installed once into a cluster; therefore, these charts have been simplified and separated from the general Helm charts listed under **Apps**. diff --git a/versioned_docs/version-2.8/reference-guides/kubernetes-concepts.md b/versioned_docs/version-2.8/reference-guides/kubernetes-concepts.md index 631b7cfd118e..707fb8e1c514 100644 --- a/versioned_docs/version-2.8/reference-guides/kubernetes-concepts.md +++ b/versioned_docs/version-2.8/reference-guides/kubernetes-concepts.md @@ -2,6 +2,10 @@ title: Kubernetes Concepts --- + + + + This page explains concepts related to Kubernetes that are important for understanding how Rancher works. The descriptions below provide a simplified overview of Kubernetes components. For more details, refer to the [official documentation on Kubernetes components.](https://kubernetes.io/docs/concepts/overview/components/) ## About Docker diff --git a/versioned_docs/version-2.8/reference-guides/rancher-cluster-tools.md b/versioned_docs/version-2.8/reference-guides/rancher-cluster-tools.md index 41305db554b6..63d8490bc412 100644 --- a/versioned_docs/version-2.8/reference-guides/rancher-cluster-tools.md +++ b/versioned_docs/version-2.8/reference-guides/rancher-cluster-tools.md @@ -2,6 +2,10 @@ title: Cluster Tools for Logging, Monitoring, and Visibility --- + + + + Rancher contains a variety of tools that aren't included in Kubernetes to assist in your DevOps operations. Rancher can integrate with external services to help your clusters run more efficiently. Tools are divided into the following categories: diff --git a/versioned_docs/version-2.8/reference-guides/rancher-project-tools.md b/versioned_docs/version-2.8/reference-guides/rancher-project-tools.md index 567cc5cf408c..f199d246d2c3 100644 --- a/versioned_docs/version-2.8/reference-guides/rancher-project-tools.md +++ b/versioned_docs/version-2.8/reference-guides/rancher-project-tools.md @@ -2,6 +2,10 @@ title: Project Tools for Logging, Monitoring, and Visibility --- + + + + Rancher contains a variety of tools that aren't included in Kubernetes to assist in your DevOps operations. Rancher can integrate with external services to help your clusters run more efficiently. diff --git a/versioned_docs/version-2.8/reference-guides/rke1-template-example-yaml.md b/versioned_docs/version-2.8/reference-guides/rke1-template-example-yaml.md index d14e3863ff97..5827dc55fccf 100644 --- a/versioned_docs/version-2.8/reference-guides/rke1-template-example-yaml.md +++ b/versioned_docs/version-2.8/reference-guides/rke1-template-example-yaml.md @@ -2,6 +2,10 @@ title: RKE1 Example YAML --- + + + + Below is an example RKE template configuration file for reference. The YAML in the RKE template uses the same customization that is used when you create an RKE cluster. However, since the YAML is within the context of a Rancher-provisioned RKE cluster, the customization from the RKE docs needs to be nested under the `rancher_kubernetes_engine` directive. diff --git a/versioned_docs/version-2.8/reference-guides/system-tools.md b/versioned_docs/version-2.8/reference-guides/system-tools.md index d9480976757f..8c5aa2e441cd 100644 --- a/versioned_docs/version-2.8/reference-guides/system-tools.md +++ b/versioned_docs/version-2.8/reference-guides/system-tools.md @@ -2,6 +2,10 @@ title: System Tools --- + + + + :::note System Tools has been deprecated since June 2022.
From 5864196421b363e6098f023959aba01797a1c8a7 Mon Sep 17 00:00:00 2001 From: Michael Bolot Date: Thu, 30 Nov 2023 09:33:00 -0600 Subject: [PATCH 48/65] Updating CSP Adapter versions --- .../aws-cloud-marketplace/install-adapter.md | 6 +++++- .../aws-cloud-marketplace/install-adapter.md | 6 +++++- .../aws-cloud-marketplace/install-adapter.md | 14 ++++---------- 3 files changed, 14 insertions(+), 12 deletions(-) diff --git a/versioned_docs/version-2.6/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md b/versioned_docs/version-2.6/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md index 373065dd7e70..73ede34ea42d 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md @@ -18,6 +18,10 @@ In order to deploy and run the adapter successfully, you need to ensure its vers | v2.6.7* | v1.0.1 | | v2.6.8* | v1.0.1 | | v2.6.9 | v1.0.1 | +| v2.6.10 | v1.0.1 | +| v2.6.11 | v1.0.1 | +| v2.6.12 | v1.0.1 | +| v2.6.13 | v1.0.1 | > **Note:** While the adapter can technically be installed on Rancher v2.6.7 and v2.6.8, it is recommended to use version 2.6.9 or higher to avoid unexpected issues @@ -150,4 +154,4 @@ Finally, restart the rancher-csp-adapter deployment to ensure that the updated v kubectl rollout restart deploy rancher-csp-adapter -n cattle-csp-adapter-system ``` -> **Note:** There are methods such as cert-manager's [trust operator](https://cert-manager.io/docs/projects/trust/) which can help reduce the number of manual rotation tasks over time. While these options are not officially supported, they may be useful to users wishing to automate some of these tasks. +> **Note:** Methods such as cert-manager's [trust operator](https://cert-manager.io/docs/projects/trust/) allow you to automate some of these tasks. Although these methods aren't officially supported, they can reduce how often you need to manually rotate certificates. diff --git a/versioned_docs/version-2.7/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md b/versioned_docs/version-2.7/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md index 420e05b2d814..a4240a0156e5 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md @@ -25,6 +25,10 @@ In order to deploy and run the adapter successfully, you need to ensure its vers | v2.7.3 | v2.0.1 | | v2.7.4 | v2.0.1 | | v2.7.5 | v2.0.2 | +| v2.7.6 | v2.0.2 | +| v2.7.7 | v2.0.2 | +| v2.7.8 | v2.0.2 | +| v2.7.9 | v2.0.2 | ### 1. Gain Access to the Local Cluster @@ -156,4 +160,4 @@ Finally, restart the rancher-csp-adapter deployment to ensure that the updated v kubectl rollout restart deploy rancher-csp-adapter -n cattle-csp-adapter-system ``` -> **Note:** There are methods such as cert-manager's [trust operator](https://cert-manager.io/docs/projects/trust/) which can help reduce the number of manual rotation tasks over time. While these options are not officially supported, they may be useful to users wishing to automate some of these tasks. \ No newline at end of file +> **Note:** Methods such as cert-manager's [trust operator](https://cert-manager.io/docs/projects/trust/) allow you to automate some of these tasks. 
Although these methods aren't officially supported, they can reduce how often you need to manually rotate certificates. diff --git a/versioned_docs/version-2.8/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md b/versioned_docs/version-2.8/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md index 3eb473c9902c..6a325ef79a80 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md @@ -17,15 +17,9 @@ In order to deploy and run the adapter successfully, you need to ensure its vers ::: -| Rancher Version | Adapter Version | -|-----------------|:---------------:| -| v2.7.0 | v2.0.0 | -| v2.7.1 | v2.0.0 | -| v2.7.2 | v2.0.1 | -| v2.7.3 | v2.0.1 | -| v2.7.4 | v2.0.1 | -| v2.7.5 | v2.0.2 | - +| Rancher Version | Adapter Version | +|-----------------|:----------------:| +| v2.8.0 | v103.0.0+up3.0.0 | ### 1. Gain Access to the Local Cluster @@ -156,4 +150,4 @@ Finally, restart the rancher-csp-adapter deployment to ensure that the updated v kubectl rollout restart deploy rancher-csp-adapter -n cattle-csp-adapter-system ``` -> **Note:** There are methods such as cert-manager's [trust operator](https://cert-manager.io/docs/projects/trust/) which can help reduce the number of manual rotation tasks over time. While these options are not officially supported, they may be useful to users wishing to automate some of these tasks. +> **Note:** Methods such as cert-manager's [trust operator](https://cert-manager.io/docs/projects/trust/) allow you to automate some of these tasks. Although these methods aren't officially supported, they can reduce how often you need to manually rotate certificates. From f175eae6b4e1b742405dfae0303637223599eec7 Mon Sep 17 00:00:00 2001 From: Marty Hernandez Avedon Date: Wed, 6 Dec 2023 10:03:58 -0500 Subject: [PATCH 49/65] embed video on aws marketplace page (#1016) --- .../deploy-rancher-manager/aws-marketplace.md | 6 +++++- .../deploy-rancher-manager/aws-marketplace.md | 6 +++++- .../deploy-rancher-manager/aws-marketplace.md | 6 +++++- .../deploy-rancher-manager/aws-marketplace.md | 6 +++++- 4 files changed, 20 insertions(+), 4 deletions(-) diff --git a/docs/getting-started/quick-start-guides/deploy-rancher-manager/aws-marketplace.md b/docs/getting-started/quick-start-guides/deploy-rancher-manager/aws-marketplace.md index 875025ee424d..a0500a27747b 100644 --- a/docs/getting-started/quick-start-guides/deploy-rancher-manager/aws-marketplace.md +++ b/docs/getting-started/quick-start-guides/deploy-rancher-manager/aws-marketplace.md @@ -7,4 +7,8 @@ description: Use Amazon EKS to deploy Rancher server. -Amazon Elastic Kubernetes Service (EKS) can quickly [deploy Rancher to Amazon Web Services (AWS)](https://documentation.suse.com/trd/kubernetes/single-html/gs_rancher_aws-marketplace/). To learn more, see our [Amazon Marketplace listing](https://aws.amazon.com/marketplace/pp/prodview-go7ent7goo5ae). Watch the [demo](https://youtu.be/9dznJ7Ons0M) for a walkthrough of AWS Marketplace SUSE Rancher setup. +import YouTube from '@site/src/components/YouTube' + +Amazon Elastic Kubernetes Service (EKS) can quickly [deploy Rancher to Amazon Web Services (AWS)](https://documentation.suse.com/trd/kubernetes/single-html/gs_rancher_aws-marketplace/). 
To learn more, see our [Amazon Marketplace listing](https://aws.amazon.com/marketplace/pp/prodview-go7ent7goo5ae). Watch the demo for a walkthrough of AWS Marketplace SUSE Rancher setup: + + diff --git a/versioned_docs/version-2.6/getting-started/quick-start-guides/deploy-rancher-manager/aws-marketplace.md b/versioned_docs/version-2.6/getting-started/quick-start-guides/deploy-rancher-manager/aws-marketplace.md index 875025ee424d..a0500a27747b 100644 --- a/versioned_docs/version-2.6/getting-started/quick-start-guides/deploy-rancher-manager/aws-marketplace.md +++ b/versioned_docs/version-2.6/getting-started/quick-start-guides/deploy-rancher-manager/aws-marketplace.md @@ -7,4 +7,8 @@ description: Use Amazon EKS to deploy Rancher server. -Amazon Elastic Kubernetes Service (EKS) can quickly [deploy Rancher to Amazon Web Services (AWS)](https://documentation.suse.com/trd/kubernetes/single-html/gs_rancher_aws-marketplace/). To learn more, see our [Amazon Marketplace listing](https://aws.amazon.com/marketplace/pp/prodview-go7ent7goo5ae). Watch the [demo](https://youtu.be/9dznJ7Ons0M) for a walkthrough of AWS Marketplace SUSE Rancher setup. +import YouTube from '@site/src/components/YouTube' + +Amazon Elastic Kubernetes Service (EKS) can quickly [deploy Rancher to Amazon Web Services (AWS)](https://documentation.suse.com/trd/kubernetes/single-html/gs_rancher_aws-marketplace/). To learn more, see our [Amazon Marketplace listing](https://aws.amazon.com/marketplace/pp/prodview-go7ent7goo5ae). Watch the demo for a walkthrough of AWS Marketplace SUSE Rancher setup: + + diff --git a/versioned_docs/version-2.7/getting-started/quick-start-guides/deploy-rancher-manager/aws-marketplace.md b/versioned_docs/version-2.7/getting-started/quick-start-guides/deploy-rancher-manager/aws-marketplace.md index 875025ee424d..a0500a27747b 100644 --- a/versioned_docs/version-2.7/getting-started/quick-start-guides/deploy-rancher-manager/aws-marketplace.md +++ b/versioned_docs/version-2.7/getting-started/quick-start-guides/deploy-rancher-manager/aws-marketplace.md @@ -7,4 +7,8 @@ description: Use Amazon EKS to deploy Rancher server. -Amazon Elastic Kubernetes Service (EKS) can quickly [deploy Rancher to Amazon Web Services (AWS)](https://documentation.suse.com/trd/kubernetes/single-html/gs_rancher_aws-marketplace/). To learn more, see our [Amazon Marketplace listing](https://aws.amazon.com/marketplace/pp/prodview-go7ent7goo5ae). Watch the [demo](https://youtu.be/9dznJ7Ons0M) for a walkthrough of AWS Marketplace SUSE Rancher setup. +import YouTube from '@site/src/components/YouTube' + +Amazon Elastic Kubernetes Service (EKS) can quickly [deploy Rancher to Amazon Web Services (AWS)](https://documentation.suse.com/trd/kubernetes/single-html/gs_rancher_aws-marketplace/). To learn more, see our [Amazon Marketplace listing](https://aws.amazon.com/marketplace/pp/prodview-go7ent7goo5ae). Watch the demo for a walkthrough of AWS Marketplace SUSE Rancher setup: + + diff --git a/versioned_docs/version-2.8/getting-started/quick-start-guides/deploy-rancher-manager/aws-marketplace.md b/versioned_docs/version-2.8/getting-started/quick-start-guides/deploy-rancher-manager/aws-marketplace.md index 875025ee424d..a0500a27747b 100644 --- a/versioned_docs/version-2.8/getting-started/quick-start-guides/deploy-rancher-manager/aws-marketplace.md +++ b/versioned_docs/version-2.8/getting-started/quick-start-guides/deploy-rancher-manager/aws-marketplace.md @@ -7,4 +7,8 @@ description: Use Amazon EKS to deploy Rancher server. 
-Amazon Elastic Kubernetes Service (EKS) can quickly [deploy Rancher to Amazon Web Services (AWS)](https://documentation.suse.com/trd/kubernetes/single-html/gs_rancher_aws-marketplace/). To learn more, see our [Amazon Marketplace listing](https://aws.amazon.com/marketplace/pp/prodview-go7ent7goo5ae). Watch the [demo](https://youtu.be/9dznJ7Ons0M) for a walkthrough of AWS Marketplace SUSE Rancher setup. +import YouTube from '@site/src/components/YouTube' + +Amazon Elastic Kubernetes Service (EKS) can quickly [deploy Rancher to Amazon Web Services (AWS)](https://documentation.suse.com/trd/kubernetes/single-html/gs_rancher_aws-marketplace/). To learn more, see our [Amazon Marketplace listing](https://aws.amazon.com/marketplace/pp/prodview-go7ent7goo5ae). Watch the demo for a walkthrough of AWS Marketplace SUSE Rancher setup: + + From 03ea6163fb53e46c2ecd6e5cb9f2b2b4a1c74721 Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Wed, 6 Dec 2023 11:48:27 -0800 Subject: [PATCH 50/65] Port version-2.8 updates to latest (/docs) (#1013) * Port version-2.8 updates to latest (/docs) Includes changes from 1b6d9506 (2023-10-06) to 1f39a6ff (2023-11-30) * Fix redirects --- docs/api/api-reference.mdx | 7 + docs/api/quickstart.md | 140 ++++++++++++++++++ docs/api/workflows/projects.md | 109 ++++++++++++++ .../installation-references/feature-flags.md | 2 +- .../port-requirements.md | 2 +- .../deploy-apps-across-clusters/fleet.md | 6 +- .../dynamically-provision-new-storage.md | 2 +- .../set-up-existing-storage.md | 2 +- .../aws-cloud-marketplace/install-adapter.md | 14 +- .../elemental/elemental.md | 27 ++++ docs/integrations-in-rancher/epinio/epinio.md | 22 +++ .../architecture.md | 4 - docs/integrations-in-rancher/fleet/fleet.md | 23 +++ .../fleet/overview.md} | 14 +- .../use-fleet-behind-a-proxy.md | 4 - .../windows-support.md | 4 - .../harvester/harvester.md | 11 ++ .../{harvester.md => harvester/overview.md} | 20 +-- .../integrations-in-rancher.mdx | 66 +++++++++ .../kubernetes-distributions.md | 31 ++++ .../kubewarden/kubewarden.md | 35 +++++ .../longhorn/longhorn.md | 15 ++ .../{longhorn.md => longhorn/overview.md} | 4 +- .../built-in-dashboards.md | 2 +- .../neuvector/neuvector.md | 27 ++++ .../{neuvector.md => neuvector/overview.md} | 6 +- docs/integrations-in-rancher/opni/opni.md | 23 +++ .../rancher-desktop.md | 34 +++++ .../about-provisioning-drivers.md | 2 +- .../create-kubernetes-persistent-storage.md | 2 +- docs/pages-for-subheaders/rancher-security.md | 2 +- .../rancher-webhook-hardening.md | 133 +++++++++++++++++ docusaurus.config.js | 6 +- sidebars.js | 97 +++++++++--- 34 files changed, 810 insertions(+), 88 deletions(-) create mode 100644 docs/api/api-reference.mdx create mode 100644 docs/api/quickstart.md create mode 100644 docs/api/workflows/projects.md create mode 100644 docs/integrations-in-rancher/elemental/elemental.md create mode 100644 docs/integrations-in-rancher/epinio/epinio.md rename docs/integrations-in-rancher/{fleet-gitops-at-scale => fleet}/architecture.md (79%) create mode 100644 docs/integrations-in-rancher/fleet/fleet.md rename docs/{pages-for-subheaders/fleet-gitops-at-scale.md => integrations-in-rancher/fleet/overview.md} (81%) rename docs/integrations-in-rancher/{fleet-gitops-at-scale => fleet}/use-fleet-behind-a-proxy.md (94%) rename docs/integrations-in-rancher/{fleet-gitops-at-scale => fleet}/windows-support.md (84%) create mode 100644 docs/integrations-in-rancher/harvester/harvester.md rename docs/integrations-in-rancher/{harvester.md => harvester/overview.md} (82%) 
create mode 100644 docs/integrations-in-rancher/integrations-in-rancher.mdx create mode 100644 docs/integrations-in-rancher/kubernetes-distributions/kubernetes-distributions.md create mode 100644 docs/integrations-in-rancher/kubewarden/kubewarden.md create mode 100644 docs/integrations-in-rancher/longhorn/longhorn.md rename docs/integrations-in-rancher/{longhorn.md => longhorn/overview.md} (96%) create mode 100644 docs/integrations-in-rancher/neuvector/neuvector.md rename docs/integrations-in-rancher/{neuvector.md => neuvector/overview.md} (98%) create mode 100644 docs/integrations-in-rancher/opni/opni.md create mode 100644 docs/integrations-in-rancher/rancher-desktop.md create mode 100644 docs/reference-guides/rancher-security/rancher-webhook-hardening.md diff --git a/docs/api/api-reference.mdx b/docs/api/api-reference.mdx new file mode 100644 index 000000000000..d8674b6e14ff --- /dev/null +++ b/docs/api/api-reference.mdx @@ -0,0 +1,7 @@ +--- +title: API Reference +--- + +import ApiDocMdx from '@theme/ApiDocMdx'; + + \ No newline at end of file diff --git a/docs/api/quickstart.md b/docs/api/quickstart.md new file mode 100644 index 000000000000..4529964d59ab --- /dev/null +++ b/docs/api/quickstart.md @@ -0,0 +1,140 @@ +--- +title: API Quick Start Guide +--- + +You can access Rancher's resources through the Kubernetes API. This guide will help you get started on using this API as a Rancher user. + +1. In the upper left corner, click **☰ > Global Settings**. +2. Find and copy the address in the `server-url` field. +3. [Create](../reference-guides/user-settings/api-keys#creating-an-api-key) a Rancher API key with no scope. + + :::danger + + A Rancher API key with no scope grants unrestricted access to all resources that the user can access. To prevent unauthorized use, this key should be stored securely and rotated frequently. + + ::: + +4. Create a `kubeconfig.yaml` file. Replace `$SERVER_URL` with the server url and `$API_KEY` with your Rancher API key: + + ```yaml + apiVersion: v1 + kind: Config + clusters: + - name: "rancher" + cluster: + server: "$SERVER_URL" + + users: + - name: "rancher" + user: + token: "$API_KEY" + + contexts: + - name: "rancher" + context: + user: "rancher" + cluster: "rancher" + + current-context: "rancher" + ``` + +You can use this file with any compatible tool, such as kubectl or [client-go](https://github.com/kubernetes/client-go). For a quick demo, see the [kubectl example](#api-kubectl-example). + +For more information on handling more complex certificate setups, see [Specifying CA Certs](#specifying-ca-certs). + +For more information on available kubeconfig options, see the [upstream documentation](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/). + +## API kubectl Example + +1. Set your KUBECONFIG environment variable to the kubeconfig file you just created: + + ```bash + export KUBECONFIG=$(pwd)/kubeconfig.yaml + ``` + +2. Use `kubectl explain` to view the available fields for projects, or complex sub-fields of resources: + + ```bash + kubectl explain projects + kubectl explain projects.spec + ``` + +Not all resources may have detailed output. + +3. Add the following content to a file named `project.yaml`: + + ```yaml + apiVersion: management.cattle.io/v3 + kind: Project + metadata: + # name should be unique across all projects in every cluster + name: p-abc123 + # generateName can be used instead of `name` to randomly generate a name. + # generateName: p- + # namespace should match spec.ClusterName. 
+ namespace: local + spec: + # clusterName should match `metadata.Name` of the target cluster. + clusterName: local + description: Example Project + # displayName is the human-readable name and is visible from the UI. + displayName: Example + ``` + +4. Create the project: + + ```bash + kubectl create -f project.yaml + ``` + +5. Delete the project: + + How you delete the project depends on how you created the project name. + + **A. If you used `name` when creating the project**: + + ```bash + kubectl delete -f project.yaml + ``` + + **B. If you used `generateName`**: + + Replace `$PROJECT_NAME` with the randomly generated name of the project displayed by kubectl after you created the project. + + ```bash + kubectl delete project $PROJECT_NAME -n local + ``` + +## Specifying CA Certs + +To ensure that your tools can recognize Rancher's CA certificates, most setups require additional modifications to the above template. + +1. In the upper left corner, click **☰ > Global Settings**. +2. Find and copy the value in the `ca-certs` field. +3. Save the value in a file named `rancher.crt`. + + :::note + If your Rancher instance is proxied by another service, you must extract the certificate that the service is using, and add it to the kubeconfig file, as demonstrated in step 5. + ::: + +4. The following commands convert `rancher.crt` to base64 output, trim all new-lines, update the cluster in the kubeconfig with the certificate, and finish by removing the `rancher.crt` file: + + ```bash + export KUBECONFIG=$PATH_TO_RANCHER_KUBECONFIG + kubectl config set clusters.rancher.certificate-authority-data $(cat rancher.crt | base64 -i - | tr -d '\n') + rm rancher.crt + ``` +5. (Optional) If you use self-signed certificates that aren't trusted by your system, you can set the insecure option in your kubeconfig with kubectl: + + :::danger + + This option shouldn't be used in production as it is a security risk. + + ::: + + ```bash + export KUBECONFIG=$PATH_TO_RANCHER_KUBECONFIG + kubectl config set clusters.rancher.insecure-skip-tls-verify true + ``` + + If your Rancher instance is proxied by another service, you must extract the certificate that the service is using, and add it to the kubeconfig file, as demonstrated above. diff --git a/docs/api/workflows/projects.md b/docs/api/workflows/projects.md new file mode 100644 index 000000000000..ddc2f8c5aae7 --- /dev/null +++ b/docs/api/workflows/projects.md @@ -0,0 +1,109 @@ +--- +title: Projects +--- + +## Creating a Project + +Project resources may only be created on the management cluster. See below for [creating namespaces under projects in a managed cluster](#creating-a-namespace-in-a-project). + +### Creating a Basic Project + +```bash +kubectl create -f - <:`. + +## Deleting a Project + +Look up the project to delete in the cluster namespace: + +```bash +kubectl --namespace c-m-abcde get projects +``` + +Delete the project under the cluster namespace: + +```bash +kubectl --namespace c-m-abcde delete project p-vwxyz +``` diff --git a/docs/getting-started/installation-and-upgrade/installation-references/feature-flags.md b/docs/getting-started/installation-and-upgrade/installation-references/feature-flags.md index 377d8111523b..176fc2403140 100644 --- a/docs/getting-started/installation-and-upgrade/installation-references/feature-flags.md +++ b/docs/getting-started/installation-and-upgrade/installation-references/feature-flags.md @@ -20,7 +20,7 @@ The following is a list of feature flags available in Rancher. 
If you've upgrade
If you've upgrade - `continuous-delivery`: Allows Fleet GitOps to be disabled separately from Fleet. See [Continuous Delivery.](../../../how-to-guides/advanced-user-guides/enable-experimental-features/continuous-delivery.md) for more information. - `fleet`: The Rancher provisioning framework in v2.6 and later requires Fleet. The flag will be automatically enabled when you upgrade, even if you disabled this flag in an earlier version of Rancher. See [Fleet - GitOps at Scale](../../../how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md) for more information. -- `harvester`: Manages access to the Virtualization Management page, where users can navigate directly to Harvester clusters and access the Harvester UI. See [Harvester Integration](../../../integrations-in-rancher/harvester.md) for more information. +- `harvester`: Manages access to the Virtualization Management page, where users can navigate directly to Harvester clusters and access the Harvester UI. See [Harvester Integration Overview](../../../integrations-in-rancher/harvester/overview.md) for more information. - `istio-virtual-service-ui`: Enables a [visual interface](../../../how-to-guides/advanced-user-guides/enable-experimental-features/istio-traffic-management-features.md) to create, read, update, and delete Istio virtual services and destination rules, which are Istio traffic management features. - `legacy`: Enables a set of features from 2.5.x and earlier, that are slowly being phased out in favor of newer implementations. These are a mix of deprecated features as well as features that will eventually be available to newer versions. This flag is disabled by default on new Rancher installations. If you're upgrading from a previous version of Rancher, this flag is enabled. - `multi-cluster-management`: Allows multi-cluster provisioning and management of Kubernetes clusters. This flag can only be set at install time. It can't be enabled or disabled later. diff --git a/docs/getting-started/installation-and-upgrade/installation-requirements/port-requirements.md b/docs/getting-started/installation-and-upgrade/installation-requirements/port-requirements.md index 73c739499a59..eecd8dd258bb 100644 --- a/docs/getting-started/installation-and-upgrade/installation-requirements/port-requirements.md +++ b/docs/getting-started/installation-and-upgrade/installation-requirements/port-requirements.md @@ -196,7 +196,7 @@ If security isn't a large concern and you're okay with opening a few additional ### Ports for Harvester Clusters -Refer [here](../../../integrations-in-rancher/harvester.md#port-requirements) for more information on Harvester port requirements. +Refer to the [Harvester Integration Overview](../../../integrations-in-rancher/harvester/overview.md#port-requirements) for more information on Harvester port requirements. 
### Ports for Rancher Launched Kubernetes Clusters using Node Pools diff --git a/docs/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md b/docs/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md index 8ff8e4cd5bef..e9d4dec7faa2 100644 --- a/docs/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md +++ b/docs/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md @@ -13,7 +13,7 @@ Fleet is a separate project from Rancher, and can be installed on any Kubernetes ## Architecture -For information about how Fleet works, see [this page.](../../../integrations-in-rancher/fleet-gitops-at-scale/architecture.md) +For information about how Fleet works, see [this page.](../../../integrations-in-rancher/fleet/architecture.md) ## Accessing Fleet in the Rancher UI @@ -38,7 +38,7 @@ Follow the steps below to access Continuous Delivery in the Rancher UI: ## Windows Support -For details on support for clusters with Windows nodes, see [this page.](../../../integrations-in-rancher/fleet-gitops-at-scale/windows-support.md) +For details on support for clusters with Windows nodes, see [this page.](../../../integrations-in-rancher/fleet/windows-support.md) ## GitHub Repository @@ -48,7 +48,7 @@ The Fleet Helm charts are available [here.](https://github.com/rancher/fleet/rel ## Using Fleet Behind a Proxy -For details on using Fleet behind a proxy, see [this page.](../../../integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy.md) +For details on using Fleet behind a proxy, see [this page.](../../../integrations-in-rancher/fleet/use-fleet-behind-a-proxy.md) ## Helm Chart Dependencies diff --git a/docs/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md b/docs/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md index 5ad0b03cd453..77343a9f5c98 100644 --- a/docs/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md +++ b/docs/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md @@ -12,7 +12,7 @@ This section assumes that you understand the Kubernetes concepts of storage clas New storage is often provisioned by a cloud provider such as Amazon EBS. However, new storage doesn't have to be in the cloud. -If you have a pool of block storage, and you don't want to use a cloud provider, Longhorn could help you provide persistent storage to your Kubernetes cluster. For more information, see [this page.](../../../../../integrations-in-rancher/longhorn.md) +If you have a pool of block storage, and you don't want to use a cloud provider, Longhorn could help you provide persistent storage to your Kubernetes cluster. For more information, see [Cloud Native Storage with Longhorn](../../../../../integrations-in-rancher/longhorn/longhorn.md). 
To provision new storage for your workloads, follow these steps: diff --git a/docs/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/set-up-existing-storage.md b/docs/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/set-up-existing-storage.md index 60661aea03b2..4be791f5cc3d 100644 --- a/docs/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/set-up-existing-storage.md +++ b/docs/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/set-up-existing-storage.md @@ -31,7 +31,7 @@ Creating a persistent volume in Rancher will not create a storage volume. It onl The steps to set up a persistent storage device will differ based on your infrastructure. We provide examples of how to set up storage using [vSphere,](../../provisioning-storage-examples/vsphere-storage.md) [NFS,](../../provisioning-storage-examples/nfs-storage.md) or Amazon's [EBS.](../../provisioning-storage-examples/persistent-storage-in-amazon-ebs.md) -If you have a pool of block storage, and you don't want to use a cloud provider, Longhorn could help you provide persistent storage to your Kubernetes cluster. For more information, see [this page.](../../../../../integrations-in-rancher/longhorn.md) +If you have a pool of block storage, and you don't want to use a cloud provider, Longhorn could help you provide persistent storage to your Kubernetes cluster. For more information, see [Cloud Native Storage with Longhorn](../../../../../integrations-in-rancher/longhorn/longhorn.md). ### 2. Add a PersistentVolume that refers to the persistent storage diff --git a/docs/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md b/docs/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md index 420e05b2d814..6a325ef79a80 100644 --- a/docs/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md +++ b/docs/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md @@ -17,15 +17,9 @@ In order to deploy and run the adapter successfully, you need to ensure its vers ::: -| Rancher Version | Adapter Version | -|-----------------|:---------------:| -| v2.7.0 | v2.0.0 | -| v2.7.1 | v2.0.0 | -| v2.7.2 | v2.0.1 | -| v2.7.3 | v2.0.1 | -| v2.7.4 | v2.0.1 | -| v2.7.5 | v2.0.2 | - +| Rancher Version | Adapter Version | +|-----------------|:----------------:| +| v2.8.0 | v103.0.0+up3.0.0 | ### 1. Gain Access to the Local Cluster @@ -156,4 +150,4 @@ Finally, restart the rancher-csp-adapter deployment to ensure that the updated v kubectl rollout restart deploy rancher-csp-adapter -n cattle-csp-adapter-system ``` -> **Note:** There are methods such as cert-manager's [trust operator](https://cert-manager.io/docs/projects/trust/) which can help reduce the number of manual rotation tasks over time. While these options are not officially supported, they may be useful to users wishing to automate some of these tasks. \ No newline at end of file +> **Note:** Methods such as cert-manager's [trust operator](https://cert-manager.io/docs/projects/trust/) allow you to automate some of these tasks. Although these methods aren't officially supported, they can reduce how often you need to manually rotate certificates. 
diff --git a/docs/integrations-in-rancher/elemental/elemental.md b/docs/integrations-in-rancher/elemental/elemental.md new file mode 100644 index 000000000000..5e93a4b3538b --- /dev/null +++ b/docs/integrations-in-rancher/elemental/elemental.md @@ -0,0 +1,27 @@ +--- +title: Operating System Management with Elemental +--- + +Elemental enables cloud-native host management. Elemental allows you to onboard any machine in any location, whether it's in a datacenter or on the edge, and integrate it seamlessly into Kubernetes while managing your workflows (e.g., OS updates). + +## Elemental with Rancher + +Elemental in Rancher: + +- Is Kubernetes native, which allows you to manage the OS via Elemental in Kubernetes clusters. +- Is nondisruptive from a Kubernetes operational perspective. +- Is declarative and GitOps friendly. +- Allows OCI Image-based flows, which are trusted, deterministic, and predictable. +- Works at scale. It enables fleet-sized OS management. + +### When should I use Elemental? + +- Elemental enables cloud-native OS management from Rancher Manager. It works with any OS (e.g., SLE Micro vanilla). +- Elemental allows cloud-native management for machines in datacenters and on the edge. +- Elemental is flexible and allows platform teams to perform all kinds of workflows across their fleet of machines. + +## Elemental with Rancher Prime + +- Already deeply integrated as a GUI extension in Rancher. +- Extends the Rancher story to the OS, working seamlessly with SLE Micro for Rancher today. + \ No newline at end of file diff --git a/docs/integrations-in-rancher/epinio/epinio.md b/docs/integrations-in-rancher/epinio/epinio.md new file mode 100644 index 000000000000..fe8e4197f906 --- /dev/null +++ b/docs/integrations-in-rancher/epinio/epinio.md @@ -0,0 +1,22 @@ +--- +title: Application Development Engine with Epinio +--- + + + + + +Epinio is a Kubernetes-based Application Development Platform. It helps operators and developers collaborate without conflict, and accelerates the development process. With Epinio, teams can move from application sources to a live URL in a single step. + +## Epinio with Rancher + +Epinio's integration with Rancher gives developers a jump start, without having to deal with the installation process or configuration. You can install Epinio directly from the Rancher UI's Apps page. + +## Epinio with Rancher Prime + +Rancher Prime customers can expect better integration of Epinio with other areas in the Rancher ecosystem, such as: + +- Better integration with Rancher authentication. +- Integration with NeuVector and Kubewarden. +- Custom Helm chart templates with preset annotations to seamlessly integrate with monitoring and other key tools. +- Improved service marketplace. diff --git a/docs/integrations-in-rancher/fleet-gitops-at-scale/architecture.md b/docs/integrations-in-rancher/fleet/architecture.md similarity index 79% rename from docs/integrations-in-rancher/fleet-gitops-at-scale/architecture.md rename to docs/integrations-in-rancher/fleet/architecture.md index 9d64e38de41b..f012a3a9921c 100644 --- a/docs/integrations-in-rancher/fleet-gitops-at-scale/architecture.md +++ b/docs/integrations-in-rancher/fleet/architecture.md @@ -2,10 +2,6 @@ title: Architecture --- - - - - Fleet can manage deployments from git of raw Kubernetes YAML, Helm charts, Kustomize, or any combination of the three. Regardless of the source, all resources are dynamically turned into Helm charts, and Helm is used as the engine to deploy everything in the cluster.
This gives you a high degree of control, consistency, and auditability. Fleet focuses not only on the ability to scale, but also on giving you control and visibility into exactly what is installed on the cluster. ![Architecture](/img/fleet-architecture.svg) diff --git a/docs/integrations-in-rancher/fleet/fleet.md b/docs/integrations-in-rancher/fleet/fleet.md new file mode 100644 index 000000000000..0db52a7e9e17 --- /dev/null +++ b/docs/integrations-in-rancher/fleet/fleet.md @@ -0,0 +1,23 @@ +--- +title: Continuous Delivery with Fleet +--- + +Fleet orchestrates and manages the continuous delivery of applications through the supply chain for fleets of clusters. Fleet organizes the supply chain to help teams deliver with confidence and trust in a timely manner, using GitOps as a safe operating model. + +## Fleet with Rancher + +Many users manage over 10 clusters at a time. Given the proliferation of clusters, continuous delivery is an important part of Rancher. Fleet ensures a reliable continuous delivery experience using GitOps, which is a safe and increasingly common operating model. + +### When should I use Fleet? + +- I need to deploy my monitoring stack (e.g., Grafana, Prometheus) across geographical regions, each with different retention policies. +- I am a platform operator and want to provision clusters with all components using a scalable and safe operating model (GitOps). +- I am an application developer and want to get my latest changes automatically into my development environment. + +## Fleet with Rancher Prime + +Fleet is already deeply integrated as the Continuous Delivery tool and GitOps Engine in Rancher. + + diff --git a/docs/pages-for-subheaders/fleet-gitops-at-scale.md b/docs/integrations-in-rancher/fleet/overview.md similarity index 81% rename from docs/pages-for-subheaders/fleet-gitops-at-scale.md rename to docs/integrations-in-rancher/fleet/overview.md index 54e38cc48eb4..b7e1806fb586 100644 --- a/docs/pages-for-subheaders/fleet-gitops-at-scale.md +++ b/docs/integrations-in-rancher/fleet/overview.md @@ -1,11 +1,7 @@ --- -title: Continuous Delivery with Fleet +title: Overview --- - - - - Continuous Delivery with Fleet is GitOps at scale. Fleet is designed to manage up to a million clusters. It’s also lightweight enough that it works great for a [single cluster](https://fleet.rancher.io/installation#default-install) too, but it really shines when you get to a [large scale](https://fleet.rancher.io/installation#configuration-for-multi-cluster). By large scale we mean either a lot of clusters, a lot of deployments, or a lot of teams in a single organization. Fleet is a separate project from Rancher, and can be installed on any Kubernetes cluster with Helm. ## Architecture -For information about how Fleet works, see [this page](../integrations-in-rancher/fleet-gitops-at-scale/architecture.md). +For information about how Fleet works, see the [Architecture](./architecture.md) page. ## Accessing Fleet in the Rancher UI @@ -41,7 +37,7 @@ Follow the steps below to access Continuous Delivery in the Rancher UI: ## Windows Support -For details on support for clusters with Windows nodes, see [this page](../integrations-in-rancher/fleet-gitops-at-scale/windows-support.md). +For details on support for clusters with Windows nodes, see the [Windows Support](./windows-support.md) page.
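To make the GitOps workflow concrete, the following is a rough sketch of the `GitRepo` resource that drives Fleet; the repository URL, branch, path, and cluster label are placeholder assumptions, not required values.

```yaml
# A minimal GitRepo sketch; the repo, path, and labels are assumptions.
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: sample
  namespace: fleet-default        # default workspace for downstream clusters
spec:
  repo: https://github.com/rancher/fleet-examples
  branch: master
  paths:
    - simple                      # directory in the repo to deploy
  targets:
    - clusterSelector:            # deploy only to clusters labeled env=dev
        matchLabels:
          env: dev
```

Once applied, Fleet watches the repository and rolls any changes out to every cluster that matches the selector.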
## GitHub Repository The Fleet Helm charts are available [here](https://github.com/rancher/fleet/rele ## Using Fleet Behind a Proxy -For details on using Fleet behind a proxy, see [this page](../integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy.md). +For details on using Fleet behind a proxy, see the [Using Fleet Behind a Proxy](./use-fleet-behind-a-proxy.md) page. ## Helm Chart Dependencies @@ -59,7 +55,7 @@ The Helm chart in the git repository must include its dependencies in the charts ## Troubleshooting -- **Known Issue**: clientSecretName and helmSecretName secrets for Fleet gitrepos are not included in the backup nor restore created by the [backup-restore-operator](../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher.md#1-install-the-rancher-backup-operator). We will update the community once a permanent solution is in place. +- **Known Issue**: clientSecretName and helmSecretName secrets for Fleet gitrepos are not included in the backup or restore created by the [backup-restore-operator](../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher.md#1-install-the-rancher-backup-operator). We will update the community once a permanent solution is in place. - **Temporary Workaround**: By default, user-defined secrets are not backed up in Fleet. It is necessary to recreate secrets if performing a disaster recovery restore or migration of Rancher into a fresh cluster. To modify resourceSet to include extra resources you want to back up, refer to docs [here](https://github.com/rancher/backup-restore-operator#user-flow). diff --git a/docs/integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy.md b/docs/integrations-in-rancher/fleet/use-fleet-behind-a-proxy.md similarity index 94% rename from docs/integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy.md rename to docs/integrations-in-rancher/fleet/use-fleet-behind-a-proxy.md index 6160a19672a3..e6a3f8cf9618 100644 --- a/docs/integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy.md +++ b/docs/integrations-in-rancher/fleet/use-fleet-behind-a-proxy.md @@ -2,10 +2,6 @@ title: Using Fleet Behind a Proxy --- - - - - In this section, you'll learn how to enable Fleet in a setup that has a Rancher server with a public IP and a Kubernetes cluster that has no public IP, but is configured to use a proxy. Rancher does not establish connections with registered downstream clusters. The Rancher agent deployed on the downstream cluster must be able to establish the connection with Rancher. diff --git a/docs/integrations-in-rancher/fleet-gitops-at-scale/windows-support.md b/docs/integrations-in-rancher/fleet/windows-support.md similarity index 84% rename from docs/integrations-in-rancher/fleet-gitops-at-scale/windows-support.md rename to docs/integrations-in-rancher/fleet/windows-support.md index f7bf04055f98..aea98b74dbc0 100644 --- a/docs/integrations-in-rancher/fleet-gitops-at-scale/windows-support.md +++ b/docs/integrations-in-rancher/fleet/windows-support.md @@ -2,10 +2,6 @@ title: Windows Support --- - - - - Prior to Rancher v2.5.6, the `agent` did not have native Windows manifests on downstream clusters with Windows nodes. This would result in a failing `agent` pod for the cluster.
If you are upgrading from an older version of Rancher to v2.5.6+, you can deploy a working `agent` with the following workflow *in the downstream cluster*: diff --git a/docs/integrations-in-rancher/harvester/harvester.md b/docs/integrations-in-rancher/harvester/harvester.md new file mode 100644 index 000000000000..c54b817839bd --- /dev/null +++ b/docs/integrations-in-rancher/harvester/harvester.md @@ -0,0 +1,11 @@ +--- +title: Virtualization on Kubernetes with Harvester +--- + +## Harvester + +Introduced in Rancher v2.6.1, Harvester is an open-source hyper-converged infrastructure (HCI) software built on Kubernetes. Harvester installs on bare metal servers and provides integrated virtualization and distributed storage capabilities. Although Harvester operates using Kubernetes, it does not require knowledge of Kubernetes concepts, making it more user-friendly. + +## Harvester with Rancher + +With Rancher Prime and Harvester, IT operators now have access to an enterprise-ready, simple-to-use infrastructure platform that cohesively manages their virtual machines and Kubernetes clusters alongside one another. For more information on the support offering, see the [Support Matrix](https://www.suse.com/suse-harvester/support-matrix/all-supported-versions/harvester-v1-2-0/). With the Rancher Virtualization Management feature, users can import and manage multiple Harvester clusters, leveraging Rancher's authentication and RBAC controls for multi-tenancy support. diff --git a/docs/integrations-in-rancher/harvester.md b/docs/integrations-in-rancher/harvester/overview.md similarity index 82% rename from docs/integrations-in-rancher/harvester.md rename to docs/integrations-in-rancher/harvester/overview.md index 300a5826e162..55a9f5b16ac4 100644 --- a/docs/integrations-in-rancher/harvester.md +++ b/docs/integrations-in-rancher/harvester/overview.md @@ -1,16 +1,12 @@ --- -title: Harvester Integration +title: Overview --- - - - - Introduced in Rancher v2.6.1, [Harvester](https://docs.harvesterhci.io/) is an open-source hyper-converged infrastructure (HCI) software built on Kubernetes. Harvester installs on bare metal servers and provides integrated virtualization and distributed storage capabilities. Although Harvester operates using Kubernetes, it does not require users to know Kubernetes concepts, making it a more user-friendly application. ### Feature Flag -The Harvester feature flag is used to manage access to the Virtualization Management (VM) page in Rancher where users can navigate directly to Harvester clusters and access the Harvester UI. The Harvester feature flag is enabled by default. Click [here](../pages-for-subheaders/enable-experimental-features.md) for more information on feature flags in Rancher. +The Harvester feature flag is used to manage access to the Virtualization Management (VM) page in Rancher where users can navigate directly to Harvester clusters and access the Harvester UI. The Harvester feature flag is enabled by default. Click [here](../../pages-for-subheaders/enable-experimental-features.md) for more information on feature flags in Rancher. To navigate to the Harvester cluster, click **☰ > Virtualization Management**. From Harvester Clusters page, click one of the clusters listed to go to the single Harvester cluster view. @@ -28,7 +24,7 @@ The [Harvester node driver](https://docs.harvesterhci.io/v1.1/rancher/node/node- Harvester allows `.ISO` images to be uploaded and displayed through the Harvester UI, but this is not supported in the Rancher UI.
This is because `.ISO` images usually require additional setup that interferes with a clean deployment (without requiring user intervention), and they are not typically used in cloud environments. -Click [here](../pages-for-subheaders/about-provisioning-drivers.md#node-drivers) for more information on node drivers in Rancher. +See [Provisioning Drivers](../../pages-for-subheaders/about-provisioning-drivers.md#node-drivers) for more information on node drivers in Rancher. ### Port Requirements @@ -40,13 +36,3 @@ In addition, other networking considerations are as follows: - Follow the networking setup guidance [here](https://docs.harvesterhci.io/v1.1/networking/index). For other port requirements for other guest clusters, such as K3s and RKE1, please see [these docs](https://docs.harvesterhci.io/v1.1/install/requirements/#guest-clusters). - -### Limitations - ---- -**Applicable to Rancher v2.6.1 and v2.6.2 only:** - -- Harvester v0.3.0 doesn’t support air-gapped environment installation. -- Harvester v0.3.0 doesn’t support upgrade from v0.2.0 nor upgrade to v1.0.0. - ---- \ No newline at end of file diff --git a/docs/integrations-in-rancher/integrations-in-rancher.mdx b/docs/integrations-in-rancher/integrations-in-rancher.mdx new file mode 100644 index 000000000000..f8420b327522 --- /dev/null +++ b/docs/integrations-in-rancher/integrations-in-rancher.mdx @@ -0,0 +1,66 @@ +--- +title: Integrations in Rancher +--- +import {Card, CardSection} from '@site/src/components/CardComponents'; +import { + ReadingModeMobileRegular, + QuestionRegular, + ArrowUpRegular, + PlayRegular, + FlowchartRegular, + RocketRegular +} from '@fluentui/react-icons'; +import { FaAws, FaGoogle, FaCloud, FaServer, FaGear } from "react-icons/fa6"; +import HarvesterIcon from '@site/static/img/harvester_logo_horizontal.svg'; + +Prime is the Rancher ecosystem’s enterprise offering, with additional security, extended lifecycles, and access to Prime-exclusive documentation. Rancher Prime installation assets are hosted on a trusted SUSE registry, owned and managed by Rancher. The trusted Prime registry includes only stable releases that have been community-tested. + +Prime also offers options for production support, as well as add-ons to your subscription tailored to your commercial needs. + +To learn more and get started with Rancher Prime, please visit [this page](https://www.rancher.com/quick-start). + +} +> + + + + + + + + + + + diff --git a/docs/integrations-in-rancher/kubernetes-distributions/kubernetes-distributions.md b/docs/integrations-in-rancher/kubernetes-distributions/kubernetes-distributions.md new file mode 100644 index 000000000000..b8e6a48e3252 --- /dev/null +++ b/docs/integrations-in-rancher/kubernetes-distributions/kubernetes-distributions.md @@ -0,0 +1,31 @@ +--- +title: Kubernetes Distributions +--- + +## K3s + +K3s is a lightweight, fully compliant Kubernetes distribution designed for a range of use cases, including edge computing, IoT, CI/CD, development, and embedding Kubernetes into applications. It simplifies Kubernetes management by packaging the system as a single binary, using sqlite3 as the default storage, and offering a user-friendly launcher. K3s includes essential features like local storage, load balancing, a Helm chart controller, and the Traefik ingress controller. It minimizes external dependencies and provides a streamlined Kubernetes experience. K3s was donated to the CNCF as a Sandbox Project in June 2020.
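To give a feel for how lightweight K3s configuration is, below is a hedged sketch of a K3s server configuration file; the hostname, node label, and the choice to disable the packaged ingress controller are assumptions for illustration.

```yaml
# /etc/rancher/k3s/config.yaml -- read by the K3s server at startup.
# All values below are example assumptions, not required settings.
write-kubeconfig-mode: "0644"     # make the generated kubeconfig world-readable
tls-san:
  - k3s.example.com               # extra SAN for the serving certificate
node-label:
  - environment=dev               # label applied to this node at registration
disable:
  - traefik                       # skip the packaged ingress controller
```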
+ +### K3s with Rancher + +- Rancher allows easy provisioning of K3s across a range of platforms including Amazon EC2, DigitalOcean, Azure, vSphere, or existing servers. +- Standard Rancher management of Kubernetes clusters including all outlined [cluster management capabilities](../../pages-for-subheaders/kubernetes-clusters-in-rancher-setup.md#cluster-management-capabilities-by-cluster-type). + + +## RKE2 + +RKE2 is a compliant Kubernetes distribution developed by Rancher. It is specifically designed for security and compliance within the U.S. Federal Government sector. + +Primary characteristics of RKE2 include: + +1. **Security and Compliance Focus**: RKE2 places a strong emphasis on security and compliance, operating under a "secure by default" framework, making it suitable for government services and highly regulated industries like finance and healthcare. +1. **CIS Kubernetes Benchmark Conformance**: RKE2 comes pre-configured to meet the CIS Kubernetes Hardening Benchmark (currently supporting v1.23 and v1.7), with minimal manual intervention required. +1. **FIPS 140-2 Compliance**: RKE2 complies with the FIPS 140-2 standard using FIPS-validated crypto modules for its components. +1. **Embedded etcd**: RKE2 defaults to using an embedded etcd as its data store. This aligns it more closely with standard Kubernetes practices, allowing better integration with other Kubernetes tools and reducing the risk of misconfiguration. +1. **Alignment with Upstream Kubernetes**: RKE2 aims to stay closely aligned with upstream Kubernetes, reducing the risk of non-conformance that may occur when using distributions that deviate from standard Kubernetes practices. +1. **Multiple CNI Support**: RKE2 offers support for multiple Container Network Interface (CNI) plugins, including Cilium, Calico, and Multus. This is essential for use cases such as telco distribution centers and factories with various production facilities. + +### RKE2 with Rancher + +- Rancher allows easy provisioning of RKE2 across a range of platforms including Amazon EC2, DigitalOcean, Azure, vSphere, or existing servers. +- Standard Rancher management of Kubernetes clusters including all outlined [cluster management capabilities](../../pages-for-subheaders/kubernetes-clusters-in-rancher-setup.md#cluster-management-capabilities-by-cluster-type). diff --git a/docs/integrations-in-rancher/kubewarden/kubewarden.md b/docs/integrations-in-rancher/kubewarden/kubewarden.md new file mode 100644 index 000000000000..7a6ee36b3089 --- /dev/null +++ b/docs/integrations-in-rancher/kubewarden/kubewarden.md @@ -0,0 +1,35 @@ +--- +title: Advanced Policy Management with Kubewarden +--- + + + + + +Kubewarden is a Policy Engine that secures and helps manage your cluster resources. It allows for validation and mutation of resource requests via policies, including context-aware policies and verifying image signatures. It can run policies in monitor or enforcing mode and provides an overview of the state of the cluster. + +Kubewarden aims to be the Universal Policy Engine by enabling and simplifying Policy as Code. Kubewarden policies are compiled into WebAssembly: they are small (roughly 400 KB to 2 MB), sandboxed, secure, and portable. It aims to be universal by catering to each persona in your organization: + +- Policy User: manage and declare policies using Kubernetes Custom Resources, reuse existing policies written in Rego (OPA and Gatekeeper). Test the policies outside the cluster in CI/CD.
+- Policy Developer: write policies in your preferred Wasm-compiling language (Rego, Go, Rust, C#, Swift, TypeScript, and more to come). Reuse the ecosystem of tools, libraries, and workflows you already know. +- Policy Distributor: policies are OCI artifacts, serve them through your OCI repository and use industry standards in your infrastructure, like Software-Bill-Of-Materials and artifact signatures. +- Cluster Operator: Kubewarden is modular (OCI registry, PolicyServers, Audit Scanner, Controller). Configure your deployment to suit your needs, segregating different tenants. Get an overview of past, current, and possible violations across the cluster with the Audit Scanner and the PolicyReports. +- Kubewarden Integrator: use it as a platform to write new Kubewarden modules and custom policies. + +## Kubewarden with Rancher + +Kubewarden’s upstream Helm charts are fully integrated as Rancher Apps, providing a UI for the install options. The charts also come with defaults that respect the Rancher stack (for example: not policing Rancher system namespaces), and default PolicyServer and Policies. Users have access to all Kubewarden features and can deploy PolicyServers and Policies manually by interacting with the Kubernetes API (e.g., using kubectl). + +Kubewarden provides a full replacement for the removed Kubernetes Pod Security Policies. Kubewarden also integrates with the newer Pod Security Admission feature in recent Kubernetes versions, augmenting its security capabilities. + +## Kubewarden with Rancher Prime + +The Rancher UI Extension for Kubewarden integrates Kubewarden into the Rancher UI. The UI Extension automates the installation and configuration of the Kubewarden stack and configures access to the policies maintained by SUSE. The UI Extension provides access to a curated catalog of ready-to-use policies. Using the UI Extension, one can browse, install, and configure these policies. + +The UI Extension provides an overview of the Kubewarden stack components and their behavior. This includes access to the Kubewarden metrics and trace events. An operator can understand the impact of policies on the cluster and troubleshoot issues. + +In addition, the UI Extension provides the Policy Reporter UI, which gives a visual overview of the compliance status of the Kubernetes cluster. With this UI, an operator can quickly identify all non-compliant Kubernetes resources, understand the reasons for violations, and act accordingly, all backed by the support offering of Rancher Prime. + + + \ No newline at end of file diff --git a/docs/integrations-in-rancher/longhorn/longhorn.md b/docs/integrations-in-rancher/longhorn/longhorn.md new file mode 100644 index 000000000000..e6f656809321 --- /dev/null +++ b/docs/integrations-in-rancher/longhorn/longhorn.md @@ -0,0 +1,15 @@ +--- +title: Cloud Native Storage with Longhorn +--- + + + + + +## Longhorn + +Longhorn is an official [Cloud Native Computing Foundation (CNCF)](https://cncf.io/) project that delivers a powerful cloud-native distributed storage platform for Kubernetes that can run anywhere. When combined with Rancher, Longhorn makes the deployment of highly available persistent block storage in your Kubernetes environment easy, fast, and reliable.
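As a rough sketch of how workloads consume Longhorn storage, the StorageClass below asks the Longhorn CSI driver to keep three replicas of each volume; the class name and parameter values are assumptions for illustration.

```yaml
# A minimal Longhorn StorageClass sketch; name and parameters are assumptions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-replicated
provisioner: driver.longhorn.io    # Longhorn's CSI provisioner
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  numberOfReplicas: "3"            # replicas kept across nodes for durability
  staleReplicaTimeout: "30"        # minutes before a failed replica is removed
```

A PersistentVolumeClaim that references this class gets a replicated Longhorn volume provisioned on demand.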
+
+## Longhorn with Rancher
+
+With Rancher Prime and Longhorn, users can deploy Longhorn with one click from the Rancher catalog and manage its lifecycle on managed clusters, including installation, upgrades, and drain operations for graceful maintenance. Longhorn with Rancher also provides support for mixed clusters with Windows nodes, Rancher-hosted images, UI Proxy access through Rancher, and Rancher monitoring with Longhorn metrics.
diff --git a/docs/integrations-in-rancher/longhorn.md b/docs/integrations-in-rancher/longhorn/overview.md
similarity index 96%
rename from docs/integrations-in-rancher/longhorn.md
rename to docs/integrations-in-rancher/longhorn/overview.md
index d9daf5421a5a..db7e4a620761 100644
--- a/docs/integrations-in-rancher/longhorn.md
+++ b/docs/integrations-in-rancher/longhorn/overview.md
@@ -1,9 +1,9 @@
 ---
-title: Longhorn - Cloud native distributed block storage for Kubernetes
+title: Overview
 ---
 
-
+
 
 [Longhorn](https://longhorn.io/) is a lightweight, reliable, and easy-to-use distributed block storage system for Kubernetes.
diff --git a/docs/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md b/docs/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md
index 0719a2be2362..9125e8536b18 100644
--- a/docs/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md
+++ b/docs/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards.md
@@ -118,4 +118,4 @@ For more information on configuring PrometheusRules in Rancher, see [this page.]
 
 ## Legacy UI
 
-For information on the dashboards available in v2.2 to v2.4 of Rancher, before the introduction of the `rancher-monitoring` application, see the [Rancher v2.0—v2.4 docs](../../../versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/cluster-monitoring/viewing-metrics.md).
+For information on the dashboards available in v2.2 to v2.4 of Rancher, before the introduction of the `rancher-monitoring` application, see the [Rancher v2.0—v2.4 docs](/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/cluster-monitoring/viewing-metrics.md).
diff --git a/docs/integrations-in-rancher/neuvector/neuvector.md b/docs/integrations-in-rancher/neuvector/neuvector.md
new file mode 100644
index 000000000000..4d823368dee6
--- /dev/null
+++ b/docs/integrations-in-rancher/neuvector/neuvector.md
@@ -0,0 +1,27 @@
+---
+title: Container Security with NeuVector
+---
+
+
+
+
+
+NeuVector is the only 100% open-source, Zero Trust container security platform. It scans continuously throughout the container lifecycle, removes security roadblocks, and bakes security policies in at the start to maximize developer agility. NeuVector provides vulnerability and compliance scanning and management from build to production. NeuVector's unique run-time protection secures network connections within the cluster, as well as ingress and egress, with a Layer 7 container firewall. Additionally, NeuVector monitors process and file activity in containers and on hosts to stop unauthorized activity.
+
+## NeuVector with Rancher
+
+All NeuVector features are available through Rancher, with integrated deployment and single sign-on to the NeuVector console. Rancher cluster admins can deploy and manage the NeuVector deployment on their clusters and easily configure NeuVector through Helm values, ConfigMaps, custom resource definitions (CRDs), and the NeuVector console.
+
+With NeuVector and Rancher:
+
+- Deploy, manage, and secure multiple clusters. 
+- Manage and report vulnerabilities and compliance results for Rancher workloads and nodes.
+
+## NeuVector Prime with Rancher Prime
+
+The NeuVector UI Extension for Rancher Manager is available and supported for Rancher Prime and NeuVector Prime customers. This extension provides:
+
+- Automated deployment of NeuVector, including the Rancher Prime NeuVector Extension dashboard.
+- Access to important security information from each cluster, such as critical security events, vulnerability scan results, and ingress/egress exposures.
+- Integrated vulnerability (CVE) and compliance scan results directly in Rancher resources such as nodes and containers/pods.
+- Integrated actions such as manual triggers of scans on Rancher resources.
diff --git a/docs/integrations-in-rancher/neuvector.md b/docs/integrations-in-rancher/neuvector/overview.md
similarity index 98%
rename from docs/integrations-in-rancher/neuvector.md
rename to docs/integrations-in-rancher/neuvector/overview.md
index fbc5eccec1c5..e2701265fc6f 100644
--- a/docs/integrations-in-rancher/neuvector.md
+++ b/docs/integrations-in-rancher/neuvector/overview.md
@@ -1,14 +1,14 @@
 ---
-title: NeuVector Integration
+title: Overview
 ---
 
-
+
 
 ### NeuVector Integration in Rancher
 
-[NeuVector 5.x](https://open-docs.neuvector.com/) is an open-source container-centric security platform that is integrated with Rancher. NeuVector offers real-time compliance, visibility, and protection for critical applications and data during runtime. NeuVector provides a firewall, container process/file system monitoring, security auditing with CIS benchmarks, and vulnerability scanning. For more information on Rancher security, please see the [security documentation](../pages-for-subheaders/rancher-security.md).
+[NeuVector 5.x](https://open-docs.neuvector.com/) is an open-source container-centric security platform that is integrated with Rancher. NeuVector offers real-time compliance, visibility, and protection for critical applications and data during runtime. NeuVector provides a firewall, container process/file system monitoring, security auditing with CIS benchmarks, and vulnerability scanning. For more information on Rancher security, please see the [security documentation](../../pages-for-subheaders/rancher-security.md).
 
 NeuVector can be enabled through a Helm chart that may be installed either through **Apps** or through the **Cluster Tools** button in the Rancher UI. Once the Helm chart is installed, users can easily [deploy and manage NeuVector clusters within Rancher](https://open-docs.neuvector.com/deploying/rancher#deploy-and-manage-neuvector-through-rancher-apps-marketplace).
diff --git a/docs/integrations-in-rancher/opni/opni.md b/docs/integrations-in-rancher/opni/opni.md
new file mode 100644
index 000000000000..96ec3eb73364
--- /dev/null
+++ b/docs/integrations-in-rancher/opni/opni.md
@@ -0,0 +1,23 @@
+---
+title: Observability with Opni
+---
+
+
+
+
+
+Opni is a multi-cluster and multi-tenant observability platform. Purpose-built on Kubernetes, Opni simplifies the process of creating and managing backends, agents, and data related to logging, monitoring, and tracing. With built-in AIOps, Opni allows users to swiftly detect anomalous activities in their data.
+
+Opni components work together to provide a comprehensive observability platform. Key components include:
+
+- Observability Backends: Opni Logging enhances OpenSearch for easy searching, visualization, and analysis of logs, traces, and Kubernetes events. 
Opni Monitoring extends Cortex for multi-cluster, long-term storage of Prometheus metrics.
+- Observability Agents: Agents are software that collects observability data (logs, metrics, traces, and events) from its host and sends it to an observability backend. The Opni agent enables collection of logs, Kubernetes events, OpenTelemetry traces, and Prometheus metrics.
+- AIOps: Applies AI and machine learning to IT and observability data. Opni AIOps features include log anomaly detection using pretrained models for the Kubernetes control plane, Rancher, and Longhorn.
+- Alerting and SLOs: Triggers and reliability targets for services let you use Opni data to make informed decisions about software operations.
+
+## Opni with Rancher
+
+Opni’s Helm charts are currently maintained in a charts-specific branch of the Opni GitHub project. Once this branch is added as a repository in Rancher, the Opni installation can be performed through the Rancher UI. Efforts are now underway to streamline this process by including these charts directly within Rancher itself, and offering Opni as a fully integrated Rancher App.
+
+Opni’s log anomaly detection process includes purpose-built, pre-trained models for RKE2, K3s, Longhorn, and Rancher agent logs. This advanced modeling ensures first-class support for log anomaly detection for the core suite of Rancher products.
+
diff --git a/docs/integrations-in-rancher/rancher-desktop.md b/docs/integrations-in-rancher/rancher-desktop.md
new file mode 100644
index 000000000000..a790e814c0e2
--- /dev/null
+++ b/docs/integrations-in-rancher/rancher-desktop.md
@@ -0,0 +1,34 @@
+---
+title: Kubernetes on the Desktop with Rancher Desktop
+---
+
+
+
+
+
+
+Rancher Desktop bundles together essential tools for developing and testing cloud-native applications from your desktop.
+
+If you're working from your local machine on apps intended for cloud environments, you normally need a lot of preparation. You need to select a container runtime, install Kubernetes and popular utilities, and possibly set up a virtual machine. Installing components individually and getting them to work together can be a time-consuming process.
+
+To reduce the complexity, Rancher Desktop offers teams the following key features:
+
+- Simple and easy installation on macOS, Linux, and Windows operating systems.
+- K3s, a ready-to-use, lightweight Kubernetes distribution.
+- The ability to easily switch between Kubernetes versions.
+- A GUI-based cluster dashboard powered by Rancher to explore your local cluster.
+- Freedom to choose your container engine: dockerd (moby) or containerd.
+- Preference settings to configure the application to suit your needs.
+- Bundled tools required for container workflows, Kubernetes-based development, and operations workflows.
+- Periodic updates to keep bundled tools up to date.
+- Integration with popular tools/IDEs, including VS Code and Skaffold.
+- Image and registry access control.
+- Support for Docker extensions.
+
+Visit the [Rancher Desktop](https://rancherdesktop.io) website and read the [docs](https://docs.rancherdesktop.io/) to learn more.
+
+To install Rancher Desktop on your machine, refer to the [installation guide](https://docs.rancherdesktop.io/getting-started/installation).
+
+## Trying Rancher on Rancher Desktop
+
+Rancher Desktop offers the setup and tools you need to easily try out containerized, Helm-based applications. 
You can get started with the Rancher Kubernetes Management platform using Rancher Desktop by following this [how-to guide](https://docs.rancherdesktop.io/how-to-guides/rancher-on-rancher-desktop).
diff --git a/docs/pages-for-subheaders/about-provisioning-drivers.md b/docs/pages-for-subheaders/about-provisioning-drivers.md
index 812197b3b3f8..1e129210c4bc 100644
--- a/docs/pages-for-subheaders/about-provisioning-drivers.md
+++ b/docs/pages-for-subheaders/about-provisioning-drivers.md
@@ -48,4 +48,4 @@ Rancher supports several major cloud providers, but by default, these node drive
 
 There are several other node drivers that are disabled by default, but are packaged in Rancher:
 
-* [Harvester](../integrations-in-rancher/harvester.md#harvester-node-driver/), available in Rancher v2.6.1
+* [Harvester](../integrations-in-rancher/harvester/overview.md#harvester-node-driver), available as of Rancher v2.6.1
diff --git a/docs/pages-for-subheaders/create-kubernetes-persistent-storage.md b/docs/pages-for-subheaders/create-kubernetes-persistent-storage.md
index 6bda26af36e7..58fcb58c1cb6 100644
--- a/docs/pages-for-subheaders/create-kubernetes-persistent-storage.md
+++ b/docs/pages-for-subheaders/create-kubernetes-persistent-storage.md
@@ -50,7 +50,7 @@ Longhorn is free, open source software. Originally developed by Rancher Labs, it
 
 If you have a pool of block storage, Longhorn can help you provide persistent storage to your Kubernetes cluster without relying on cloud providers. For more information about Longhorn features, refer to the [documentation.](https://longhorn.io/docs/latest/what-is-longhorn/)
 
-Rancher v2.5 simplified the process of installing Longhorn on a Rancher-managed cluster. For more information, see [this page.](../integrations-in-rancher/longhorn.md)
+Rancher v2.5 simplified the process of installing Longhorn on a Rancher-managed cluster. For more information, see [Cloud Native Storage with Longhorn](../integrations-in-rancher/longhorn/longhorn.md).
diff --git a/docs/pages-for-subheaders/rancher-security.md b/docs/pages-for-subheaders/rancher-security.md
index 67c496fe24dc..81c0c40da247 100644
--- a/docs/pages-for-subheaders/rancher-security.md
+++ b/docs/pages-for-subheaders/rancher-security.md
@@ -29,7 +29,7 @@ On this page, we provide security related documentation along with resources to
 
 ### NeuVector Integration with Rancher
 
-NeuVector is an open-source, container-focused security application that is now integrated into Rancher. NeuVector provides production security, DevOps vulnerability protection, and a container firewall, et al. Please see the [Rancher docs](../integrations-in-rancher/neuvector.md) and the [NeuVector docs](https://open-docs.neuvector.com/) for more information.
+NeuVector is an open-source, container-focused security application that is now integrated into Rancher. NeuVector provides production security, DevOps vulnerability protection, a container firewall, and more. Please see the [Rancher docs](../integrations-in-rancher/neuvector/neuvector.md) and the [NeuVector docs](https://open-docs.neuvector.com/) for more information.
### Running a CIS Security Scan on a Kubernetes Cluster diff --git a/docs/reference-guides/rancher-security/rancher-webhook-hardening.md b/docs/reference-guides/rancher-security/rancher-webhook-hardening.md new file mode 100644 index 000000000000..0362deecc5cd --- /dev/null +++ b/docs/reference-guides/rancher-security/rancher-webhook-hardening.md @@ -0,0 +1,133 @@ +--- +title: Hardening the Rancher Webhook +--- + +Rancher Webhook is an important component within Rancher, playing a role in enforcing security requirements for Rancher and its workloads. To decrease its attack surface, access to it should be limited to the only valid caller it has: the Kubernetes API server. This can be done by using network policies and authentication independently or in conjunction with each other to harden the webhook against attacks. + +## Block External Traffic Using Network Policies + +The webhook is only expected to accept requests from the Kubernetes API server. By default, however, the webhook can accept traffic from any source. If you are using a CNI that supports Network Policies, you can create a policy that blocks traffic that doesn't originate from the API server. + +The built-in NetworkPolicy resource in Kubernetes can't block or admit traffic from the cluster hosts, and the `kube-apiserver` process is always running on the host network. Therefore, you must use the advanced network policy resources from the CNI in use. Examples for Calico and Cilium follow. Consult the documentation for your CNI for more details. + +### Calico + +Use the NetworkPolicy resource in the `crd.projectcalico.org/v1` API group. Use the selector `app == 'rancher-webhook'` to create a rule for the webhook, and set the CIDR of the control plane hosts as the ingress source: + +```yaml +apiVersion: crd.projectcalico.org/v1 +kind: NetworkPolicy +metadata: + name: allow-k8s + namespace: cattle-system +spec: + selector: app == 'rancher-webhook' + types: + - Ingress + ingress: + - action: Allow + protocol: TCP + source: + nets: + - 192.168.42.0/24 # CIDR of the control plane host. May list more than 1 if the hosts are in different subnets. + destination: + selector: + app == 'rancher-webhook' +``` + +### Cilium + +Use the CiliumNetworkPolicy resource in the `cilium.io/v2` API group. Add the `host` and `remote-node` keys to the `fromEntities` ingress rule. This blocks in-cluster and external traffic while allowing traffic from the hosts. + +```yaml +apiVersion: "cilium.io/v2" +kind: CiliumNetworkPolicy +metadata: + name: allow-k8s + namespace: cattle-system +spec: + endpointSelector: + matchLabels: + app: rancher-webhook + ingress: + - fromEntities: + - host + - remote-node +``` + +## Require the Kubernetes API Server to Authenticate to the Webhook + +The webhook should only accept requests from the Kubernetes API server. By default, the webhook doesn't require clients to authenticate to it. It will accept any request. You can configure the webhook to require credentials so that only the API server can access it. More information can be found in the [Kubernetes documentation](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#authenticate-apiservers). + +1. 
Configure the API server to present a client certificate to the webhook, pointing to an AdmissionConfiguration file to configure the ValidatingAdmissionWebhook and MutatingAdmissionWebhook plugins:
+
+   ```yaml
+   # /etc/rancher/admission/admission.yaml
+   apiVersion: apiserver.config.k8s.io/v1
+   kind: AdmissionConfiguration
+   plugins:
+   - name: ValidatingAdmissionWebhook
+     configuration:
+       apiVersion: apiserver.config.k8s.io/v1
+       kind: WebhookAdmissionConfiguration
+       kubeConfigFile: "/etc/rancher/admission/kubeconfig"
+   - name: MutatingAdmissionWebhook
+     configuration:
+       apiVersion: apiserver.config.k8s.io/v1
+       kind: WebhookAdmissionConfiguration
+       kubeConfigFile: "/etc/rancher/admission/kubeconfig"
+   ```
+
+   This is also the same config file where other admission plugins are configured, such as PodSecurity. If your distro or your setup uses additional admission plugins, configure those as well. For example, add [RKE2's PodSecurity configuration](https://docs.rke2.io/security/pod_security_standards) to this file.
+
+2. Create the kubeconfig file that the admission plugins refer to. Rancher Webhook only supports client certificate authentication, so generate a TLS key pair, and set the kubeconfig to use either `client-certificate` and `client-key` or `client-certificate-data` and `client-key-data`. For example:
+
+   ```yaml
+   # /etc/rancher/admission/kubeconfig
+   apiVersion: v1
+   kind: Config
+   users:
+   - name: 'rancher-webhook.cattle-system.svc'
+     user:
+       client-certificate: /path/to/client/cert
+       client-key: /path/to/client/key
+   ```
+
+3. Start the kube-apiserver binary with the flag `--admission-control-config-file` pointing to your AdmissionConfiguration file. The way to do this varies by distro, and it isn't supported universally (for example, in hosted Kubernetes providers). Consult the documentation for your Kubernetes distribution.
+
+   For RKE2, `rke2-server` can be started with a config file like so:
+
+   ```yaml
+   # /etc/rancher/rke2/config.yaml
+   kube-apiserver-arg:
+   - admission-control-config-file=/etc/rancher/admission/admission.yaml
+   kube-apiserver-extra-mount:
+   - /etc/rancher/admission:/etc/rancher/admission:ro
+   ```
+
+   :::danger
+   Some distros set this flag by default. If your distro provisions its own AdmissionConfiguration, you must include it in your custom admission control config file. For example, RKE2 installs an AdmissionConfiguration file at `/etc/rancher/rke2/rke2-pss.yaml`, which configures the PodSecurity admission plugin. Setting `admission-control-config-file` in `config.yaml` will override this essential security setting. To include both plugins, consult [the Default Pod Security Standards documentation](https://docs.rke2.io/security/pod_security_standards) and copy the appropriate plugin configuration to your `admission.yaml`.
+   :::
+
+4. If you're using Rancher to provision your cluster using existing nodes, create these files on the nodes before you provision them.
+
+   If you're using Rancher to provision your cluster on new nodes, allow the provisioning to complete, then use the provided SSH key and IP address to connect to the nodes, and place the RKE2 config file in the `/etc/rancher/rke2/config.yaml.d/` directory.
+
+5. After the cluster is configured with these credentials, configure the Rancher cluster agent to enable authentication in the webhook. Create a file containing these chart values (the angle-bracket values are placeholders for your CA certificate and the allowed client certificate CNs):
+
+   ```yaml
+   # values.yaml
+   auth:
+     clientCA: <base64-encoded CA certificate>
+     allowedCNs:
+       - <CN of an allowed client certificate>
+       - <CN of another allowed client certificate>
+   ```
+
+6. 
Create a configmap in the `cattle-system` namespace on the provisioned cluster with these values: + + ``` + kubectl --namespace cattle-system create configmap rancher-config --from-file=rancher-webhook=values.yaml + ``` + + The webhook will restart with these values. diff --git a/docusaurus.config.js b/docusaurus.config.js index 6db813113de7..f398240e0098 100644 --- a/docusaurus.config.js +++ b/docusaurus.config.js @@ -646,15 +646,15 @@ module.exports = { from: '/explanations/integrations-in-rancher/cis-scans/custom-benchmark' }, { - to: '/integrations-in-rancher/fleet-gitops-at-scale/architecture', + to: '/integrations-in-rancher/fleet/architecture', from: '/explanations/integrations-in-rancher/fleet-gitops-at-scale/architecture' }, { - to: '/integrations-in-rancher/fleet-gitops-at-scale/windows-support', + to: '/integrations-in-rancher/fleet/windows-support', from: '/explanations/integrations-in-rancher/fleet-gitops-at-scale/windows-support' }, { - to: '/integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy', + to: '/integrations-in-rancher/fleet/use-fleet-behind-a-proxy', from: '/explanations/integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy' }, { diff --git a/sidebars.js b/sidebars.js index ce3687e759b7..8aed9ca6eced 100644 --- a/sidebars.js +++ b/sidebars.js @@ -1118,14 +1118,72 @@ const sidebars = { "reference-guides/rancher-security/rancher-security-best-practices", "reference-guides/rancher-security/security-advisories-and-cves", "reference-guides/rancher-security/psa-restricted-exemptions", + "reference-guides/rancher-security/rancher-webhook-hardening" ], } ] }, { - type: 'category', - label: 'Integrations in Rancher', - items: [ + "type": "category", + "label": "Integrations in Rancher", + "link": { + "type": "doc", + "id": "integrations-in-rancher/integrations-in-rancher" + }, + "items": [ + "integrations-in-rancher/kubernetes-distributions/kubernetes-distributions", + { + "type": "category", + "label": "Virtualization on Kubernetes with Harvester", + "link": { + "type": "doc", + "id": "integrations-in-rancher/harvester/harvester" + }, + "items": [ + "integrations-in-rancher/harvester/overview" + ] + }, + { + "type": "category", + "label": "Cloud Native Storage with Longhorn", + "link": { + "type": "doc", + "id": "integrations-in-rancher/longhorn/longhorn" + }, + "items": [ + "integrations-in-rancher/longhorn/overview" + ] + }, + { + "type": "category", + "label": "Container Security with Neuvector", + "link": { + "type": "doc", + "id": "integrations-in-rancher/neuvector/neuvector" + }, + "items": [ + "integrations-in-rancher/neuvector/overview" + ] + }, + "integrations-in-rancher/kubewarden/kubewarden", + "integrations-in-rancher/elemental/elemental", + "integrations-in-rancher/opni/opni", + { + "type": "category", + "label": "Continuous Delivery with Fleet", + "link": { + "type": "doc", + "id": "integrations-in-rancher/fleet/fleet" + }, + "items": [ + "integrations-in-rancher/fleet/overview", + "integrations-in-rancher/fleet/architecture", + "integrations-in-rancher/fleet/windows-support", + "integrations-in-rancher/fleet/use-fleet-behind-a-proxy" + ] + }, + "integrations-in-rancher/rancher-desktop", + "integrations-in-rancher/epinio/epinio", { type: 'category', label: 'Cloud Marketplace Integration', @@ -1165,20 +1223,6 @@ const sidebars = { "integrations-in-rancher/cis-scans/custom-benchmark", ], }, - { - type: 'category', - label: 'Continuous Delivery with Fleet', - link: { - type: 'doc', - id: 
"pages-for-subheaders/fleet-gitops-at-scale", - }, - items: [ - "integrations-in-rancher/fleet-gitops-at-scale/architecture", - "integrations-in-rancher/fleet-gitops-at-scale/windows-support", - "integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy", - ] - }, - "integrations-in-rancher/harvester", { type: 'category', label: 'Istio', @@ -1206,7 +1250,6 @@ const sidebars = { } ] }, - "integrations-in-rancher/longhorn", { type: 'category', label: 'Logging', @@ -1248,10 +1291,7 @@ const sidebars = { "integrations-in-rancher/monitoring-and-alerting/promql-expressions", ] }, - "integrations-in-rancher/neuvector", - "integrations-in-rancher/opa-gatekeeper", - "integrations-in-rancher/rancher-extensions", ] }, @@ -1305,6 +1345,21 @@ const sidebars = { } ] }, + { + "type": "category", + "label": "Rancher Kubernetes API", + "items": [ + "api/quickstart", + { + "type": "category", + "label": "Example Workflows", + "items": [ + "api/workflows/projects" + ] + }, + "api/api-reference" + ] + }, "contribute-to-rancher", ] } From 3d8918839e5ed4cda0800ad39156995d5cc6132a Mon Sep 17 00:00:00 2001 From: Marty Hernandez Avedon Date: Wed, 6 Dec 2023 14:48:47 -0500 Subject: [PATCH 51/65] #773: Add steps to install Rancher Extensions in an air-gapped environment (#807) * Add steps to install Rancher Extensions in an air-gapped environment * added link from install guide * added notes to import/install steps as suggested by rohitsakala * minor copyedits * updating extensions explicitly given own section * tightened up note * updated note for v2.7 * ensuring all 3 versions have same text * versioning install-rancher-ha * del extension repos + (partially) upgrade * fixed comment syntax * Apply suggestions from code review Co-authored-by: Billy Tat * Apply suggestions from code review * completed update instructions * no dropdown, text input * more explanation + delete * Apply suggestions from code review * extensions repo container image delete * versioning * revert changes to v2.7 --------- Co-authored-by: Billy Tat --- .../install-rancher-ha.md | 1 + .../rancher-extensions.md | 74 ++++++++++++++++--- .../install-rancher-ha.md | 1 + .../rancher-extensions.md | 74 ++++++++++++++++--- 4 files changed, 132 insertions(+), 18 deletions(-) diff --git a/docs/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/install-rancher-ha.md b/docs/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/install-rancher-ha.md index d09b5eca604c..99c4332b6339 100644 --- a/docs/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/install-rancher-ha.md +++ b/docs/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/install-rancher-ha.md @@ -245,6 +245,7 @@ If you don't intend to send telemetry data, opt out [telemetry](../../../../faq/ These resources could be helpful when installing Rancher: +- [Importing and installing extensions in an air-gapped environment](../../../../integrations-in-rancher/rancher-extensions.md#importing-and-installing-extensions-in-an-air-gapped-environment) - [Rancher Helm chart options](../../installation-references/helm-chart-options.md) - [Adding TLS secrets](../../resources/add-tls-secrets.md) - [Troubleshooting Rancher Kubernetes Installations](../../install-upgrade-on-a-kubernetes-cluster/troubleshooting.md) diff --git a/docs/integrations-in-rancher/rancher-extensions.md 
b/docs/integrations-in-rancher/rancher-extensions.md index 34929c4bf113..2c56082fc534 100644 --- a/docs/integrations-in-rancher/rancher-extensions.md +++ b/docs/integrations-in-rancher/rancher-extensions.md @@ -30,7 +30,7 @@ Examples of built-in Rancher extensions are Fleet, Explorer, and Harvester. Exam :::info -In v2.7.0, the built-in extensions will not be displayed under the **Available** tab. Therefore, you will need to manually add the desired repos to install extensions. We will update the community once these extensions have been pulled out to be available for selection. +In v2.7.0, the built-in extensions aren't displayed under the **Available** tab. Therefore, you'll need to manually add the desired repos to install extensions. :::
@@ -45,7 +45,7 @@ In v2.7.0, the built-in extensions will not be displayed under the **Available** ![Manage repositories](/img/manage-repos.png) -5. Under the **Available** tab, click **Install** on the desired extension and version as in the example below. Note that you can easily update your extension as the button to **Update** will appear on the extension if one is available. +5. Under the **Available** tab, click **Install** on the desired extension and version as in the example below. You can also update your extension from this screen, as the button to **Update** will appear on the extension if one is available. ![Install Kubewarden](/img/install-kubewarden.png) @@ -53,9 +53,33 @@ In v2.7.0, the built-in extensions will not be displayed under the **Available** ![Reload button](/img/reload-button.png) +### Importing and Installing Extensions in an Air-Gapped Environment + +1. Find the address of the container image repository that you want to import as an extension. Rancher provides some extensions, such as Kubewarden and Elemental, through the `ui-plugin-catalog` container image at https://hub.docker.com/r/rancher/ui-plugin-catalog/tags. You should import and use the latest tagged version of the image to ensure you receive the latest features and security updates. + + * **(Optional)** If the container image is private: [Create](../how-to-guides/new-user-guides/kubernetes-resources-setup/secrets.md) a registry secret within the `cattle-ui-plugin-system` namespace. Enter the domain of the image address in the **Registry Domain Name** field. + +1. Click **☰**, then select **Extensions**, under **Configuration**. + +1. On the top right, click **⋮ > Manage Extension Catalogs**. + +1. Select the **Import Extension Catalog** button. + +1. Enter the image address in the **Catalog Image Reference** field. + + * **(Optional)** If the container image is private: Select the secret you just created from the **Pull Secrets** drop-down menu. + +1. Click **Load**. The extension will now be **Pending**. + +1. Return to the **Extensions** page. + +1. Select the **Available** tab, and click the **Reload** button to make sure that the list of extensions is up to date. + +1. Find the extension you just added, and click the **Install** button. + ## Uninstalling Extensions -There are two ways in which you can uninstall or disable your extensions: +There are two ways to uninstall or disable an extension: 1. Under the **Installed** tab, click the **Uninstall** button on the extension you wish to remove. @@ -71,17 +95,49 @@ You must reload the page after disabling extensions or display issues may occur. ::: -## Rolling Back Extensions +## Updating and Upgrading Extensions -Under the **Installed** tab, click the **Rollback** button on the extension you wish to roll back. +1. Click **☰ > Extensions** under **Configuration**. +1. Select the **Updates** tab. +1. Click **Update**. -![Roll back extensions](/img/roll-back-extension.png) +If there is a new version of the extension, there will also be an **Update** button visible on the associated card for the extension in the **Available** tab. -:::caution +### Updating and Upgrading an Extensions Repository in an Air-gapped Environment -You must reload the page after rolling back extensions or display issues may occur. +Extensions repositories that aren't air-gapped are automatically updated. If the repository is air-gapped, you must update it manually. 
-::: +First, mirror the latest changes to your private registry by following the same steps for initially [importing and installing an extension repository](#importing-and-installing-extensions-in-an-air-gapped-environment). + +After you mirror the latest changes, follow these steps: + +1. Click **☰ > Local**. +1. From the sidebar, select **Workloads > Deployments**. +1. From the namespaces dropdown menu, select **cattle-ui-plugin-system**. +1. Find the **cattle-ui-plugin-system** namespace. +1. Select the `ui-plugin-catalog` deployment. +1. Click **⋮ > Edit config**. +1. Update the **Container Image** field within the deployment's container with the latest image. +1. Click **Save**. + +## Deleting Helm Charts + +1. Click **☰**, then click on the name of your local cluster. +1. From the sidebar, select **Apps > Installed Apps**. +1. Find the name of the chart you want to delete and select the checkbox next to it. +1. Click **Delete**. + +## Deleting Extension Repositories + +1. Click **☰ > Extensions** under **Configuration**. +1. On the top right, click **⋮ > Manage Repositories**. +1. Find the name of the extension repository you want to delete. Select the checkbox next to the repository name, then click **Delete**. + +## Deleting Extension Repository Container Images + +1. Click **☰**, then select **Extensions**, under **Configuration**. +1. On the top right, click **⋮ > Manage Extension Catalogs**. +1. Find the name of the container image you want to delete. Click **⋮ > Uninstall**. ## Developing Extensions diff --git a/versioned_docs/version-2.8/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/install-rancher-ha.md b/versioned_docs/version-2.8/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/install-rancher-ha.md index d09b5eca604c..99c4332b6339 100644 --- a/versioned_docs/version-2.8/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/install-rancher-ha.md +++ b/versioned_docs/version-2.8/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/install-rancher-ha.md @@ -245,6 +245,7 @@ If you don't intend to send telemetry data, opt out [telemetry](../../../../faq/ These resources could be helpful when installing Rancher: +- [Importing and installing extensions in an air-gapped environment](../../../../integrations-in-rancher/rancher-extensions.md#importing-and-installing-extensions-in-an-air-gapped-environment) - [Rancher Helm chart options](../../installation-references/helm-chart-options.md) - [Adding TLS secrets](../../resources/add-tls-secrets.md) - [Troubleshooting Rancher Kubernetes Installations](../../install-upgrade-on-a-kubernetes-cluster/troubleshooting.md) diff --git a/versioned_docs/version-2.8/integrations-in-rancher/rancher-extensions.md b/versioned_docs/version-2.8/integrations-in-rancher/rancher-extensions.md index 34929c4bf113..2c56082fc534 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/rancher-extensions.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/rancher-extensions.md @@ -30,7 +30,7 @@ Examples of built-in Rancher extensions are Fleet, Explorer, and Harvester. Exam :::info -In v2.7.0, the built-in extensions will not be displayed under the **Available** tab. Therefore, you will need to manually add the desired repos to install extensions. We will update the community once these extensions have been pulled out to be available for selection. 
+In v2.7.0, the built-in extensions aren't displayed under the **Available** tab. Therefore, you'll need to manually add the desired repos to install extensions. :::
@@ -45,7 +45,7 @@ In v2.7.0, the built-in extensions will not be displayed under the **Available** ![Manage repositories](/img/manage-repos.png) -5. Under the **Available** tab, click **Install** on the desired extension and version as in the example below. Note that you can easily update your extension as the button to **Update** will appear on the extension if one is available. +5. Under the **Available** tab, click **Install** on the desired extension and version as in the example below. You can also update your extension from this screen, as the button to **Update** will appear on the extension if one is available. ![Install Kubewarden](/img/install-kubewarden.png) @@ -53,9 +53,33 @@ In v2.7.0, the built-in extensions will not be displayed under the **Available** ![Reload button](/img/reload-button.png) +### Importing and Installing Extensions in an Air-Gapped Environment + +1. Find the address of the container image repository that you want to import as an extension. Rancher provides some extensions, such as Kubewarden and Elemental, through the `ui-plugin-catalog` container image at https://hub.docker.com/r/rancher/ui-plugin-catalog/tags. You should import and use the latest tagged version of the image to ensure you receive the latest features and security updates. + + * **(Optional)** If the container image is private: [Create](../how-to-guides/new-user-guides/kubernetes-resources-setup/secrets.md) a registry secret within the `cattle-ui-plugin-system` namespace. Enter the domain of the image address in the **Registry Domain Name** field. + +1. Click **☰**, then select **Extensions**, under **Configuration**. + +1. On the top right, click **⋮ > Manage Extension Catalogs**. + +1. Select the **Import Extension Catalog** button. + +1. Enter the image address in the **Catalog Image Reference** field. + + * **(Optional)** If the container image is private: Select the secret you just created from the **Pull Secrets** drop-down menu. + +1. Click **Load**. The extension will now be **Pending**. + +1. Return to the **Extensions** page. + +1. Select the **Available** tab, and click the **Reload** button to make sure that the list of extensions is up to date. + +1. Find the extension you just added, and click the **Install** button. + ## Uninstalling Extensions -There are two ways in which you can uninstall or disable your extensions: +There are two ways to uninstall or disable an extension: 1. Under the **Installed** tab, click the **Uninstall** button on the extension you wish to remove. @@ -71,17 +95,49 @@ You must reload the page after disabling extensions or display issues may occur. ::: -## Rolling Back Extensions +## Updating and Upgrading Extensions -Under the **Installed** tab, click the **Rollback** button on the extension you wish to roll back. +1. Click **☰ > Extensions** under **Configuration**. +1. Select the **Updates** tab. +1. Click **Update**. -![Roll back extensions](/img/roll-back-extension.png) +If there is a new version of the extension, there will also be an **Update** button visible on the associated card for the extension in the **Available** tab. -:::caution +### Updating and Upgrading an Extensions Repository in an Air-gapped Environment -You must reload the page after rolling back extensions or display issues may occur. +Extensions repositories that aren't air-gapped are automatically updated. If the repository is air-gapped, you must update it manually. 
-::: +First, mirror the latest changes to your private registry by following the same steps for initially [importing and installing an extension repository](#importing-and-installing-extensions-in-an-air-gapped-environment). + +After you mirror the latest changes, follow these steps: + +1. Click **☰ > Local**. +1. From the sidebar, select **Workloads > Deployments**. +1. From the namespaces dropdown menu, select **cattle-ui-plugin-system**. +1. Find the **cattle-ui-plugin-system** namespace. +1. Select the `ui-plugin-catalog` deployment. +1. Click **⋮ > Edit config**. +1. Update the **Container Image** field within the deployment's container with the latest image. +1. Click **Save**. + +## Deleting Helm Charts + +1. Click **☰**, then click on the name of your local cluster. +1. From the sidebar, select **Apps > Installed Apps**. +1. Find the name of the chart you want to delete and select the checkbox next to it. +1. Click **Delete**. + +## Deleting Extension Repositories + +1. Click **☰ > Extensions** under **Configuration**. +1. On the top right, click **⋮ > Manage Repositories**. +1. Find the name of the extension repository you want to delete. Select the checkbox next to the repository name, then click **Delete**. + +## Deleting Extension Repository Container Images + +1. Click **☰**, then select **Extensions**, under **Configuration**. +1. On the top right, click **⋮ > Manage Extension Catalogs**. +1. Find the name of the container image you want to delete. Click **⋮ > Uninstall**. ## Developing Extensions From 79ac8762750fe5e0165e97929b52a93c9bca2d9c Mon Sep 17 00:00:00 2001 From: Jake Hyde <33796120+jakefhyde@users.noreply.github.com> Date: Wed, 6 Dec 2023 12:49:45 -0700 Subject: [PATCH 52/65] Add aws out of tree cloud provider install/upgrade docs (#844) * Add aws out of tree cloud provider install/upgrade docs * Add aws out of tree cloud provider install/upgrade docs * Add info for aws cloud provider * indentation fix * Address review comments * addressing review comments * Address review comments * syntax annotations, re-org sections, copy edits * even more copy edits * copy edits to note at top * addressing suggestions from slickwarren * Address review comments * copyedits * Fix numbering * Update docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md * update helm installation steps * 2.8 versioning * rm 'new in 2.7' from 2.8 * Update versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md * revert -- change intended for other branch * typo fixes * fix headings, fix casing * apply prev commit to 2.8 * Reorganizing AWS migration pages (#1015) This partially addresses https://github.com/rancher/rancher-docs/issues/991 (rename file `migrating-from-in-tree-to-out-of-tree` to shorter and reference vsphere) and also fixes problems on the open PR: Duplicate sections (removed), difficulty navigating the file (split into two), sections with similar titles (opting for tabs instead of headings). I created this on its own working branch because moving around large blocks of text was unwieldly and I didn't want to mess up my local version of 763-document-aws-out-of-tree-v2prov. The last tab block (Helm Chart Installation through UI) contains contain that seems to be entirely the same for RKE and RKE2. 
--------- Co-authored-by: Kinara Shah Co-authored-by: martyav --- .../set-up-cloud-providers/amazon.md | 760 +++++++++++++++--- .../migrate-to-out-of-tree-amazon.md | 196 +++++ ...e.md => migrate-to-out-of-tree-vsphere.md} | 6 +- docusaurus.config.js | 5 +- sidebars.js | 3 +- .../set-up-cloud-providers/amazon.md | 760 +++++++++++++++--- .../migrate-to-out-of-tree-amazon.md | 196 +++++ ...e.md => migrate-to-out-of-tree-vsphere.md} | 6 +- versioned_sidebars/version-2.8-sidebars.json | 3 +- 9 files changed, 1744 insertions(+), 191 deletions(-) create mode 100644 docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-to-out-of-tree-amazon.md rename docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/{migrate-from-in-tree-to-out-of-tree.md => migrate-to-out-of-tree-vsphere.md} (97%) create mode 100644 versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-to-out-of-tree-amazon.md rename versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/{migrate-from-in-tree-to-out-of-tree.md => migrate-to-out-of-tree-vsphere.md} (97%) diff --git a/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md b/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md index 10700b418d4e..cf83a023df64 100644 --- a/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md +++ b/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md @@ -7,23 +7,27 @@ weight: 1 -When using the `Amazon` cloud provider, you can leverage the following capabilities: +:::note Important: -- **Load Balancers:** Launches an AWS Elastic Load Balancer (ELB) when choosing `Layer-4 Load Balancer` in **Port Mapping** or when launching a `Service` with `type: LoadBalancer`. -- **Persistent Volumes**: Allows you to use AWS Elastic Block Stores (EBS) for persistent volumes. +In Kubernetes 1.27 and later, you must use an out-of-tree AWS cloud provider. In-tree cloud providers have been deprecated. The Amazon cloud provider has been removed completely, and won't work after an upgrade to Kubernetes 1.27. The steps listed below are still required to set up an Amazon cloud provider. You can [set up an out-of-tree cloud provider for RKE](#using-the-out-of-tree-aws-cloud-provider-for-rke) after creating an IAM role and configuring the ClusterID. -See [cloud-provider-aws README](https://kubernetes.github.io/cloud-provider-aws/) for all information regarding the Amazon cloud provider. +You can also [migrate from an in-tree to an out-of-tree AWS cloud provider](./migrate-to-out-of-tree-amazon.md) on Kubernetes 1.26 and earlier. All existing clusters must migrate prior to upgrading to v1.27 in order to stay functional. -To set up the Amazon cloud provider, +Starting with Kubernetes 1.23, you must deactivate the `CSIMigrationAWS` feature gate to use the in-tree AWS cloud provider. You can do this by setting `feature-gates=CSIMigrationAWS=false` as an additional argument for the cluster's Kubelet, Controller Manager, API Server and Scheduler in the advanced cluster configuration. -1. [Create an IAM role and attach to the instances](#1-create-an-iam-role-and-attach-to-the-instances) -2. 
[Configure the ClusterID](#2-configure-the-clusterid) +::: -:::note Important: +When you use Amazon as a cloud provider, you can leverage the following capabilities: -Starting with Kubernetes 1.23, you have to deactivate the `CSIMigrationAWS` feature gate in order to use the in-tree AWS cloud provider. You can do this by setting `feature-gates=CSIMigrationAWS=false` as an additional argument for the cluster's Kubelet, Controller Manager, API Server and Scheduler in the advanced cluster configuration. +- **Load Balancers:** Launch an AWS Elastic Load Balancer (ELB) when you select `Layer-4 Load Balancer` in **Port Mapping** or when you launch a `Service` with `type: LoadBalancer`. +- **Persistent Volumes**: Use AWS Elastic Block Stores (EBS) for persistent volumes. -::: +See the [cloud-provider-aws README](https://kubernetes.github.io/cloud-provider-aws/) for more information about the Amazon cloud provider. + +To set up the Amazon cloud provider, + +1. [Create an IAM role and attach to the instances](#1-create-an-iam-role-and-attach-to-the-instances) +2. [Configure the ClusterID](#2-configure-the-clusterid) ### 1. Create an IAM Role and attach to the instances @@ -40,71 +44,71 @@ IAM Policy for nodes with the `controlplane` role: ```json { -"Version": "2012-10-17", -"Statement": [ - { - "Effect": "Allow", - "Action": [ - "autoscaling:DescribeAutoScalingGroups", - "autoscaling:DescribeLaunchConfigurations", - "autoscaling:DescribeTags", - "ec2:DescribeInstances", - "ec2:DescribeRegions", - "ec2:DescribeRouteTables", - "ec2:DescribeSecurityGroups", - "ec2:DescribeSubnets", - "ec2:DescribeVolumes", - "ec2:CreateSecurityGroup", - "ec2:CreateTags", - "ec2:CreateVolume", - "ec2:ModifyInstanceAttribute", - "ec2:ModifyVolume", - "ec2:AttachVolume", - "ec2:AuthorizeSecurityGroupIngress", - "ec2:CreateRoute", - "ec2:DeleteRoute", - "ec2:DeleteSecurityGroup", - "ec2:DeleteVolume", - "ec2:DetachVolume", - "ec2:RevokeSecurityGroupIngress", - "ec2:DescribeVpcs", - "elasticloadbalancing:AddTags", - "elasticloadbalancing:AttachLoadBalancerToSubnets", - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer", - "elasticloadbalancing:CreateLoadBalancer", - "elasticloadbalancing:CreateLoadBalancerPolicy", - "elasticloadbalancing:CreateLoadBalancerListeners", - "elasticloadbalancing:ConfigureHealthCheck", - "elasticloadbalancing:DeleteLoadBalancer", - "elasticloadbalancing:DeleteLoadBalancerListeners", - "elasticloadbalancing:DescribeLoadBalancers", - "elasticloadbalancing:DescribeLoadBalancerAttributes", - "elasticloadbalancing:DetachLoadBalancerFromSubnets", - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer", - "elasticloadbalancing:ModifyLoadBalancerAttributes", - "elasticloadbalancing:RegisterInstancesWithLoadBalancer", - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer", - "elasticloadbalancing:AddTags", - "elasticloadbalancing:CreateListener", - "elasticloadbalancing:CreateTargetGroup", - "elasticloadbalancing:DeleteListener", - "elasticloadbalancing:DeleteTargetGroup", - "elasticloadbalancing:DescribeListeners", - "elasticloadbalancing:DescribeLoadBalancerPolicies", - "elasticloadbalancing:DescribeTargetGroups", - "elasticloadbalancing:DescribeTargetHealth", - "elasticloadbalancing:ModifyListener", - "elasticloadbalancing:ModifyTargetGroup", - "elasticloadbalancing:RegisterTargets", - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener", - "iam:CreateServiceLinkedRole", - "kms:DescribeKey" - ], - "Resource": [ - "*" - ] - } -] + "Version": "2012-10-17", + "Statement": [ + { 
+ "Effect": "Allow", + "Action": [ + "autoscaling:DescribeAutoScalingGroups", + "autoscaling:DescribeLaunchConfigurations", + "autoscaling:DescribeTags", + "ec2:DescribeInstances", + "ec2:DescribeRegions", + "ec2:DescribeRouteTables", + "ec2:DescribeSecurityGroups", + "ec2:DescribeSubnets", + "ec2:DescribeVolumes", + "ec2:CreateSecurityGroup", + "ec2:CreateTags", + "ec2:CreateVolume", + "ec2:ModifyInstanceAttribute", + "ec2:ModifyVolume", + "ec2:AttachVolume", + "ec2:AuthorizeSecurityGroupIngress", + "ec2:CreateRoute", + "ec2:DeleteRoute", + "ec2:DeleteSecurityGroup", + "ec2:DeleteVolume", + "ec2:DetachVolume", + "ec2:RevokeSecurityGroupIngress", + "ec2:DescribeVpcs", + "elasticloadbalancing:AddTags", + "elasticloadbalancing:AttachLoadBalancerToSubnets", + "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer", + "elasticloadbalancing:CreateLoadBalancer", + "elasticloadbalancing:CreateLoadBalancerPolicy", + "elasticloadbalancing:CreateLoadBalancerListeners", + "elasticloadbalancing:ConfigureHealthCheck", + "elasticloadbalancing:DeleteLoadBalancer", + "elasticloadbalancing:DeleteLoadBalancerListeners", + "elasticloadbalancing:DescribeLoadBalancers", + "elasticloadbalancing:DescribeLoadBalancerAttributes", + "elasticloadbalancing:DetachLoadBalancerFromSubnets", + "elasticloadbalancing:DeregisterInstancesFromLoadBalancer", + "elasticloadbalancing:ModifyLoadBalancerAttributes", + "elasticloadbalancing:RegisterInstancesWithLoadBalancer", + "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer", + "elasticloadbalancing:AddTags", + "elasticloadbalancing:CreateListener", + "elasticloadbalancing:CreateTargetGroup", + "elasticloadbalancing:DeleteListener", + "elasticloadbalancing:DeleteTargetGroup", + "elasticloadbalancing:DescribeListeners", + "elasticloadbalancing:DescribeLoadBalancerPolicies", + "elasticloadbalancing:DescribeTargetGroups", + "elasticloadbalancing:DescribeTargetHealth", + "elasticloadbalancing:ModifyListener", + "elasticloadbalancing:ModifyTargetGroup", + "elasticloadbalancing:RegisterTargets", + "elasticloadbalancing:SetLoadBalancerPoliciesOfListener", + "iam:CreateServiceLinkedRole", + "kms:DescribeKey" + ], + "Resource": [ + "*" + ] + } + ] } ``` @@ -112,24 +116,24 @@ IAM policy for nodes with the `etcd` or `worker` role: ```json { -"Version": "2012-10-17", -"Statement": [ + "Version": "2012-10-17", + "Statement": [ { - "Effect": "Allow", - "Action": [ - "ec2:DescribeInstances", - "ec2:DescribeRegions", - "ecr:GetAuthorizationToken", - "ecr:BatchCheckLayerAvailability", - "ecr:GetDownloadUrlForLayer", - "ecr:GetRepositoryPolicy", - "ecr:DescribeRepositories", - "ecr:ListImages", - "ecr:BatchGetImage" - ], - "Resource": "*" + "Effect": "Allow", + "Action": [ + "ec2:DescribeInstances", + "ec2:DescribeRegions", + "ecr:GetAuthorizationToken", + "ecr:BatchCheckLayerAvailability", + "ecr:GetDownloadUrlForLayer", + "ecr:GetRepositoryPolicy", + "ecr:DescribeRepositories", + "ecr:ListImages", + "ecr:BatchGetImage" + ], + "Resource": "*" } -] + ] } ``` @@ -161,6 +165,580 @@ If you share resources between clusters, you can change the tag to: The string value, ``, is the Kubernetes cluster's ID. +:::note + +Do not tag a resource with multiple owned or shared tags. + +::: + ### Using Amazon Elastic Container Registry (ECR) The kubelet component has the ability to automatically obtain ECR credentials, when the IAM profile mentioned in [Create an IAM Role and attach to the instances](#1-create-an-iam-role-and-attach-to-the-instances) is attached to the instance(s). 
When using a Kubernetes version older than v1.15.0, the Amazon cloud provider needs to be configured in the cluster. Starting with Kubernetes version v1.15.0, the kubelet can obtain ECR credentials without having the Amazon cloud provider configured in the cluster.
+
+### Using the Out-of-Tree AWS Cloud Provider
+
+
+
+
+1. [Node name conventions and other prerequisites](https://cloud-provider-aws.sigs.k8s.io/prerequisites/) must be followed for the cloud provider to find the instance correctly.
+
+2. Rancher-managed RKE2/K3s clusters don't support configuring `providerID`. However, the engine will set the node name correctly if the following configuration is set on the provisioning cluster object:
+
+```yaml
+spec:
+  rkeConfig:
+    machineGlobalConfig:
+      cloud-provider-name: aws
+```
+
+This option will be passed to the configuration of the various Kubernetes components that run on the node, and must be overridden per component to prevent the in-tree provider from running unintentionally:
+
+
+**Override on Etcd:**
+
+```yaml
+spec:
+  rkeConfig:
+    machineSelectorConfig:
+      - config:
+          kubelet-arg:
+            - cloud-provider=external
+        machineLabelSelector:
+          matchExpressions:
+            - key: rke.cattle.io/etcd-role
+              operator: In
+              values:
+                - 'true'
+```
+
+**Override on Control Plane:**
+
+```yaml
+spec:
+  rkeConfig:
+    machineSelectorConfig:
+      - config:
+          disable-cloud-controller: true
+          kube-apiserver-arg:
+            - cloud-provider=external
+          kube-controller-manager-arg:
+            - cloud-provider=external
+          kubelet-arg:
+            - cloud-provider=external
+        machineLabelSelector:
+          matchExpressions:
+            - key: rke.cattle.io/control-plane-role
+              operator: In
+              values:
+                - 'true'
+```
+
+**Override on Worker:**
+
+```yaml
+spec:
+  rkeConfig:
+    machineSelectorConfig:
+      - config:
+          kubelet-arg:
+            - cloud-provider=external
+        machineLabelSelector:
+          matchExpressions:
+            - key: rke.cattle.io/worker-role
+              operator: In
+              values:
+                - 'true'
+```
+
+3. Select `Amazon` if relying on the above mechanism to set the provider ID. Otherwise, select the **External (out-of-tree)** cloud provider, which sets `--cloud-provider=external` for Kubernetes components.
+
+4. Specify the `aws-cloud-controller-manager` Helm chart as an additional manifest to install:
+
+```yaml
+spec:
+  rkeConfig:
+    additionalManifest: |-
+      apiVersion: helm.cattle.io/v1
+      kind: HelmChart
+      metadata:
+        name: aws-cloud-controller-manager
+        namespace: kube-system
+      spec:
+        chart: aws-cloud-controller-manager
+        repo: https://kubernetes.github.io/cloud-provider-aws
+        targetNamespace: kube-system
+        bootstrap: true
+        valuesContent: |-
+          hostNetworking: true
+          nodeSelector:
+            node-role.kubernetes.io/control-plane: "true"
+          args:
+            - --configure-cloud-routes=false
+            - --v=5
+            - --cloud-provider=aws
+```
+
+
+
+
+1. [Node name conventions and other prerequisites](https://cloud-provider-aws.sigs.k8s.io/prerequisites/) must be followed so that the cloud provider can find the instance. Rancher-provisioned clusters don't support configuring `providerID`.
+
+:::note
+
+If you use IP-based naming, the nodes must be named after the instance followed by the regional domain name (`ip-xxx-xxx-xxx-xxx.ec2.<region>.internal`). If you have a custom domain name set in the DHCP options, you must set `--hostname-override` on `kube-proxy` and `kubelet` to match this naming convention.
+
+:::
+
+To meet node naming conventions, Rancher allows setting `useInstanceMetadataHostname` when the `External Amazon` cloud provider is selected. 
Enabling `useInstanceMetadataHostname` will query the EC2 metadata service and set `http://169.254.169.254/latest/meta-data/hostname` as `hostname-override` for `kubelet` and `kube-proxy`:
+
+```yaml
+rancher_kubernetes_engine_config:
+  cloud_provider:
+    name: external-aws
+    useInstanceMetadataHostname: true
+```
+
+You must not enable `useInstanceMetadataHostname` when setting custom values for `hostname-override` for custom clusters. When you create a [custom cluster](../../../../pages-for-subheaders/use-existing-nodes.md), add [`--node-name`](../../../../reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/rancher-agent-options.md) to the `docker run` node registration command to set `hostname-override`, for example, `"$(hostname -f)"`. This can be done manually or by using **Show Advanced Options** in the Rancher UI to add **Node Name**.
+
+2. Select the cloud provider.
+
+Selecting **External Amazon (out-of-tree)** sets `--cloud-provider=external` and enables `useInstanceMetadataHostname`. As mentioned in step 1, enabling `useInstanceMetadataHostname` will query the EC2 metadata service and set `http://169.254.169.254/latest/meta-data/hostname` as `hostname-override` for `kubelet` and `kube-proxy`.
+
+:::note
+
+You must disable `useInstanceMetadataHostname` when setting a custom node name for custom clusters via `node-name`.
+
+:::
+
+```yaml
+rancher_kubernetes_engine_config:
+  cloud_provider:
+    name: external-aws
+    useInstanceMetadataHostname: true/false
+```
+
+Existing clusters that use an **External** cloud provider will set `--cloud-provider=external` for Kubernetes components but won't set the node name.
+
+3. Install the AWS cloud controller manager after the cluster finishes provisioning. Note that the cluster isn't successfully provisioned, and nodes remain in an `uninitialized` state, until you deploy the cloud controller manager. This can be done manually, or via [Helm charts in the UI](#helm-chart-installation-from-ui).
+
+Refer to the official AWS upstream documentation for the [cloud controller manager](https://kubernetes.github.io/cloud-provider-aws).
+
+
+
+### Helm Chart Installation from CLI
+
+
+
+
+Official upstream docs for [Helm chart installation](https://github.com/kubernetes/cloud-provider-aws/tree/master/charts/aws-cloud-controller-manager) can be found on GitHub.
+
+1. Add the Helm repository:
+
+```shell
+helm repo add aws-cloud-controller-manager https://kubernetes.github.io/cloud-provider-aws
+helm repo update
+```
+
+2. 
Create a `values.yaml` file with the following contents to override the default `values.yaml`: + +```yaml +# values.yaml +hostNetworking: true +tolerations: + - effect: NoSchedule + key: node.cloudprovider.kubernetes.io/uninitialized + value: 'true' + - effect: NoSchedule + value: 'true' + key: node-role.kubernetes.io/controlplane +nodeSelector: + node-role.kubernetes.io/controlplane: 'true' +args: + - --configure-cloud-routes=false + - --use-service-account-credentials=true + - --v=2 + - --cloud-provider=aws +clusterRoleRules: + - apiGroups: + - "" + resources: + - events + verbs: + - create + - patch + - update + - apiGroups: + - "" + resources: + - nodes + verbs: + - '*' + - apiGroups: + - "" + resources: + - nodes/status + verbs: + - patch + - apiGroups: + - "" + resources: + - services + verbs: + - list + - patch + - update + - watch + - apiGroups: + - "" + resources: + - services/status + verbs: + - list + - patch + - update + - watch + - apiGroups: + - '' + resources: + - serviceaccounts + verbs: + - create + - get + - apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - update + - watch + - apiGroups: + - "" + resources: + - endpoints + verbs: + - create + - get + - list + - watch + - update + - apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - get + - list + - watch + - update + - apiGroups: + - "" + resources: + - serviceaccounts/token + verbs: + - create +``` + +3. Install the Helm chart: + +```shell +helm upgrade --install aws-cloud-controller-manager aws-cloud-controller-manager/aws-cloud-controller-manager --values values.yaml +``` + +Verify that the Helm chart installed successfully: + +```shell +helm status -n kube-system aws-cloud-controller-manager +``` + +4. (Optional) Verify that the cloud controller manager update succeeded: + +```shell +kubectl rollout status daemonset -n kube-system aws-cloud-controller-manager +``` + + + + + +Official upstream docs for [Helm chart installation](https://github.com/kubernetes/cloud-provider-aws/tree/master/charts/aws-cloud-controller-manager) can be found on Github. + +1. Add the Helm repository: + +```shell +helm repo add aws-cloud-controller-manager https://kubernetes.github.io/cloud-provider-aws +helm repo update +``` + +2. 
Create a `values.yaml` file with the following contents, to override the default `values.yaml`: + +```yaml +# values.yaml +hostNetworking: true +tolerations: + - effect: NoSchedule + key: node.cloudprovider.kubernetes.io/uninitialized + value: 'true' + - effect: NoSchedule + value: 'true' + key: node-role.kubernetes.io/controlplane +nodeSelector: + node-role.kubernetes.io/controlplane: 'true' +args: + - --configure-cloud-routes=false + - --use-service-account-credentials=true + - --v=2 + - --cloud-provider=aws +clusterRoleRules: + - apiGroups: + - "" + resources: + - events + verbs: + - create + - patch + - update + - apiGroups: + - "" + resources: + - nodes + verbs: + - '*' + - apiGroups: + - "" + resources: + - nodes/status + verbs: + - patch + - apiGroups: + - "" + resources: + - services + verbs: + - list + - patch + - update + - watch + - apiGroups: + - "" + resources: + - services/status + verbs: + - list + - patch + - update + - watch + - apiGroups: + - '' + resources: + - serviceaccounts + verbs: + - create + - get + - apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - update + - watch + - apiGroups: + - "" + resources: + - endpoints + verbs: + - create + - get + - list + - watch + - update + - apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - get + - list + - watch + - update + - apiGroups: + - "" + resources: + - serviceaccounts/token + verbs: + - create +``` + +3. Install the Helm chart: + +```shell +helm upgrade --install aws-cloud-controller-manager -n kube-system aws-cloud-controller-manager/aws-cloud-controller-manager --values values.yaml +``` + +Verify that the Helm chart installed successfully: + +```shell +helm status -n kube-system aws-cloud-controller-manager +``` + +4. If present, edit the Daemonset to remove the default node selector `node-role.kubernetes.io/control-plane: ""`: + +```shell +kubectl edit daemonset aws-cloud-controller-manager -n kube-system +``` + +5. (Optional) Verify that the cloud controller manager update succeeded: + +```shell +kubectl rollout status daemonset -n kube-system aws-cloud-controller-manager +``` + + + + +### Helm Chart Installation from UI + + + + +1. Click **☰**, then select the name of the cluster from the left navigation. + +2. Select **Apps** > **Repositories**. + +3. Click the **Create** button. + +4. Enter `https://kubernetes.github.io/cloud-provider-aws` in the **Index URL** field. + +5. Select **Apps** > **Charts** from the left navigation and install **aws-cloud-controller-manager**. + +6. Select the namespace, `kube-system`, and enable **Customize Helm options before install**. + +7. Add the following container arguments: + +```yaml + - '--use-service-account-credentials=true' + - '--configure-cloud-routes=false' +``` + +8. Add `get` to `verbs` for `serviceaccounts` resources in `clusterRoleRules`. This allows the cloud controller manager to get service accounts upon startup. + +```yaml + - apiGroups: + - '' + resources: + - serviceaccounts + verbs: + - create + - get +``` + +9. Rancher-provisioned RKE nodes are tainted `node-role.kubernetes.io/controlplane`. 
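Before changing these values, you can confirm the taint that is actually present on your nodes (a read-only check):

```shell
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'
```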
Update tolerations and the nodeSelector: + +```yaml +tolerations: + - effect: NoSchedule + key: node.cloudprovider.kubernetes.io/uninitialized + value: 'true' + - effect: NoSchedule + value: 'true' + key: node-role.kubernetes.io/controlplane + +``` + +```yaml +nodeSelector: + node-role.kubernetes.io/controlplane: 'true' +``` + +:::note + +There's currently a [known issue](https://github.com/rancher/dashboard/issues/9249) where nodeSelector can't be updated from the Rancher UI. Continue installing the chart and then edit the Daemonset manually to set the `nodeSelector`: + +```yaml +nodeSelector: + node-role.kubernetes.io/controlplane: 'true' +``` + +::: + +10. Install the chart and confirm that the Daemonset `aws-cloud-controller-manager` is running. Verify `aws-cloud-controller-manager` pods are running in target namespace (`kube-system` unless modified in step 6). + + + + + +1. Click **☰**, then select the name of the cluster from the left navigation. + +2. Select **Apps** > **Repositories**. + +3. Click the **Create** button. + +4. Enter `https://kubernetes.github.io/cloud-provider-aws` in the **Index URL** field. + +5. Select **Apps** > **Charts** from the left navigation and install **aws-cloud-controller-manager**. + +6. Select the namespace, `kube-system`, and enable **Customize Helm options before install**. + +7. Add the following container arguments: + +```yaml + - '--use-service-account-credentials=true' + - '--configure-cloud-routes=false' +``` + +8. Add `get` to `verbs` for `serviceaccounts` resources in `clusterRoleRules`. This allows the cloud controller manager to get service accounts upon startup: + +```yaml + - apiGroups: + - '' + resources: + - serviceaccounts + verbs: + - create + - get +``` + +9. Rancher-provisioned RKE nodes are tainted `node-role.kubernetes.io/controlplane`. Update tolerations and the nodeSelector: + +```yaml +tolerations: + - effect: NoSchedule + key: node.cloudprovider.kubernetes.io/uninitialized + value: 'true' + - effect: NoSchedule + value: 'true' + key: node-role.kubernetes.io/controlplane + +``` + +```yaml +nodeSelector: + node-role.kubernetes.io/controlplane: 'true' +``` + +:::note + +There's currently a [known issue](https://github.com/rancher/dashboard/issues/9249) where `nodeSelector` can't be updated from the Rancher UI. Continue installing the chart and then Daemonset manually to set the `nodeSelector`: + +``` yaml +nodeSelector: + node-role.kubernetes.io/controlplane: 'true' +``` + +::: + +10. Install the chart and confirm that the Daemonset `aws-cloud-controller-manager` deploys successfully: + +```shell +kubectl rollout status daemonset -n kube-system aws-cloud-controller-manager +``` + + + diff --git a/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-to-out-of-tree-amazon.md b/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-to-out-of-tree-amazon.md new file mode 100644 index 000000000000..c65bef6ec155 --- /dev/null +++ b/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-to-out-of-tree-amazon.md @@ -0,0 +1,196 @@ +--- +title: Migrating Amazon In-tree to Out-of-tree +--- + + + + + +Kubernetes is moving away from maintaining cloud providers in-tree. In Kubernetes 1.27 and later, the in-tree cloud providers have been removed. + +You can migrate from an in-tree to an out-of-tree AWS cloud provider on Kubernetes 1.26 and earlier. 
All existing clusters must migrate prior to upgrading to v1.27 in order to stay functional. + +To migrate from the in-tree cloud provider to the out-of-tree AWS cloud provider, you must stop the existing cluster's kube controller manager and install the AWS cloud controller manager. There are many ways to do this. Refer to the official AWS documentation on the [external cloud controller manager](https://cloud-provider-aws.sigs.k8s.io/getting_started/) for details. + +If it's acceptable to have some downtime, you can [switch to an external cloud provider](./amazon.md#using-the-out-of-tree-aws-cloud-provider-for-rke), which removes in-tree components and then deploy charts to install the AWS cloud controller manager. + +If your setup can't tolerate any control plane downtime, you must enable leader migration. This facilitates a smooth transition from the controllers in the kube controller manager to their counterparts in the cloud controller manager. Refer to the official AWS documentation on [Using leader migration](https://cloud-provider-aws.sigs.k8s.io/getting_started/) for more details. + +:::note Important: +The Kubernetes [cloud controller migration documentation](https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/#before-you-begin) states that it's possible to migrate with the same Kubernetes version, but assumes that the migration is part of a Kubernetes upgrade. Refer to the Kubernetes documentation on [migrating to use the cloud controller manager](https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/) to see if you need to customize your setup before migrating. Confirm your [migration configuration values](https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/#default-configuration). If your cloud provider provides an implementation of the Node IPAM controller, you also need to [migrate the IPAM controller](https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/#node-ipam-controller-migration). +::: + + + + +1. Update the cluster config to enable leader migration: + +```yaml +spec: + rkeConfig: + machineSelectorConfig: + - config: + kube-controller-manager-arg: + - enable-leader-migration + machineLabelSelector: + matchExpressions: + - key: rke.cattle.io/control-plane-role + operator: In + values: + - 'true' +``` + +Note that the cloud provider is still `aws` at this step: + +```yaml +spec: + rkeConfig: + machineGlobalConfig: + cloud-provider-name: aws +``` + +2. Cordon control plane nodes so that AWS cloud controller pods run on nodes only after upgrading to the external cloud provider: + +```shell +kubectl cordon -l "node-role.kubernetes.io/controlplane=true" +``` + +3. To install the AWS cloud controller manager with leader migration enabled, follow Steps 1-3 for [deploying the cloud controller manager chart](./amazon.md#using-out-of-tree-aws-cloud-provider-for-rke2) +From Kubernetes 1.22 onwards, the kube-controller-manager will utilize a default configuration which will satisfy the controller-to-manager migration. +Update container args of the `aws-cloud-controller-manager` under `spec.rkeConfig.additionalManifest` to enable leader migration: + +```shell +- '--enable-leader-migration=true' +``` + +4. Install the chart and confirm that the Daemonset `aws-cloud-controller-manager` successfully deployed: + +```shell +kubectl rollout status daemonset -n kube-system aws-cloud-controller-manager +``` + +5. 
Update the provisioning cluster to change the cloud provider and remove leader migration args from the kube controller. +If upgrading the Kubernetes version, set the Kubernetes version as well in the `spec.kubernetesVersion` section of the cluster YAML file + +:::note Important + +Only remove `cloud-provider-name: aws` if not relying on the rke2 supervisor to correctly set the providerID. + +::: + +Remove `enable-leader-migration` if you don't want it enabled in your cluster: + +```yaml +spec: + rkeConfig: + machineGlobalConfig: + cloud-provider-name: external +``` + +Remove `enable-leader-migration` from: + +```yaml +spec: + rkeConfig: + machineSelectorConfig: + - config: + kube-controller-manager-arg: + - enable-leader-migration + machineLabelSelector: + matchExpressions: + - key: rke.cattle.io/control-plane-role + operator: In + values: + - 'true' +``` + +:::tip +You can also disable leader migration after the upgrade, as leader migration is no longer required due to only one cloud-controller-manager and can be removed. +Upgrade the chart and remove the following section from the container arguments: + +```yaml +- --enable-leader-migration=true +``` +::: + +Verify the cloud controller manager update was successfully rolled out with the following command: + +```shell +kubectl rollout status daemonset -n kube-system aws-cloud-controller-manager +``` + +6. The cloud provider is responsible for setting the ProviderID of the node. Check if all nodes are initialized with the ProviderID: + +```shell +kubectl describe nodes | grep "ProviderID" +``` + + + + + +1. Update the cluster config to enable leader migration in `cluster.yml`: + +```yaml +services: + kube-controller: + extra_args: + enable-leader-migration: "true" +``` + +Note that the cloud provider is still `aws` at this step: + +```yaml +cloud_provider: + name: aws +``` + +2. Cordon the control plane nodes, so that AWS cloud controller pods run on nodes only after upgrading to the external cloud provider: + +```shell +kubectl cordon -l "node-role.kubernetes.io/controlplane=true" +``` + +3. To install the AWS cloud controller manager, you must enable leader migration and follow the same steps as when installing AWS on a new cluster. To enable leader migration, add the following to the container arguments in step 7 while following the [steps to install the chart](./amazon.md#helm-chart-installation-from-ui-for-rke): + +```yaml +- '--enable-leader-migration=true' +``` + +4. Confirm that the chart is installed but that the new pods aren't running yet due to cordoned controlplane nodes. After updating the cluster in the next step, RKE will upgrade and uncordon each node, and schedule `aws-controller-manager` pods. + +5. Update `cluster.yml` to change the cloud provider and remove the leader migration arguments from the kube-controller. + + Selecting **External Amazon (out-of-tree)** sets `--cloud-provider=external` and lets you enable `useInstanceMetadataHostname`. You must enable `useInstanceMetadataHostname` for node-driver clusters and for custom clusters if not you don't provide a custom node name via `--node-name`. 
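For reference, a registration command that sets a custom node name might look like the following. This is a hedged sketch only: the agent version, server URL, and registration token are placeholders, and the real command should be copied from the Rancher UI:

```shell
# Placeholder values shown in angle brackets; role flags depend on the node
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:<version> \
  --server https://<rancher-server-url> --token <registration-token> \
  --node-name "$(hostname -f)" \
  --etcd --controlplane --worker
```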
Enabling `useInstanceMetadataHostname` will query ec2 metadata service and set `/hostname` as `hostname-override` for `kubelet` and `kube-proxy`: + +```yaml +rancher_kubernetes_engine_config: + cloud_provider: + name: external-aws + useInstanceMetadataHostname: true/false +``` + + Remove `enable-leader-migration` if you don't want it enabled in your cluster: + + ```yaml + services: + kube-controller: + extra_args: + enable-leader-migration: "true" + ``` + +:::tip +You can also disable leader migration after you finish the migration. Upgrade the chart and remove the following section from the container arguments: + +```yaml +- --enable-leader-migration=true +``` +::: + +6. If you're upgrading the cluster's Kubernetes version, set the Kubernetes version as well. + +7. Update the cluster. The `aws-cloud-controller-manager` pods should now be running. + + + + diff --git a/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-from-in-tree-to-out-of-tree.md b/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-to-out-of-tree-vsphere.md similarity index 97% rename from docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-from-in-tree-to-out-of-tree.md rename to docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-to-out-of-tree-vsphere.md index d302213118ac..a3bc9b89d2db 100644 --- a/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-from-in-tree-to-out-of-tree.md +++ b/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-to-out-of-tree-vsphere.md @@ -1,5 +1,5 @@ --- -title: Migrating vSphere In-tree Volumes to Out-of-tree +title: Migrating vSphere In-tree to Out-of-tree --- @@ -64,7 +64,7 @@ Once all nodes are tainted by the running the script, launch the Helm vSphere CP 1. Click **☰ > Cluster Management**. 1. Go to the cluster where the vSphere CPI chart will be installed and click **Explore**. 1. Click **Apps > Charts**. -1. Click **vSphere CPI**.. +1. Click **vSphere CPI**. 1. Click **Install**. 1. Fill out the required vCenter details and click **Install**. @@ -81,7 +81,7 @@ kubectl describe nodes | grep "ProviderID" 1. Click **☰ > Cluster Management**. 1. Go to the cluster where the vSphere CSI chart will be installed and click **Explore**. 1. Click **Apps > Charts**. -1. Click **vSphere CSI**.. +1. Click **vSphere CSI**. 1. Click **Install**. 1. Fill out the required vCenter details and click **Install**. 1. Check **Customize Helm options before install** and click **Next**. 
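Once the CPI and CSI charts are installed, a quick way to confirm the vSphere CSI driver registered with the cluster is to list the cluster's CSI drivers; `csi.vsphere.vmware.com` should be present. This is a read-only sanity check, assuming the chart's default driver name:

```shell
kubectl get csidrivers
# Rough check that the CPI/CSI pods came up (pod names vary by chart version)
kubectl -n kube-system get pods | grep -i vsphere
```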
diff --git a/docusaurus.config.js b/docusaurus.config.js index f398240e0098..91447e069503 100644 --- a/docusaurus.config.js +++ b/docusaurus.config.js @@ -582,9 +582,12 @@ module.exports = { from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/set-up-cloud-providers/vsphere/configure-out-of-tree-vsphere' }, { - to: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-from-in-tree-to-out-of-tree', + to: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-to-out-of-tree-vsphere', from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/set-up-cloud-providers/vsphere/migrate-from-in-tree-to-out-of-tree' }, + { to: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-to-out-of-tree-vsphere', + from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-from-in-tree-to-out-of-tree' + }, { to: '/how-to-guides/new-user-guides/add-users-to-projects', from: '/how-to-guides/advanced-user-guides/manage-projects/add-users-to-projects' diff --git a/sidebars.js b/sidebars.js index 8aed9ca6eced..e3e344f7ff32 100644 --- a/sidebars.js +++ b/sidebars.js @@ -493,11 +493,12 @@ const sidebars = { }, items: [ "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon", + "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-to-out-of-tree-amazon", "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/azure", "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/google-compute-engine", "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/configure-in-tree-vsphere", "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/configure-out-of-tree-vsphere", - "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-from-in-tree-to-out-of-tree", + "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-to-out-of-tree-vsphere", ] }, "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters", diff --git a/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md b/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md index 10700b418d4e..cf83a023df64 100644 --- a/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md +++ b/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md @@ -7,23 +7,27 @@ weight: 1 -When using the `Amazon` cloud provider, you can leverage the following capabilities: +:::note Important: -- **Load Balancers:** Launches an AWS Elastic Load Balancer (ELB) when choosing `Layer-4 Load Balancer` in **Port Mapping** or when launching a `Service` with `type: LoadBalancer`. -- **Persistent Volumes**: Allows you to use AWS Elastic Block Stores (EBS) for persistent volumes. +In Kubernetes 1.27 and later, you must use an out-of-tree AWS cloud provider. In-tree cloud providers have been deprecated. 
The Amazon cloud provider has been removed completely, and won't work after an upgrade to Kubernetes 1.27. The steps listed below are still required to set up an Amazon cloud provider. You can [set up an out-of-tree cloud provider for RKE](#using-the-out-of-tree-aws-cloud-provider-for-rke) after creating an IAM role and configuring the ClusterID. -See [cloud-provider-aws README](https://kubernetes.github.io/cloud-provider-aws/) for all information regarding the Amazon cloud provider. +You can also [migrate from an in-tree to an out-of-tree AWS cloud provider](./migrate-to-out-of-tree-amazon.md) on Kubernetes 1.26 and earlier. All existing clusters must migrate prior to upgrading to v1.27 in order to stay functional. -To set up the Amazon cloud provider, +Starting with Kubernetes 1.23, you must deactivate the `CSIMigrationAWS` feature gate to use the in-tree AWS cloud provider. You can do this by setting `feature-gates=CSIMigrationAWS=false` as an additional argument for the cluster's Kubelet, Controller Manager, API Server and Scheduler in the advanced cluster configuration. -1. [Create an IAM role and attach to the instances](#1-create-an-iam-role-and-attach-to-the-instances) -2. [Configure the ClusterID](#2-configure-the-clusterid) +::: -:::note Important: +When you use Amazon as a cloud provider, you can leverage the following capabilities: -Starting with Kubernetes 1.23, you have to deactivate the `CSIMigrationAWS` feature gate in order to use the in-tree AWS cloud provider. You can do this by setting `feature-gates=CSIMigrationAWS=false` as an additional argument for the cluster's Kubelet, Controller Manager, API Server and Scheduler in the advanced cluster configuration. +- **Load Balancers:** Launch an AWS Elastic Load Balancer (ELB) when you select `Layer-4 Load Balancer` in **Port Mapping** or when you launch a `Service` with `type: LoadBalancer`. +- **Persistent Volumes**: Use AWS Elastic Block Stores (EBS) for persistent volumes. -::: +See the [cloud-provider-aws README](https://kubernetes.github.io/cloud-provider-aws/) for more information about the Amazon cloud provider. + +To set up the Amazon cloud provider, + +1. [Create an IAM role and attach to the instances](#1-create-an-iam-role-and-attach-to-the-instances) +2. [Configure the ClusterID](#2-configure-the-clusterid) ### 1. 
Create an IAM Role and attach to the instances @@ -40,71 +44,71 @@ IAM Policy for nodes with the `controlplane` role: ```json { -"Version": "2012-10-17", -"Statement": [ - { - "Effect": "Allow", - "Action": [ - "autoscaling:DescribeAutoScalingGroups", - "autoscaling:DescribeLaunchConfigurations", - "autoscaling:DescribeTags", - "ec2:DescribeInstances", - "ec2:DescribeRegions", - "ec2:DescribeRouteTables", - "ec2:DescribeSecurityGroups", - "ec2:DescribeSubnets", - "ec2:DescribeVolumes", - "ec2:CreateSecurityGroup", - "ec2:CreateTags", - "ec2:CreateVolume", - "ec2:ModifyInstanceAttribute", - "ec2:ModifyVolume", - "ec2:AttachVolume", - "ec2:AuthorizeSecurityGroupIngress", - "ec2:CreateRoute", - "ec2:DeleteRoute", - "ec2:DeleteSecurityGroup", - "ec2:DeleteVolume", - "ec2:DetachVolume", - "ec2:RevokeSecurityGroupIngress", - "ec2:DescribeVpcs", - "elasticloadbalancing:AddTags", - "elasticloadbalancing:AttachLoadBalancerToSubnets", - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer", - "elasticloadbalancing:CreateLoadBalancer", - "elasticloadbalancing:CreateLoadBalancerPolicy", - "elasticloadbalancing:CreateLoadBalancerListeners", - "elasticloadbalancing:ConfigureHealthCheck", - "elasticloadbalancing:DeleteLoadBalancer", - "elasticloadbalancing:DeleteLoadBalancerListeners", - "elasticloadbalancing:DescribeLoadBalancers", - "elasticloadbalancing:DescribeLoadBalancerAttributes", - "elasticloadbalancing:DetachLoadBalancerFromSubnets", - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer", - "elasticloadbalancing:ModifyLoadBalancerAttributes", - "elasticloadbalancing:RegisterInstancesWithLoadBalancer", - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer", - "elasticloadbalancing:AddTags", - "elasticloadbalancing:CreateListener", - "elasticloadbalancing:CreateTargetGroup", - "elasticloadbalancing:DeleteListener", - "elasticloadbalancing:DeleteTargetGroup", - "elasticloadbalancing:DescribeListeners", - "elasticloadbalancing:DescribeLoadBalancerPolicies", - "elasticloadbalancing:DescribeTargetGroups", - "elasticloadbalancing:DescribeTargetHealth", - "elasticloadbalancing:ModifyListener", - "elasticloadbalancing:ModifyTargetGroup", - "elasticloadbalancing:RegisterTargets", - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener", - "iam:CreateServiceLinkedRole", - "kms:DescribeKey" - ], - "Resource": [ - "*" - ] - } -] + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "autoscaling:DescribeAutoScalingGroups", + "autoscaling:DescribeLaunchConfigurations", + "autoscaling:DescribeTags", + "ec2:DescribeInstances", + "ec2:DescribeRegions", + "ec2:DescribeRouteTables", + "ec2:DescribeSecurityGroups", + "ec2:DescribeSubnets", + "ec2:DescribeVolumes", + "ec2:CreateSecurityGroup", + "ec2:CreateTags", + "ec2:CreateVolume", + "ec2:ModifyInstanceAttribute", + "ec2:ModifyVolume", + "ec2:AttachVolume", + "ec2:AuthorizeSecurityGroupIngress", + "ec2:CreateRoute", + "ec2:DeleteRoute", + "ec2:DeleteSecurityGroup", + "ec2:DeleteVolume", + "ec2:DetachVolume", + "ec2:RevokeSecurityGroupIngress", + "ec2:DescribeVpcs", + "elasticloadbalancing:AddTags", + "elasticloadbalancing:AttachLoadBalancerToSubnets", + "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer", + "elasticloadbalancing:CreateLoadBalancer", + "elasticloadbalancing:CreateLoadBalancerPolicy", + "elasticloadbalancing:CreateLoadBalancerListeners", + "elasticloadbalancing:ConfigureHealthCheck", + "elasticloadbalancing:DeleteLoadBalancer", + "elasticloadbalancing:DeleteLoadBalancerListeners", + 
"elasticloadbalancing:DescribeLoadBalancers", + "elasticloadbalancing:DescribeLoadBalancerAttributes", + "elasticloadbalancing:DetachLoadBalancerFromSubnets", + "elasticloadbalancing:DeregisterInstancesFromLoadBalancer", + "elasticloadbalancing:ModifyLoadBalancerAttributes", + "elasticloadbalancing:RegisterInstancesWithLoadBalancer", + "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer", + "elasticloadbalancing:AddTags", + "elasticloadbalancing:CreateListener", + "elasticloadbalancing:CreateTargetGroup", + "elasticloadbalancing:DeleteListener", + "elasticloadbalancing:DeleteTargetGroup", + "elasticloadbalancing:DescribeListeners", + "elasticloadbalancing:DescribeLoadBalancerPolicies", + "elasticloadbalancing:DescribeTargetGroups", + "elasticloadbalancing:DescribeTargetHealth", + "elasticloadbalancing:ModifyListener", + "elasticloadbalancing:ModifyTargetGroup", + "elasticloadbalancing:RegisterTargets", + "elasticloadbalancing:SetLoadBalancerPoliciesOfListener", + "iam:CreateServiceLinkedRole", + "kms:DescribeKey" + ], + "Resource": [ + "*" + ] + } + ] } ``` @@ -112,24 +116,24 @@ IAM policy for nodes with the `etcd` or `worker` role: ```json { -"Version": "2012-10-17", -"Statement": [ + "Version": "2012-10-17", + "Statement": [ { - "Effect": "Allow", - "Action": [ - "ec2:DescribeInstances", - "ec2:DescribeRegions", - "ecr:GetAuthorizationToken", - "ecr:BatchCheckLayerAvailability", - "ecr:GetDownloadUrlForLayer", - "ecr:GetRepositoryPolicy", - "ecr:DescribeRepositories", - "ecr:ListImages", - "ecr:BatchGetImage" - ], - "Resource": "*" + "Effect": "Allow", + "Action": [ + "ec2:DescribeInstances", + "ec2:DescribeRegions", + "ecr:GetAuthorizationToken", + "ecr:BatchCheckLayerAvailability", + "ecr:GetDownloadUrlForLayer", + "ecr:GetRepositoryPolicy", + "ecr:DescribeRepositories", + "ecr:ListImages", + "ecr:BatchGetImage" + ], + "Resource": "*" } -] + ] } ``` @@ -161,6 +165,580 @@ If you share resources between clusters, you can change the tag to: The string value, ``, is the Kubernetes cluster's ID. +:::note + +Do not tag a resource with multiple owned or shared tags. + +::: + ### Using Amazon Elastic Container Registry (ECR) The kubelet component has the ability to automatically obtain ECR credentials, when the IAM profile mentioned in [Create an IAM Role and attach to the instances](#1-create-an-iam-role-and-attach-to-the-instances) is attached to the instance(s). When using a Kubernetes version older than v1.15.0, the Amazon cloud provider needs be configured in the cluster. Starting with Kubernetes version v1.15.0, the kubelet can obtain ECR credentials without having the Amazon cloud provider configured in the cluster. + +### Using the Out-of-Tree AWS Cloud Provider + + + + +1. [Node name conventions and other prerequisites](https://cloud-provider-aws.sigs.k8s.io/prerequisites/) must be followed for the cloud provider to find the instance correctly. + +2. Rancher managed RKE2/K3s clusters don't support configuring `providerID`. 
However, the engine will set the node name correctly if the following configuration is set on the provisioning cluster object: + +```yaml +spec: + rkeConfig: + machineGlobalConfig: + cloud-provider-name: aws +``` + +This option will be passed to the configuration of the various Kubernetes components that run on the node, and must be overridden per component to prevent the in-tree provider from running unintentionally: + + +**Override on Etcd:** + +```yaml +spec: + rkeConfig: + machineSelectorConfig: + - config: + kubelet-arg: + - cloud-provider=external + machineLabelSelector: + matchExpressions: + - key: rke.cattle.io/etcd-role + operator: In + values: + - 'true' +``` + +**Override on Control Plane:** + +```yaml +spec: + rkeConfig: + machineSelectorConfig: + - config: + disable-cloud-controller: true + kube-apiserver-arg: + - cloud-provider=external + kube-controller-manager-arg: + - cloud-provider=external + kubelet-arg: + - cloud-provider=external + machineLabelSelector: + matchExpressions: + - key: rke.cattle.io/control-plane-role + operator: In + values: + - 'true' +``` + +**Override on Worker:** + +```yaml +spec: + rkeConfig: + machineSelectorConfig: + - config: + kubelet-arg: + - cloud-provider=external + machineLabelSelector: + matchExpressions: + - key: rke.cattle.io/worker-role + operator: In + values: + - 'true' +``` + +2. Select `Amazon` if relying on the above mechanism to set the provider ID. Otherwise, select **External (out-of-tree)** cloud provider, which sets `--cloud-provider=external` for Kubernetes components. + +3. Specify the `aws-cloud-controller-manager` Helm chart as an additional manifest to install: + +```yaml +spec: + rkeConfig: + additionalManifest: |- + apiVersion: helm.cattle.io/v1 + kind: HelmChart + metadata: + name: aws-cloud-controller-manager + namespace: kube-system + spec: + chart: aws-cloud-controller-manager + repo: https://kubernetes.github.io/cloud-provider-aws + targetNamespace: kube-system + bootstrap: true + valuesContent: |- + hostNetworking: true + nodeSelector: + node-role.kubernetes.io/control-plane: "true" + args: + - --configure-cloud-routes=false + - --v=5 + - --cloud-provider=aws +``` + + + + + +1. [Node name conventions and other prerequisites ](https://cloud-provider-aws.sigs.k8s.io/prerequisites/) must be followed so that the cloud provider can find the instance. Rancher provisioned clusters don't support configuring `providerID`. + +:::note + +If you use IP-based naming, the nodes must be named after the instance followed by the regional domain name (`ip-xxx-xxx-xxx-xxx.ec2..internal`). If you have a custom domain name set in the DHCP options, you must set `--hostname-override` on `kube-proxy` and `kubelet` to match this naming convention. + +::: + +To meet node naming conventions, Rancher allows setting `useInstanceMetadataHostname` when the `External Amazon` cloud provider is selected. Enabling `useInstanceMetadataHostname` will query ec2 metadata service and set `/hostname` as `hostname-override` for `kubelet` and `kube-proxy`: + +```yaml +rancher_kubernetes_engine_config: + cloud_provider: + name: external-aws + useInstanceMetadataHostname: true +``` + +You must not enable `useInstanceMetadataHostname` when setting custom values for `hostname-override` for custom clusters. 
When you create a [custom cluster](../../../../pages-for-subheaders/use-existing-nodes.md), add [`--node-name`](../../../../reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/rancher-agent-options.md) to the `docker run` node registration command to set `hostname-override`, for example `"$(hostname -f)"`. This can be done manually or by using **Show Advanced Options** in the Rancher UI to add **Node Name**.
+
+2. Select the cloud provider.
+
+Selecting **External Amazon (out-of-tree)** sets `--cloud-provider=external` and enables `useInstanceMetadataHostname`. As mentioned in step 1, enabling `useInstanceMetadataHostname` will query the EC2 metadata service and set `http://169.254.169.254/latest/meta-data/hostname` as `hostname-override` for `kubelet` and `kube-proxy`.
+
+:::note
+
+You must disable `useInstanceMetadataHostname` when setting a custom node name for custom clusters via `node-name`.
+
+:::
+
+```yaml
+rancher_kubernetes_engine_config:
+  cloud_provider:
+    name: external-aws
+    useInstanceMetadataHostname: true/false
+```
+
+Existing clusters that use an **External** cloud provider will set `--cloud-provider=external` for Kubernetes components but won't set the node name.
+
+3. Install the AWS cloud controller manager after the cluster finishes provisioning. Note that the cluster isn't successfully provisioned and nodes are still in an `uninitialized` state until you deploy the cloud controller manager. This can be done manually, or via [Helm charts in UI](#helm-chart-installation-from-ui).
+
+Refer to the official AWS upstream documentation for the [cloud controller manager](https://kubernetes.github.io/cloud-provider-aws).
+
+
+### Helm Chart Installation from CLI
+
+
+Official upstream docs for [Helm chart installation](https://github.com/kubernetes/cloud-provider-aws/tree/master/charts/aws-cloud-controller-manager) can be found on GitHub.
+
+1. Add the Helm repository:
+
+```shell
+helm repo add aws-cloud-controller-manager https://kubernetes.github.io/cloud-provider-aws
+helm repo update
+```
+
+2. 
Create a `values.yaml` file with the following contents to override the default `values.yaml`: + +```yaml +# values.yaml +hostNetworking: true +tolerations: + - effect: NoSchedule + key: node.cloudprovider.kubernetes.io/uninitialized + value: 'true' + - effect: NoSchedule + value: 'true' + key: node-role.kubernetes.io/controlplane +nodeSelector: + node-role.kubernetes.io/controlplane: 'true' +args: + - --configure-cloud-routes=false + - --use-service-account-credentials=true + - --v=2 + - --cloud-provider=aws +clusterRoleRules: + - apiGroups: + - "" + resources: + - events + verbs: + - create + - patch + - update + - apiGroups: + - "" + resources: + - nodes + verbs: + - '*' + - apiGroups: + - "" + resources: + - nodes/status + verbs: + - patch + - apiGroups: + - "" + resources: + - services + verbs: + - list + - patch + - update + - watch + - apiGroups: + - "" + resources: + - services/status + verbs: + - list + - patch + - update + - watch + - apiGroups: + - '' + resources: + - serviceaccounts + verbs: + - create + - get + - apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - update + - watch + - apiGroups: + - "" + resources: + - endpoints + verbs: + - create + - get + - list + - watch + - update + - apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - get + - list + - watch + - update + - apiGroups: + - "" + resources: + - serviceaccounts/token + verbs: + - create +``` + +3. Install the Helm chart: + +```shell +helm upgrade --install aws-cloud-controller-manager aws-cloud-controller-manager/aws-cloud-controller-manager --values values.yaml +``` + +Verify that the Helm chart installed successfully: + +```shell +helm status -n kube-system aws-cloud-controller-manager +``` + +4. (Optional) Verify that the cloud controller manager update succeeded: + +```shell +kubectl rollout status daemonset -n kube-system aws-cloud-controller-manager +``` + + + + + +Official upstream docs for [Helm chart installation](https://github.com/kubernetes/cloud-provider-aws/tree/master/charts/aws-cloud-controller-manager) can be found on Github. + +1. Add the Helm repository: + +```shell +helm repo add aws-cloud-controller-manager https://kubernetes.github.io/cloud-provider-aws +helm repo update +``` + +2. 
Create a `values.yaml` file with the following contents, to override the default `values.yaml`: + +```yaml +# values.yaml +hostNetworking: true +tolerations: + - effect: NoSchedule + key: node.cloudprovider.kubernetes.io/uninitialized + value: 'true' + - effect: NoSchedule + value: 'true' + key: node-role.kubernetes.io/controlplane +nodeSelector: + node-role.kubernetes.io/controlplane: 'true' +args: + - --configure-cloud-routes=false + - --use-service-account-credentials=true + - --v=2 + - --cloud-provider=aws +clusterRoleRules: + - apiGroups: + - "" + resources: + - events + verbs: + - create + - patch + - update + - apiGroups: + - "" + resources: + - nodes + verbs: + - '*' + - apiGroups: + - "" + resources: + - nodes/status + verbs: + - patch + - apiGroups: + - "" + resources: + - services + verbs: + - list + - patch + - update + - watch + - apiGroups: + - "" + resources: + - services/status + verbs: + - list + - patch + - update + - watch + - apiGroups: + - '' + resources: + - serviceaccounts + verbs: + - create + - get + - apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - update + - watch + - apiGroups: + - "" + resources: + - endpoints + verbs: + - create + - get + - list + - watch + - update + - apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - get + - list + - watch + - update + - apiGroups: + - "" + resources: + - serviceaccounts/token + verbs: + - create +``` + +3. Install the Helm chart: + +```shell +helm upgrade --install aws-cloud-controller-manager -n kube-system aws-cloud-controller-manager/aws-cloud-controller-manager --values values.yaml +``` + +Verify that the Helm chart installed successfully: + +```shell +helm status -n kube-system aws-cloud-controller-manager +``` + +4. If present, edit the Daemonset to remove the default node selector `node-role.kubernetes.io/control-plane: ""`: + +```shell +kubectl edit daemonset aws-cloud-controller-manager -n kube-system +``` + +5. (Optional) Verify that the cloud controller manager update succeeded: + +```shell +kubectl rollout status daemonset -n kube-system aws-cloud-controller-manager +``` + + + + +### Helm Chart Installation from UI + + + + +1. Click **☰**, then select the name of the cluster from the left navigation. + +2. Select **Apps** > **Repositories**. + +3. Click the **Create** button. + +4. Enter `https://kubernetes.github.io/cloud-provider-aws` in the **Index URL** field. + +5. Select **Apps** > **Charts** from the left navigation and install **aws-cloud-controller-manager**. + +6. Select the namespace, `kube-system`, and enable **Customize Helm options before install**. + +7. Add the following container arguments: + +```yaml + - '--use-service-account-credentials=true' + - '--configure-cloud-routes=false' +``` + +8. Add `get` to `verbs` for `serviceaccounts` resources in `clusterRoleRules`. This allows the cloud controller manager to get service accounts upon startup. + +```yaml + - apiGroups: + - '' + resources: + - serviceaccounts + verbs: + - create + - get +``` + +9. Rancher-provisioned RKE nodes are tainted `node-role.kubernetes.io/controlplane`. 
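If you want to double-check the taints on your nodes first, `kubectl describe` shows them per node (read-only; the exact output layout varies by kubectl version):

```shell
kubectl describe nodes | grep -A1 -i taints
```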
Update tolerations and the nodeSelector: + +```yaml +tolerations: + - effect: NoSchedule + key: node.cloudprovider.kubernetes.io/uninitialized + value: 'true' + - effect: NoSchedule + value: 'true' + key: node-role.kubernetes.io/controlplane + +``` + +```yaml +nodeSelector: + node-role.kubernetes.io/controlplane: 'true' +``` + +:::note + +There's currently a [known issue](https://github.com/rancher/dashboard/issues/9249) where nodeSelector can't be updated from the Rancher UI. Continue installing the chart and then edit the Daemonset manually to set the `nodeSelector`: + +```yaml +nodeSelector: + node-role.kubernetes.io/controlplane: 'true' +``` + +::: + +10. Install the chart and confirm that the Daemonset `aws-cloud-controller-manager` is running. Verify `aws-cloud-controller-manager` pods are running in target namespace (`kube-system` unless modified in step 6). + + + + + +1. Click **☰**, then select the name of the cluster from the left navigation. + +2. Select **Apps** > **Repositories**. + +3. Click the **Create** button. + +4. Enter `https://kubernetes.github.io/cloud-provider-aws` in the **Index URL** field. + +5. Select **Apps** > **Charts** from the left navigation and install **aws-cloud-controller-manager**. + +6. Select the namespace, `kube-system`, and enable **Customize Helm options before install**. + +7. Add the following container arguments: + +```yaml + - '--use-service-account-credentials=true' + - '--configure-cloud-routes=false' +``` + +8. Add `get` to `verbs` for `serviceaccounts` resources in `clusterRoleRules`. This allows the cloud controller manager to get service accounts upon startup: + +```yaml + - apiGroups: + - '' + resources: + - serviceaccounts + verbs: + - create + - get +``` + +9. Rancher-provisioned RKE nodes are tainted `node-role.kubernetes.io/controlplane`. Update tolerations and the nodeSelector: + +```yaml +tolerations: + - effect: NoSchedule + key: node.cloudprovider.kubernetes.io/uninitialized + value: 'true' + - effect: NoSchedule + value: 'true' + key: node-role.kubernetes.io/controlplane + +``` + +```yaml +nodeSelector: + node-role.kubernetes.io/controlplane: 'true' +``` + +:::note + +There's currently a [known issue](https://github.com/rancher/dashboard/issues/9249) where `nodeSelector` can't be updated from the Rancher UI. Continue installing the chart and then Daemonset manually to set the `nodeSelector`: + +``` yaml +nodeSelector: + node-role.kubernetes.io/controlplane: 'true' +``` + +::: + +10. Install the chart and confirm that the Daemonset `aws-cloud-controller-manager` deploys successfully: + +```shell +kubectl rollout status daemonset -n kube-system aws-cloud-controller-manager +``` + + + diff --git a/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-to-out-of-tree-amazon.md b/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-to-out-of-tree-amazon.md new file mode 100644 index 000000000000..c65bef6ec155 --- /dev/null +++ b/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-to-out-of-tree-amazon.md @@ -0,0 +1,196 @@ +--- +title: Migrating Amazon In-tree to Out-of-tree +--- + + + + + +Kubernetes is moving away from maintaining cloud providers in-tree. In Kubernetes 1.27 and later, the in-tree cloud providers have been removed. 
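Before you start, it can help to record the provider ID that the in-tree provider set on each node, so you can confirm it survives the migration. This is a read-only check; step 6 below uses `kubectl describe nodes` for the same purpose:

```shell
kubectl get nodes -o custom-columns='NAME:.metadata.name,PROVIDER-ID:.spec.providerID'
```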
+ +You can migrate from an in-tree to an out-of-tree AWS cloud provider on Kubernetes 1.26 and earlier. All existing clusters must migrate prior to upgrading to v1.27 in order to stay functional. + +To migrate from the in-tree cloud provider to the out-of-tree AWS cloud provider, you must stop the existing cluster's kube controller manager and install the AWS cloud controller manager. There are many ways to do this. Refer to the official AWS documentation on the [external cloud controller manager](https://cloud-provider-aws.sigs.k8s.io/getting_started/) for details. + +If it's acceptable to have some downtime, you can [switch to an external cloud provider](./amazon.md#using-the-out-of-tree-aws-cloud-provider-for-rke), which removes in-tree components and then deploy charts to install the AWS cloud controller manager. + +If your setup can't tolerate any control plane downtime, you must enable leader migration. This facilitates a smooth transition from the controllers in the kube controller manager to their counterparts in the cloud controller manager. Refer to the official AWS documentation on [Using leader migration](https://cloud-provider-aws.sigs.k8s.io/getting_started/) for more details. + +:::note Important: +The Kubernetes [cloud controller migration documentation](https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/#before-you-begin) states that it's possible to migrate with the same Kubernetes version, but assumes that the migration is part of a Kubernetes upgrade. Refer to the Kubernetes documentation on [migrating to use the cloud controller manager](https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/) to see if you need to customize your setup before migrating. Confirm your [migration configuration values](https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/#default-configuration). If your cloud provider provides an implementation of the Node IPAM controller, you also need to [migrate the IPAM controller](https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/#node-ipam-controller-migration). +::: + + + + +1. Update the cluster config to enable leader migration: + +```yaml +spec: + rkeConfig: + machineSelectorConfig: + - config: + kube-controller-manager-arg: + - enable-leader-migration + machineLabelSelector: + matchExpressions: + - key: rke.cattle.io/control-plane-role + operator: In + values: + - 'true' +``` + +Note that the cloud provider is still `aws` at this step: + +```yaml +spec: + rkeConfig: + machineGlobalConfig: + cloud-provider-name: aws +``` + +2. Cordon control plane nodes so that AWS cloud controller pods run on nodes only after upgrading to the external cloud provider: + +```shell +kubectl cordon -l "node-role.kubernetes.io/controlplane=true" +``` + +3. To install the AWS cloud controller manager with leader migration enabled, follow Steps 1-3 for [deploying the cloud controller manager chart](./amazon.md#using-out-of-tree-aws-cloud-provider-for-rke2) +From Kubernetes 1.22 onwards, the kube-controller-manager will utilize a default configuration which will satisfy the controller-to-manager migration. +Update container args of the `aws-cloud-controller-manager` under `spec.rkeConfig.additionalManifest` to enable leader migration: + +```shell +- '--enable-leader-migration=true' +``` + +4. 
Install the chart and confirm that the Daemonset `aws-cloud-controller-manager` successfully deployed: + +```shell +kubectl rollout status daemonset -n kube-system aws-cloud-controller-manager +``` + +5. Update the provisioning cluster to change the cloud provider and remove leader migration args from the kube controller. +If upgrading the Kubernetes version, set the Kubernetes version as well in the `spec.kubernetesVersion` section of the cluster YAML file + +:::note Important + +Only remove `cloud-provider-name: aws` if not relying on the rke2 supervisor to correctly set the providerID. + +::: + +Remove `enable-leader-migration` if you don't want it enabled in your cluster: + +```yaml +spec: + rkeConfig: + machineGlobalConfig: + cloud-provider-name: external +``` + +Remove `enable-leader-migration` from: + +```yaml +spec: + rkeConfig: + machineSelectorConfig: + - config: + kube-controller-manager-arg: + - enable-leader-migration + machineLabelSelector: + matchExpressions: + - key: rke.cattle.io/control-plane-role + operator: In + values: + - 'true' +``` + +:::tip +You can also disable leader migration after the upgrade, as leader migration is no longer required due to only one cloud-controller-manager and can be removed. +Upgrade the chart and remove the following section from the container arguments: + +```yaml +- --enable-leader-migration=true +``` +::: + +Verify the cloud controller manager update was successfully rolled out with the following command: + +```shell +kubectl rollout status daemonset -n kube-system aws-cloud-controller-manager +``` + +6. The cloud provider is responsible for setting the ProviderID of the node. Check if all nodes are initialized with the ProviderID: + +```shell +kubectl describe nodes | grep "ProviderID" +``` + + + + + +1. Update the cluster config to enable leader migration in `cluster.yml`: + +```yaml +services: + kube-controller: + extra_args: + enable-leader-migration: "true" +``` + +Note that the cloud provider is still `aws` at this step: + +```yaml +cloud_provider: + name: aws +``` + +2. Cordon the control plane nodes, so that AWS cloud controller pods run on nodes only after upgrading to the external cloud provider: + +```shell +kubectl cordon -l "node-role.kubernetes.io/controlplane=true" +``` + +3. To install the AWS cloud controller manager, you must enable leader migration and follow the same steps as when installing AWS on a new cluster. To enable leader migration, add the following to the container arguments in step 7 while following the [steps to install the chart](./amazon.md#helm-chart-installation-from-ui-for-rke): + +```yaml +- '--enable-leader-migration=true' +``` + +4. Confirm that the chart is installed but that the new pods aren't running yet due to cordoned controlplane nodes. After updating the cluster in the next step, RKE will upgrade and uncordon each node, and schedule `aws-controller-manager` pods. + +5. Update `cluster.yml` to change the cloud provider and remove the leader migration arguments from the kube-controller. + + Selecting **External Amazon (out-of-tree)** sets `--cloud-provider=external` and lets you enable `useInstanceMetadataHostname`. You must enable `useInstanceMetadataHostname` for node-driver clusters and for custom clusters if not you don't provide a custom node name via `--node-name`. 
Enabling `useInstanceMetadataHostname` will query ec2 metadata service and set `/hostname` as `hostname-override` for `kubelet` and `kube-proxy`: + +```yaml +rancher_kubernetes_engine_config: + cloud_provider: + name: external-aws + useInstanceMetadataHostname: true/false +``` + + Remove `enable-leader-migration` if you don't want it enabled in your cluster: + + ```yaml + services: + kube-controller: + extra_args: + enable-leader-migration: "true" + ``` + +:::tip +You can also disable leader migration after you finish the migration. Upgrade the chart and remove the following section from the container arguments: + +```yaml +- --enable-leader-migration=true +``` +::: + +6. If you're upgrading the cluster's Kubernetes version, set the Kubernetes version as well. + +7. Update the cluster. The `aws-cloud-controller-manager` pods should now be running. + + + + diff --git a/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-from-in-tree-to-out-of-tree.md b/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-to-out-of-tree-vsphere.md similarity index 97% rename from versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-from-in-tree-to-out-of-tree.md rename to versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-to-out-of-tree-vsphere.md index d302213118ac..a3bc9b89d2db 100644 --- a/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-from-in-tree-to-out-of-tree.md +++ b/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-to-out-of-tree-vsphere.md @@ -1,5 +1,5 @@ --- -title: Migrating vSphere In-tree Volumes to Out-of-tree +title: Migrating vSphere In-tree to Out-of-tree --- @@ -64,7 +64,7 @@ Once all nodes are tainted by the running the script, launch the Helm vSphere CP 1. Click **☰ > Cluster Management**. 1. Go to the cluster where the vSphere CPI chart will be installed and click **Explore**. 1. Click **Apps > Charts**. -1. Click **vSphere CPI**.. +1. Click **vSphere CPI**. 1. Click **Install**. 1. Fill out the required vCenter details and click **Install**. @@ -81,7 +81,7 @@ kubectl describe nodes | grep "ProviderID" 1. Click **☰ > Cluster Management**. 1. Go to the cluster where the vSphere CSI chart will be installed and click **Explore**. 1. Click **Apps > Charts**. -1. Click **vSphere CSI**.. +1. Click **vSphere CSI**. 1. Click **Install**. 1. Fill out the required vCenter details and click **Install**. 1. Check **Customize Helm options before install** and click **Next**. 
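If a node was missed by the taint script, the standard out-of-tree `uninitialized` taint can also be applied by hand. This is a sketch assuming the default taint key, with `<node-name>` as a placeholder:

```shell
kubectl taint node <node-name> node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
```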
diff --git a/versioned_sidebars/version-2.8-sidebars.json b/versioned_sidebars/version-2.8-sidebars.json index c75433939841..1237d7f7b409 100644 --- a/versioned_sidebars/version-2.8-sidebars.json +++ b/versioned_sidebars/version-2.8-sidebars.json @@ -464,11 +464,12 @@ }, "items": [ "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon", + "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-to-out-of-tree-amazon", "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/azure", "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/google-compute-engine", "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/configure-in-tree-vsphere", "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/configure-out-of-tree-vsphere", - "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-from-in-tree-to-out-of-tree" + "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-to-out-of-tree-vsphere" ] }, "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters" From 7ea840eefe2914c691a45f7a1949b7b3413d8b1b Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Wed, 6 Dec 2023 11:50:53 -0800 Subject: [PATCH 53/65] Add 2.8 entry to versions table (#1002) --- src/pages/versions.md | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/src/pages/versions.md b/src/pages/versions.md index fde34a42f0f6..8d8301ade9ca 100644 --- a/src/pages/versions.md +++ b/src/pages/versions.md @@ -6,6 +6,17 @@ title: Rancher Documentation Versions ### Current versions +Below are the documentation and release notes for the currently released version of Rancher 2.8.x: + +
 v2.7.8 | Documentation | Release Notes | Support Matrix
 v2.7.7 | Documentation | Release Notes | Support Matrix
+v2.8.0 | Documentation | Release Notes | Support Matrix
+ Below are the documentation and release notes for the currently released version of Rancher 2.7.x: From 3b8defdbb1f16701190241d6f443f4d723994259 Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Wed, 6 Dec 2023 11:51:10 -0800 Subject: [PATCH 54/65] Update webhook version table (#1006) * Update webhook version table * Bump 2.8 webhook version * Sync 2.8 table with latest table * Bump webhook version --- docs/reference-guides/rancher-webhook.md | 11 +++-------- .../version-2.7/reference-guides/rancher-webhook.md | 6 +++++- .../version-2.8/reference-guides/rancher-webhook.md | 11 +++-------- 3 files changed, 11 insertions(+), 17 deletions(-) diff --git a/docs/reference-guides/rancher-webhook.md b/docs/reference-guides/rancher-webhook.md index 800f3c92c9d6..afcf518b3009 100644 --- a/docs/reference-guides/rancher-webhook.md +++ b/docs/reference-guides/rancher-webhook.md @@ -15,16 +15,11 @@ Each Rancher version is designed to be compatible with a single version of the w **Note:** Rancher manages deployment and upgrade of the webhook. Under most circumstances, no user intervention should be needed to ensure that the webhook version is compatible with the version of Rancher that you are running. + + | Rancher Version | Webhook Version | |-----------------|:---------------:| -| v2.7.0 | v0.3.0 | -| v2.7.1 | v0.3.0 | -| v2.7.2 | v0.3.2 | -| v2.7.3 | v0.3.3 | -| v2.7.4 | v0.3.4 | -| v2.7.5 | v0.3.5 | -| v2.7.6 | v0.3.5 | - +| v2.8.0 | v0.4.2 | ## Why Do We Need It? diff --git a/versioned_docs/version-2.7/reference-guides/rancher-webhook.md b/versioned_docs/version-2.7/reference-guides/rancher-webhook.md index 06b89cabdd87..af8f7f95892a 100644 --- a/versioned_docs/version-2.7/reference-guides/rancher-webhook.md +++ b/versioned_docs/version-2.7/reference-guides/rancher-webhook.md @@ -15,6 +15,8 @@ Each Rancher version is designed to be compatible with a single version of the w **Note:** Rancher manages deployment and upgrade of the webhook. Under most circumstances, no user intervention should be needed to ensure that the webhook version is compatible with the version of Rancher that you are running. + + | Rancher Version | Webhook Version | |-----------------|:---------------:| | v2.7.0 | v0.3.0 | @@ -24,7 +26,9 @@ Each Rancher version is designed to be compatible with a single version of the w | v2.7.4 | v0.3.4 | | v2.7.5 | v0.3.5 | | v2.7.6 | v0.3.5 | - +| v2.7.7 | v0.3.6 | +| v2.7.8 | v0.3.6 | +| v2.7.9 | v0.3.6 | ## Why Do We Need It? diff --git a/versioned_docs/version-2.8/reference-guides/rancher-webhook.md b/versioned_docs/version-2.8/reference-guides/rancher-webhook.md index 800f3c92c9d6..afcf518b3009 100644 --- a/versioned_docs/version-2.8/reference-guides/rancher-webhook.md +++ b/versioned_docs/version-2.8/reference-guides/rancher-webhook.md @@ -15,16 +15,11 @@ Each Rancher version is designed to be compatible with a single version of the w **Note:** Rancher manages deployment and upgrade of the webhook. Under most circumstances, no user intervention should be needed to ensure that the webhook version is compatible with the version of Rancher that you are running. + + | Rancher Version | Webhook Version | |-----------------|:---------------:| -| v2.7.0 | v0.3.0 | -| v2.7.1 | v0.3.0 | -| v2.7.2 | v0.3.2 | -| v2.7.3 | v0.3.3 | -| v2.7.4 | v0.3.4 | -| v2.7.5 | v0.3.5 | -| v2.7.6 | v0.3.5 | - +| v2.8.0 | v0.4.2 | ## Why Do We Need It? 
From 437ab199daf36d3f34ea1f73e66f573ad288703c Mon Sep 17 00:00:00 2001 From: Marty Hernandez Avedon Date: Wed, 6 Dec 2023 16:17:31 -0500 Subject: [PATCH 55/65] correcting capitalization: helm > Helm (#1018) --- .../upgrades.md | 2 +- .../rancher-behind-an-http-proxy/install-rancher.md | 4 ++-- .../migrate-rancher-to-new-cluster.md | 2 +- .../deploy-apps-across-clusters/fleet.md | 2 +- .../helm-charts-in-rancher/create-apps.md | 2 +- docs/integrations-in-rancher/fleet/overview.md | 2 +- docs/pages-for-subheaders/helm-charts-in-rancher.md | 2 +- .../monitoring-best-practices.md | 4 ++-- .../monitoring-v2-configuration/helm-chart-options.md | 8 +++----- docs/reference-guides/rancher-webhook.md | 2 +- src/components/HomepageFeatures/poc-index.tsx | 2 +- versioned_docs/version-2.0-2.4/faq/technical-items.md | 4 ++-- .../air-gap-helm2/install-rancher.md | 11 +++++++++-- .../helm2/helm-rancher/chart-options.md | 2 +- .../helm2/rke-add-on/api-auditing.md | 2 +- .../helm2/rke-add-on/layer-4-lb/nlb.md | 2 +- .../helm2/rke-add-on/layer-7-lb/alb.md | 4 ++-- .../helm2/rke-add-on/layer-7-lb/nginx.md | 2 +- .../advanced-use-cases/helm2/rke-add-on/proxy.md | 2 +- .../rke-add-on/troubleshooting/404-default-backend.md | 2 +- .../troubleshooting/generic-troubleshooting.md | 2 +- .../rke-add-on/troubleshooting/job-complete-status.md | 2 +- .../advanced-use-cases/rke-add-on/layer-4-lb.md | 4 ++-- .../advanced-use-cases/rke-add-on/layer-7-lb.md | 2 +- .../upgrades/helm2.md | 2 +- .../rancher-behind-an-http-proxy/install-rancher.md | 4 ++-- .../helm-charts-in-rancher/adding-catalogs.md | 2 +- .../helm-charts-in-rancher/catalog-config.md | 2 +- .../helm-charts-in-rancher/creating-apps.md | 2 +- .../pages-for-subheaders/helm2-helm-init.md | 2 +- .../helm2-rke-add-on-layer-4-lb.md | 2 +- .../helm2-rke-add-on-layer-7-lb.md | 2 +- .../helm2-rke-add-on-troubleshooting.md | 2 +- .../pages-for-subheaders/helm2-rke-add-on.md | 4 ++-- .../version-2.0-2.4/pages-for-subheaders/helm2.md | 2 +- .../version-2.0-2.4/pages-for-subheaders/upgrades.md | 6 +++--- .../installation-references/helm-chart-options.md | 2 +- .../upgrades.md | 2 +- .../rancher-behind-an-http-proxy/install-rancher.md | 4 ++-- .../migrate-rancher-to-new-cluster.md | 2 +- .../deploy-apps-across-clusters/fleet.md | 2 +- .../pages-for-subheaders/fleet-gitops-at-scale.md | 2 +- .../pages-for-subheaders/helm-charts-in-rancher.md | 4 ++-- .../monitoring-best-practices.md | 4 ++-- .../installation-references/helm-chart-options.md | 2 +- .../monitoring-v2-configuration/helm-chart-options.md | 8 +++----- .../upgrades.md | 2 +- .../rancher-behind-an-http-proxy/install-rancher.md | 4 ++-- .../migrate-rancher-to-new-cluster.md | 2 +- .../deploy-apps-across-clusters/fleet.md | 2 +- .../helm-charts-in-rancher/create-apps.md | 2 +- .../pages-for-subheaders/fleet-gitops-at-scale.md | 2 +- .../pages-for-subheaders/helm-charts-in-rancher.md | 2 +- .../monitoring-best-practices.md | 4 ++-- .../monitoring-v2-configuration/helm-chart-options.md | 8 +++----- .../upgrades.md | 2 +- .../rancher-behind-an-http-proxy/install-rancher.md | 4 ++-- .../migrate-rancher-to-new-cluster.md | 2 +- .../deploy-apps-across-clusters/fleet.md | 2 +- .../helm-charts-in-rancher/create-apps.md | 2 +- .../pages-for-subheaders/fleet-gitops-at-scale.md | 2 +- .../pages-for-subheaders/helm-charts-in-rancher.md | 2 +- .../monitoring-best-practices.md | 4 ++-- .../monitoring-v2-configuration/helm-chart-options.md | 8 +++----- .../version-2.7/reference-guides/rancher-webhook.md | 2 +- .../upgrades.md | 2 
+- .../rancher-behind-an-http-proxy/install-rancher.md | 4 ++-- .../migrate-rancher-to-new-cluster.md | 2 +- .../deploy-apps-across-clusters/fleet.md | 2 +- .../helm-charts-in-rancher/create-apps.md | 2 +- .../integrations-in-rancher/fleet/overview.md | 2 +- .../pages-for-subheaders/helm-charts-in-rancher.md | 2 +- .../monitoring-best-practices.md | 4 ++-- .../monitoring-v2-configuration/helm-chart-options.md | 8 +++----- .../version-2.8/reference-guides/rancher-webhook.md | 2 +- 75 files changed, 111 insertions(+), 114 deletions(-) diff --git a/docs/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades.md b/docs/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades.md index 7d5f62e329e3..824683b1ea83 100644 --- a/docs/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades.md +++ b/docs/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades.md @@ -55,7 +55,7 @@ You'll use the backup as a restore point if something goes wrong during upgrade. ### 2. Update the Helm chart repository -1. Update your local helm repo cache. +1. Update your local Helm repo cache. ``` helm repo update diff --git a/docs/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher.md b/docs/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher.md index 122b0686a870..9d4a4c8393e5 100644 --- a/docs/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher.md +++ b/docs/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher.md @@ -20,7 +20,7 @@ sudo ./get_helm.sh ### Install cert-manager -Add the cert-manager helm repository: +Add the cert-manager Helm repository: ``` helm repo add jetstack https://charts.jetstack.io @@ -63,7 +63,7 @@ kubectl rollout status deployment -n cert-manager cert-manager-webhook ### Install Rancher -Next you can install Rancher itself. First add the helm repository: +Next you can install Rancher itself. First, add the Helm repository: ``` helm repo add rancher-latest https://releases.rancher.com/server-charts/latest diff --git a/docs/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md b/docs/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md index 978d15a6ea4e..4b58db9ead31 100644 --- a/docs/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md +++ b/docs/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md @@ -32,7 +32,7 @@ Since Rancher can be installed on any Kubernetes cluster, you can use this backu ### 1. Install the rancher-backup Helm chart Install the [rancher-backup chart](https://github.com/rancher/backup-restore-operator/tags), using a version in the 2.x.x major version range: - 1. Add the helm repository: + 1. 
Add the Helm repository: ```bash helm repo add rancher-charts https://charts.rancher.io diff --git a/docs/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md b/docs/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md index e9d4dec7faa2..2a47a9cdcfb7 100644 --- a/docs/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md +++ b/docs/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md @@ -54,7 +54,7 @@ For details on using Fleet behind a proxy, see [this page.](../../../integration In order for Helm charts with dependencies to deploy successfully, you must run a manual command (as listed below), as it is up to the user to fulfill the dependency list. If you do not do this and proceed to clone your repository and run `helm install`, your installation will fail because the dependencies will be missing. -The Helm chart in the git repository must include its dependencies in the charts subdirectory. You must either manually run `helm dependencies update $chart` OR run `helm dependencies build $chart` locally, then commit the complete charts directory to your git repository. Note that you will update your commands with the applicable parameters. +The Helm chart in the git repository must include its dependencies in the charts subdirectory. You must either manually run `helm dependencies update $chart` or run `helm dependencies build $chart` locally, then commit the complete charts directory to your git repository. Note that you will update your commands with the applicable parameters. ## Troubleshooting diff --git a/docs/how-to-guides/new-user-guides/helm-charts-in-rancher/create-apps.md b/docs/how-to-guides/new-user-guides/helm-charts-in-rancher/create-apps.md index 300e152bb22e..43dfff354280 100644 --- a/docs/how-to-guides/new-user-guides/helm-charts-in-rancher/create-apps.md +++ b/docs/how-to-guides/new-user-guides/helm-charts-in-rancher/create-apps.md @@ -22,7 +22,7 @@ Native Helm charts include an application along with other software required to ### Rancher Charts -Rancher charts are native helm charts with two files that enhance user experience: `app-readme.md` and `questions.yaml`. Read more about them in [Additional Files for Rancher Charts.](#additional-files-for-rancher-charts) +Rancher charts are native Helm charts with two files that enhance user experience: `app-readme.md` and `questions.yaml`. Read more about them in [Additional Files for Rancher Charts.](#additional-files-for-rancher-charts) Rancher charts add simplified chart descriptions and configuration forms to make the application deployment easy. Rancher users do not need to read through the entire list of Helm variables to understand how to launch an application. diff --git a/docs/integrations-in-rancher/fleet/overview.md b/docs/integrations-in-rancher/fleet/overview.md index b7e1806fb586..2c21eaa0ae48 100644 --- a/docs/integrations-in-rancher/fleet/overview.md +++ b/docs/integrations-in-rancher/fleet/overview.md @@ -51,7 +51,7 @@ For details on using Fleet behind a proxy, see the [Using Fleet Behind a Proxy]( In order for Helm charts with dependencies to deploy successfully, you must run a manual command (as listed below), as it is up to the user to fulfill the dependency list. If you do not do this and proceed to clone your repository and run `helm install`, your installation will fail because the dependencies will be missing. -The Helm chart in the git repository must include its dependencies in the charts subdirectory. 
You must either manually run `helm dependencies update $chart` OR run `helm dependencies build $chart` locally, then commit the complete charts directory to your git repository. Note that you will update your commands with the applicable parameters +The Helm chart in the git repository must include its dependencies in the charts subdirectory. You must either manually run `helm dependencies update $chart` or run `helm dependencies build $chart` locally, then commit the complete charts directory to your git repository. Note that you will update your commands with the applicable parameters. ## Troubleshooting diff --git a/docs/pages-for-subheaders/helm-charts-in-rancher.md b/docs/pages-for-subheaders/helm-charts-in-rancher.md index f0d7ba63fcae..509dc5ea2b97 100644 --- a/docs/pages-for-subheaders/helm-charts-in-rancher.md +++ b/docs/pages-for-subheaders/helm-charts-in-rancher.md @@ -74,7 +74,7 @@ Apps managed by the Cluster Manager (the global view in the legacy Rancher UI) s From the left sidebar select _"Repositories"_. -These items represent helm repositories, and can be either traditional helm endpoints which have an index.yaml, or git repositories which will be cloned and can point to a specific branch. In order to use custom charts, simply add your repository here and they will become available in the Charts tab under the name of the repository. +These items represent Helm repositories, and can be either traditional Helm endpoints which have an index.yaml, or git repositories which will be cloned and can point to a specific branch. In order to use custom charts, simply add your repository here and they will become available in the Charts tab under the name of the repository. To add a private CA for Helm Chart repositories: diff --git a/docs/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md b/docs/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md index a6e34b03a505..19609e66a854 100644 --- a/docs/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md +++ b/docs/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md @@ -66,7 +66,7 @@ In general, you want to scrape data from all the workloads running in your clust ### About Prometheus Exporters -A lot of 3rd party workloads like databases, queues or web-servers either already support exposing metrics in a Prometheus format, or there are so called exporters available that translate between the tool's metrics and the format that Prometheus understands. Usually you can add these exporters as additional sidecar containers to the workload's Pods. A lot of helm charts already include options to deploy the correct exporter. Additionally you can find a curated list of exports by SysDig on [promcat.io](https://promcat.io/) and on [ExporterHub](https://exporterhub.io/). +Many 3rd party workloads, such as databases, queues, and web-servers, already support exposing metrics in a Prometheus format, or offer exporters that translate between the tool's metrics and a format that Prometheus understands. You can usually add these exporters as additional sidecar containers to the workload's Pods. Many Helm charts already include options to deploy the correct exporter. You can find a curated list of exporters by SysDig on [promcat.io](https://promcat.io/) and on [ExporterHub](https://exporterhub.io/).
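For illustration, a minimal sketch of the sidecar-exporter pattern described above, using Redis and the community `oliver006/redis_exporter` image as a stand-in; the workload name, images, and ports here are examples, not anything Rancher ships:

```bash
# Deploy a workload with a metrics-translating sidecar; Prometheus can then
# scrape the pod on port 9121 once a ServiceMonitor or PodMonitor targets it.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-with-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-with-exporter
  template:
    metadata:
      labels:
        app: redis-with-exporter
    spec:
      containers:
      - name: redis
        image: redis:7
        ports:
        - containerPort: 6379
      - name: metrics-exporter        # sidecar exposing Redis stats in Prometheus format
        image: oliver006/redis_exporter:latest
        ports:
        - containerPort: 9121
          name: metrics
EOF
```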
### Prometheus support in Programming Languages and Frameworks @@ -74,7 +74,7 @@ To get your own custom application metrics into Prometheus, you have to collect ### ServiceMonitors and PodMonitors -Once all your workloads expose metrics in a Prometheus format, you have to configure Prometheus to scrape it. Under the hood Rancher is using the [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator). This makes it easy to add additional scraping targets with ServiceMonitors and PodMonitors. A lot of helm charts already include an option to create these monitors directly. You can also find more information in the Rancher documentation. +Once all your workloads expose metrics in a Prometheus format, you have to configure Prometheus to scrape it. Under the hood Rancher is using the [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator). This makes it easy to add additional scraping targets with ServiceMonitors and PodMonitors. Many Helm charts let you create these monitors directly. You can also find more information in the Rancher documentation. ### Prometheus Push Gateway diff --git a/docs/reference-guides/monitoring-v2-configuration/helm-chart-options.md b/docs/reference-guides/monitoring-v2-configuration/helm-chart-options.md index 112c41c9ea08..42345561b204 100644 --- a/docs/reference-guides/monitoring-v2-configuration/helm-chart-options.md +++ b/docs/reference-guides/monitoring-v2-configuration/helm-chart-options.md @@ -49,13 +49,11 @@ An example of where this might be used is with Istio. For more information, see ## Configuring Applications Packaged within Monitoring v2 -We deploy kube-state-metrics and node-exporter with monitoring v2. Node exporter are deployed as DaemonSets. In the monitoring v2 helm chart, in the values.yaml, each of the things are deployed as sub charts. +We deploy kube-state-metrics and node-exporter with monitoring v2. The node exporters are deployed as DaemonSets. Each of these entities is deployed as a sub-chart through the monitoring v2 Helm chart's values.yaml. -We also deploy grafana which is not managed by prometheus. +We also deploy Grafana, which is not managed by Prometheus. -If you look at what the helm chart is doing like in kube-state-metrics, there are plenty more values that you can set that aren’t exposed in the top level chart. -But in the top level chart you can add values that override values that exist in the sub chart. +Many values aren’t exposed in the top level chart. However, you can add values to the top level chart to override values that exist in the sub-charts. ### Increase the Replicas of Alertmanager diff --git a/docs/reference-guides/rancher-webhook.md b/docs/reference-guides/rancher-webhook.md index afcf518b3009..0060448992de 100644 --- a/docs/reference-guides/rancher-webhook.md +++ b/docs/reference-guides/rancher-webhook.md @@ -60,7 +60,7 @@ helm upgrade --reuse-values rancher-webhook rancher-charts/rancher-webhook -n c ``` **Note:** This temporary workaround may violate an environment's security policy. This workaround also requires that port 9443 is unused on the host network. -**Note:** Helm uses secrets by default. This is a datatype that some webhook versions validate to store information. In these cases, directly update the deployment with the hostNetwork=true value using kubectl, then run the helm commands listed above to avoid drift between the helm configuration and the actual state in the cluster. +**Note:** Helm uses secrets by default.
This is a datatype that some webhook versions validate to store information. In these cases, directly update the deployment with the hostNetwork=true value using kubectl, then run the Helm commands listed above to prevent drift between the Helm configuration and the actual state of the cluster. ### Private GKE Cluster diff --git a/src/components/HomepageFeatures/poc-index.tsx b/src/components/HomepageFeatures/poc-index.tsx index 9a0736c15065..75d93bacca6f 100644 --- a/src/components/HomepageFeatures/poc-index.tsx +++ b/src/components/HomepageFeatures/poc-index.tsx @@ -44,7 +44,7 @@ export default function HomepageFeatures(): JSX.Element { title="Install Rancher" icon={} to="/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli" - description="Quick way to helm install Rancher in a Kubernetes cluster" + description="Quick way to Helm install Rancher in a Kubernetes cluster" /> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8** > ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to move to using the helm chart. +>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to start using the Helm chart. Kubernetes install (RKE add-on): ``` @@ -54,7 +54,7 @@ New password for default administrator (user-xxxxx): > #### **Important: RKE add-on install is only supported up to Rancher v2.0.8** > ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to move to using the helm chart. +>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to start using the Helm chart. Kubernetes install (RKE add-on): ``` diff --git a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/air-gap-helm2/install-rancher.md b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/air-gap-helm2/install-rancher.md index d8f931876c92..8b01048f6603 100644 --- a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/air-gap-helm2/install-rancher.md +++ b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/air-gap-helm2/install-rancher.md @@ -13,12 +13,19 @@ Rancher recommends installing Rancher on a Kubernetes cluster. A highly availabl From a system that has access to the internet, fetch the latest Helm chart and copy the resulting manifests to a system that has access to the Rancher server cluster. -1. If you haven't already, initialize `helm` locally on a workstation that has internet access. Note: Refer to the [Helm version requirements](../../../resources/choose-a-rancher-version.md) to choose a version of Helm to install Rancher. +1. 
If you haven't already, initialize Helm locally on a workstation that has internet access. + +:::note + +Refer to the [Helm version requirements](../../../resources/choose-a-rancher-version.md) to choose a version of Helm to install Rancher. + +::: + ```plain helm init -c ``` -2. Use `helm repo add` command to add the Helm chart repository that contains charts to install Rancher. For more information about the repository choices and which is best for your use case, see [Choosing a Version of Rancher](../../../resources/choose-a-rancher-version.md). +2. Use the `helm repo add` command to add the Helm chart repository that contains charts to install Rancher. For more information about the repository choices and which is best for your use case, see [Choosing a Version of Rancher](../../../resources/choose-a-rancher-version.md). - Latest: Recommended for trying out the newest features ``` helm repo add rancher-latest https://releases.rancher.com/server-charts/latest diff --git a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/helm-rancher/chart-options.md b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/helm-rancher/chart-options.md index 0587d40c2b60..9e976ffe7650 100644 --- a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/helm-rancher/chart-options.md +++ b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/helm-rancher/chart-options.md @@ -114,7 +114,7 @@ Example on setting a static proxy header with `ingress.configurationSnippet`. Th ### HTTP Proxy -Rancher requires internet access for some functionality (helm charts). Use `proxy` to set your proxy server. +Rancher requires internet access for some functionality, such as Helm charts. Use `proxy` to set your proxy server. Add your IP exceptions to the `noProxy` list. Make sure you add the Service cluster IP range (default: 10.43.0.1/16) and any worker cluster `controlplane` nodes. Rancher supports CIDR notation ranges in this list. diff --git a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/api-auditing.md b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/api-auditing.md index 295410613a78..3b3e38a41a04 100644 --- a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/api-auditing.md +++ b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/api-auditing.md @@ -6,7 +6,7 @@ title: Enable API Auditing > >Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install ](../../../../resources/choose-a-rancher-version.md). > ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../../../../install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to move to using the helm chart. +>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../../../../install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to start using the Helm chart. 
If you're using RKE to install Rancher, you can use directives to enable API Auditing for your Rancher install. You can know what happened, when it happened, who initiated it, and what cluster it affected. API auditing records all requests and responses to and from the Rancher API, which includes use of the Rancher UI and any other use of the Rancher API through programmatic use. diff --git a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/layer-4-lb/nlb.md b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/layer-4-lb/nlb.md index e917c2eede80..70a27126e2f2 100644 --- a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/layer-4-lb/nlb.md +++ b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/layer-4-lb/nlb.md @@ -6,7 +6,7 @@ title: Amazon NLB Configuration > >Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install ](../../../../../resources/helm-version-requirements.md). > ->If you are currently using the RKE add-on install method, see [Migrating from a High-availability Kubernetes install with an RKE add-on](../../../../../install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to move to using the helm chart. +>If you are currently using the RKE add-on install method, see [Migrating from a High-availability Kubernetes install with an RKE add-on](../../../../../install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to start using the Helm chart. ## Objectives diff --git a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/layer-7-lb/alb.md b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/layer-7-lb/alb.md index b62a9f788307..5ed87a205c89 100644 --- a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/layer-7-lb/alb.md +++ b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/layer-7-lb/alb.md @@ -4,9 +4,9 @@ title: Amazon ALB Configuration > #### **Important: RKE add-on install is only supported up to Rancher v2.0.8** > ->Please use the Rancher helm chart to install Kubernetes Rancher. For details, see the [Kubernetes Install ](../../../../../resources/choose-a-rancher-version.md). +>Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install ](../../../../../resources/choose-a-rancher-version.md). > ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../../../../../install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to move to using the helm chart. +>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../../../../../install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to start using the Helm chart.
## Objectives diff --git a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/layer-7-lb/nginx.md b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/layer-7-lb/nginx.md index 1b2b61808d12..05455bb1197b 100644 --- a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/layer-7-lb/nginx.md +++ b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/layer-7-lb/nginx.md @@ -6,7 +6,7 @@ title: NGINX Configuration > >Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install ](../../../../../resources/choose-a-rancher-version.md). > ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../../../../../install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to move to using the helm chart. +>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../../../../../install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to start using the Helm chart. ## Install NGINX diff --git a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/proxy.md b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/proxy.md index 2b8aea62d74f..6c91939d6619 100644 --- a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/proxy.md +++ b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/proxy.md @@ -6,7 +6,7 @@ title: HTTP Proxy Configuration > >Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install ](../../../../resources/choose-a-rancher-version.md). > ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../../../../install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to move to using the helm chart. +>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../../../../install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to start using the Helm chart. If you operate Rancher behind a proxy and you want to access services through the proxy (such as retrieving catalogs), you must provide Rancher information about your proxy. As Rancher is written in Go, it uses the common proxy environment variables as shown below. 
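The variables the passage refers to are the standard Go proxy ones. As a hedged sketch of how they are commonly supplied to the Rancher container (the proxy address and `NO_PROXY` entries below are placeholders to adjust for your environment):

```bash
# Standard Go proxy environment variables, passed to a single-node Rancher
# container for illustration; Rancher reads these at startup.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -e HTTP_PROXY="http://your_proxy_address:port" \
  -e HTTPS_PROXY="http://your_proxy_address:port" \
  -e NO_PROXY="localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,cattle-system.svc" \
  rancher/rancher:latest
```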
diff --git a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/troubleshooting/404-default-backend.md b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/troubleshooting/404-default-backend.md index d9f6d5c4385a..2c618b726e1c 100644 --- a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/troubleshooting/404-default-backend.md +++ b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/troubleshooting/404-default-backend.md @@ -6,7 +6,7 @@ title: 404 - default backend > >Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install ](../../../../../resources/helm-version-requirements.md). > ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../../../../../install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to move to using the helm chart. +>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../../../../../install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to start using the Helm chart. To debug issues around this error, you will need to download the command-line tool `kubectl`. See [Install and Set Up kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) how to download `kubectl` for your platform. diff --git a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/troubleshooting/generic-troubleshooting.md b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/troubleshooting/generic-troubleshooting.md index d86888b15f9e..9b51190c7c15 100644 --- a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/troubleshooting/generic-troubleshooting.md +++ b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/troubleshooting/generic-troubleshooting.md @@ -6,7 +6,7 @@ title: Generic troubleshooting > >Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install ](../../../../../resources/helm-version-requirements.md). > ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../../../../../install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to move to using the helm chart. +>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../../../../../install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to start using the Helm chart. Below are steps that you can follow to determine what is wrong in your cluster. 
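The steps themselves fall outside the diff context above; as a rough sketch, the usual first checks with `kubectl` look something like this:

```bash
# Confirm all nodes are Ready, then look for pods that are not Running or
# Completed, and review the most recent cluster events for errors.
kubectl get nodes -o wide
kubectl get pods --all-namespaces | grep -v 'Running\|Completed'
kubectl get events --all-namespaces --sort-by='.lastTimestamp' | tail -20
```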
diff --git a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/troubleshooting/job-complete-status.md b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/troubleshooting/job-complete-status.md index 5558c1182866..4c2b6b0be4ff 100644 --- a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/troubleshooting/job-complete-status.md +++ b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/rke-add-on/troubleshooting/job-complete-status.md @@ -6,7 +6,7 @@ title: Failed to get job complete status > >Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install ](../../../../../resources/helm-version-requirements.md). > ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../../../../../install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to move to using the helm chart. +>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../../../../../install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to start using the Helm chart. To debug issues around this error, you will need to download the command-line tool `kubectl`. See [Install and Set Up kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) how to download `kubectl` for your platform. diff --git a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/rke-add-on/layer-4-lb.md b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/rke-add-on/layer-4-lb.md index e276e9df5062..fb97bf439691 100644 --- a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/rke-add-on/layer-4-lb.md +++ b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/rke-add-on/layer-4-lb.md @@ -5,9 +5,9 @@ import SSlFaqHa from '@site/src/components/SslFaqHa' > #### **Important: RKE add-on install is only supported up to Rancher v2.0.8** > ->Please use the Rancher helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install](../../../../../pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster.md). +>Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install](../../../../../pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster.md). > ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../../../install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to move to using the helm chart. +>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../../../install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to start using the Helm chart. This procedure walks you through setting up a 3-node cluster using the Rancher Kubernetes Engine (RKE). 
The cluster's sole purpose is running pods for Rancher. The setup is based on: diff --git a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/rke-add-on/layer-7-lb.md b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/rke-add-on/layer-7-lb.md index 5d59ff2e749e..263744899555 100644 --- a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/rke-add-on/layer-7-lb.md +++ b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/rke-add-on/layer-7-lb.md @@ -7,7 +7,7 @@ import SslFaqHa from '@site/src/components/SslFaqHa' > >Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install](../../../../../pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster.md). > ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../../../install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to move to using the helm chart. +>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../../../install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to start using the Helm chart. This procedure walks you through setting up a 3-node cluster using the Rancher Kubernetes Engine (RKE). The cluster's sole purpose is running pods for Rancher. The setup is based on: diff --git a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades/helm2.md b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades/helm2.md index 78e006ea65ff..e82caffba4ac 100644 --- a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades/helm2.md +++ b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades/helm2.md @@ -45,7 +45,7 @@ of your Kubernetes cluster running Rancher server. You'll use the snapshot as a ### B. Update the Helm chart repository -1. Update your local helm repo cache. +1. Update your local Helm repo cache. ``` helm repo update diff --git a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher.md b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher.md index 58d5ba78a1ff..82b91daa23e3 100644 --- a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher.md +++ b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher.md @@ -12,7 +12,7 @@ Now that you have a running RKE cluster, you can install Rancher in it. For secu ### Install cert-manager -Add the cert-manager helm repository: +Add the cert-manager Helm repository: ``` helm repo add jetstack https://charts.jetstack.io @@ -49,7 +49,7 @@ kubectl rollout status deployment -n cert-manager cert-manager-webhook ### Install Rancher -Next you can install Rancher itself. 
First add the helm repository: +Next you can install Rancher itself. First, add the Helm repository: ``` helm repo add rancher-latest https://releases.rancher.com/server-charts/latest diff --git a/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/adding-catalogs.md b/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/adding-catalogs.md index 5797dc5034e7..a3eebe8b4651 100644 --- a/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/adding-catalogs.md +++ b/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/adding-catalogs.md @@ -17,7 +17,7 @@ The Git URL needs to be one that `git clone` [can handle](https://git-scm.com/do A Helm chart repository is an HTTP server that houses one or more packaged charts. Any HTTP server that can serve YAML files and tar files and can answer GET requests can be used as a repository server. -Helm comes with built-in package server for developer testing (helm serve). The Helm team has tested other servers, including Google Cloud Storage with website mode enabled, S3 with website mode enabled or hosting custom chart repository server using open-source projects like [ChartMuseum](https://github.com/helm/chartmuseum). +Helm comes with `helm serve`, a built-in package server for developer testing. The Helm team has tested other servers, including Google Cloud Storage with website mode enabled, S3 with website mode enabled or hosting custom chart repository server using open-source projects like [ChartMuseum](https://github.com/helm/chartmuseum). In Rancher, you can add the custom Helm chart repository with only a catalog name and the URL address of the chart repository. diff --git a/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/catalog-config.md b/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/catalog-config.md index e82cbf7dad3f..bd49676da809 100644 --- a/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/catalog-config.md +++ b/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/catalog-config.md @@ -27,7 +27,7 @@ The Git URL needs to be one that `git clone` [can handle](https://git-scm.com/do A Helm chart repository is an HTTP server that contains one or more packaged charts. Any HTTP server that can serve YAML files and tar files and can answer GET requests can be used as a repository server. -Helm comes with a built-in package server for developer testing (`helm serve`). The Helm team has tested other servers, including Google Cloud Storage with website mode enabled, S3 with website mode enabled or hosting custom chart repository server using open-source projects like [ChartMuseum](https://github.com/helm/chartmuseum). +Helm comes with `helm serve`, a built-in package server for developer testing. The Helm team has tested other servers, including Google Cloud Storage with website mode enabled, S3 with website mode enabled or hosting custom chart repository server using open-source projects like [ChartMuseum](https://github.com/helm/chartmuseum). In Rancher, you can add the custom Helm chart repository with only a catalog name and the URL address of the chart repository.
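As a sketch of what "any HTTP server" amounts to in practice, where `./mychart` and the URL are placeholders:

```bash
# Package a chart and generate the index.yaml that turns a static file host
# into a valid Helm chart repository.
helm package ./mychart                                # produces mychart-<version>.tgz
helm repo index . --url https://example.com/charts    # writes index.yaml

# Serve this directory over HTTP(S); it can then be added to Rancher as a
# catalog using just a name and the repository URL.
```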
diff --git a/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/creating-apps.md b/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/creating-apps.md index beeea68d6794..06a40ab3e0ea 100644 --- a/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/creating-apps.md +++ b/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/creating-apps.md @@ -23,7 +23,7 @@ The Helm Stable and Helm Incubators are populated with native Helm charts. Howev ### Rancher Charts -Rancher charts mirror native helm charts, although they add two files that enhance user experience: `app-readme.md` and `questions.yaml`. Read more about them in [Additional Files for Rancher Charts.](#additional-files-for-rancher-charts) +Rancher charts mirror native Helm charts, although they add two files that enhance user experience: `app-readme.md` and `questions.yaml`. Read more about them in [Additional Files for Rancher Charts](#additional-files-for-rancher-charts). Advantages of Rancher charts include: diff --git a/versioned_docs/version-2.0-2.4/pages-for-subheaders/helm2-helm-init.md b/versioned_docs/version-2.0-2.4/pages-for-subheaders/helm2-helm-init.md index 941afd957ff2..aec6c5df0196 100644 --- a/versioned_docs/version-2.0-2.4/pages-for-subheaders/helm2-helm-init.md +++ b/versioned_docs/version-2.0-2.4/pages-for-subheaders/helm2-helm-init.md @@ -38,7 +38,7 @@ helm init --service-account tiller \ --tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:<tiller-version> -> **Note:** This`tiller`install has full cluster access, which should be acceptable if the cluster is dedicated to Rancher server. Check out the [helm docs](https://docs.helm.sh/using_helm/#role-based-access-control) for restricting `tiller` access to suit your security requirements. +> **Note:** This `tiller` install has full cluster access, which should be acceptable if the cluster is dedicated to Rancher server. See the [Helm docs](https://docs.helm.sh/using_helm/#role-based-access-control) for instructions on restricting `tiller` access to suit your security requirements.
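For context, the service account referenced by `--service-account tiller` is typically created before running `helm init`. A minimal sketch of the permissive Helm 2 setup the note cautions about:

```bash
# Create the tiller service account and bind it to cluster-admin (the "full
# cluster access" the note refers to), then initialize Helm 2 with it.
kubectl --namespace kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
helm init --service-account tiller
```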
diff --git a/versioned_docs/version-2.0-2.4/pages-for-subheaders/helm2-rke-add-on-layer-7-lb.md b/versioned_docs/version-2.0-2.4/pages-for-subheaders/helm2-rke-add-on-layer-7-lb.md index e6eb7d99bb4a..b84dd2315b5c 100644 --- a/versioned_docs/version-2.0-2.4/pages-for-subheaders/helm2-rke-add-on-layer-7-lb.md +++ b/versioned_docs/version-2.0-2.4/pages-for-subheaders/helm2-rke-add-on-layer-7-lb.md @@ -7,7 +7,7 @@ import SslFaqHa from '@site/src/components/SslFaqHa' > >Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install ](../getting-started/installation-and-upgrade/resources/helm-version-requirements.md). > ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to move to using the helm chart. +>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to move to using the Helm chart. This procedure walks you through setting up a 3-node cluster using the Rancher Kubernetes Engine (RKE). The cluster's sole purpose is running pods for Rancher. The setup is based on: diff --git a/versioned_docs/version-2.0-2.4/pages-for-subheaders/helm2-rke-add-on-troubleshooting.md b/versioned_docs/version-2.0-2.4/pages-for-subheaders/helm2-rke-add-on-troubleshooting.md index 54b60b9aa165..3ed29127e532 100644 --- a/versioned_docs/version-2.0-2.4/pages-for-subheaders/helm2-rke-add-on-troubleshooting.md +++ b/versioned_docs/version-2.0-2.4/pages-for-subheaders/helm2-rke-add-on-troubleshooting.md @@ -6,7 +6,7 @@ title: Troubleshooting HA RKE Add-On Install > >Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install ](../getting-started/installation-and-upgrade/resources/helm-version-requirements.md). > ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to move to using the helm chart. +>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to start using the Helm chart. This section contains common errors seen when setting up a Kubernetes installation. diff --git a/versioned_docs/version-2.0-2.4/pages-for-subheaders/helm2-rke-add-on.md b/versioned_docs/version-2.0-2.4/pages-for-subheaders/helm2-rke-add-on.md index 06e1519c959f..71df60b09fd5 100644 --- a/versioned_docs/version-2.0-2.4/pages-for-subheaders/helm2-rke-add-on.md +++ b/versioned_docs/version-2.0-2.4/pages-for-subheaders/helm2-rke-add-on.md @@ -4,9 +4,9 @@ title: RKE Add-On Install > #### **Important: RKE add-on install is only supported up to Rancher v2.0.8** > ->Please use the Rancher helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install ](../getting-started/installation-and-upgrade/resources/helm-version-requirements.md). 
+>Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install ](../getting-started/installation-and-upgrade/resources/helm-version-requirements.md). > ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to move to using the helm chart. +>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to start using the Helm chart. * [Kubernetes installation with External Load Balancer (TCP/Layer 4)](../getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/rke-add-on/layer-4-lb.md) diff --git a/versioned_docs/version-2.0-2.4/pages-for-subheaders/helm2.md b/versioned_docs/version-2.0-2.4/pages-for-subheaders/helm2.md index 2335f5055f2f..1eb5569e9f63 100644 --- a/versioned_docs/version-2.0-2.4/pages-for-subheaders/helm2.md +++ b/versioned_docs/version-2.0-2.4/pages-for-subheaders/helm2.md @@ -53,6 +53,6 @@ The following CLI tools are required for this install. Please make sure these to > **Important: RKE add-on install is only supported up to Rancher v2.0.8** > -> Please use the Rancher helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install ](../getting-started/installation-and-upgrade/resources/helm-version-requirements.md). +> Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install ](../getting-started/installation-and-upgrade/resources/helm-version-requirements.md). > > If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to move to using the Helm chart. diff --git a/versioned_docs/version-2.0-2.4/pages-for-subheaders/upgrades.md b/versioned_docs/version-2.0-2.4/pages-for-subheaders/upgrades.md index 33ce24ec5fd8..dfd482f4a3eb 100644 --- a/versioned_docs/version-2.0-2.4/pages-for-subheaders/upgrades.md +++ b/versioned_docs/version-2.0-2.4/pages-for-subheaders/upgrades.md @@ -74,7 +74,7 @@ You'll use the backup as a restoration point if something goes wrong during upgr ## 2. Update the Helm chart repository -1. Update your local helm repo cache. +1. Update your local Helm repo cache: ``` helm repo update @@ -281,6 +281,6 @@ Upgrading from v2.0.7 or earlier | Rancher introduced the `system` project, whic **Important: RKE add-on install is only supported up to Rancher v2.0.8** -Please use the Rancher helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install](install-upgrade-on-a-kubernetes-cluster.md). +Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install](install-upgrade-on-a-kubernetes-cluster.md). -If you are currently using the RKE add-on install method, see [Migrating from a RKE add-on install](../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to move to using the helm chart. 
+If you are currently using the RKE add-on install method, see [Migrating from an RKE add-on install](../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades/migrating-from-rke-add-on.md) for details on how to start using the Helm chart. diff --git a/versioned_docs/version-2.0-2.4/reference-guides/installation-references/helm-chart-options.md b/versioned_docs/version-2.0-2.4/reference-guides/installation-references/helm-chart-options.md index 24568e4a170a..89c26641353b 100644 --- a/versioned_docs/version-2.0-2.4/reference-guides/installation-references/helm-chart-options.md +++ b/versioned_docs/version-2.0-2.4/reference-guides/installation-references/helm-chart-options.md @@ -126,7 +126,7 @@ Example on setting a static proxy header with `ingress.configurationSnippet`. Th ### HTTP Proxy -Rancher requires internet access for some functionality (helm charts). Use `proxy` to set your proxy server. +Rancher requires internet access for some functionality, such as reaching remote Helm charts. Use `proxy` to set your proxy server. Add your IP exceptions to the `noProxy` list. Make sure you add the Pod cluster IP range (default: `10.42.0.0/16`), Service cluster IP range (default: `10.43.0.0/16`), the internal cluster domains (default: `.svc,.cluster.local`) and any worker cluster `controlplane` nodes. Rancher supports CIDR notation ranges in this list. diff --git a/versioned_docs/version-2.5/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades.md b/versioned_docs/version-2.5/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades.md index 4cd822b37e77..26f8eed8670b 100644 --- a/versioned_docs/version-2.5/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades.md +++ b/versioned_docs/version-2.5/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades.md @@ -65,7 +65,7 @@ You'll use the backup as a restoration point if something goes wrong during upgr ## 2. Update the Helm chart repository -1. Update your local helm repo cache. +1. Update your local Helm repo cache: ``` helm repo update diff --git a/versioned_docs/version-2.5/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher.md b/versioned_docs/version-2.5/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher.md index 19563bf8b177..5994e56b00ac 100644 --- a/versioned_docs/version-2.5/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher.md +++ b/versioned_docs/version-2.5/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher.md @@ -12,7 +12,7 @@ Now that you have a running RKE cluster, you can install Rancher in it. For secu ### Install cert-manager -Add the cert-manager helm repository: +Add the cert-manager Helm repository: ``` helm repo add jetstack https://charts.jetstack.io @@ -49,7 +49,7 @@ kubectl rollout status deployment -n cert-manager cert-manager-webhook ### Install Rancher -Next you can install Rancher itself.
First, add the Helm repository: ``` helm repo add rancher-latest https://releases.rancher.com/server-charts/latest diff --git a/versioned_docs/version-2.5/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md b/versioned_docs/version-2.5/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md index 5439d1e1fcbe..141f0fe99c7d 100644 --- a/versioned_docs/version-2.5/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md +++ b/versioned_docs/version-2.5/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md @@ -33,7 +33,7 @@ helm install rancher-backup-crd rancher-charts/rancher-backup-crd -n cattle-reso helm install rancher-backup rancher-charts/rancher-backup -n cattle-resources-system --version $CHART_VERSION ```
-For an **air-gapped environment**, use the option below to pull the `backup-restore-operator` image from your private registry when installing the rancher-backup-crd helm chart.
+For an **air-gapped environment**, use the option below to pull the `backup-restore-operator` image from your private registry when installing the rancher-backup-crd Helm chart.
```
--set image.repository $REGISTRY/rancher/backup-restore-operator
```
diff --git a/versioned_docs/version-2.5/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md b/versioned_docs/version-2.5/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md
index 88c507d5bd1d..826e1335ddaa 100644
--- a/versioned_docs/version-2.5/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md
+++ b/versioned_docs/version-2.5/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md
@@ -56,7 +56,7 @@ For details on using Fleet behind a proxy, see [this page.](../../../explanation

In order for Helm charts with dependencies to deploy successfully, you must run a manual command (as listed below), as it is up to the user to fulfill the dependency list. If you do not do this and proceed to clone your repository and run `helm install`, your installation will fail because the dependencies will be missing.

-The Helm chart in the git repository must include its dependencies in the charts subdirectory. You must either manually run `helm dependencies update $chart` OR run `helm dependencies build $chart` locally, then commit the complete charts directory to your git repository. Note that you will update your commands with the applicable parameters.
+The Helm chart in the git repository must include its dependencies in the charts subdirectory. You must either manually run `helm dependencies update $chart` or run `helm dependencies build $chart` locally, then commit the complete charts directory to your git repository. Note that you will update your commands with the applicable parameters.

## Troubleshooting
---
diff --git a/versioned_docs/version-2.5/pages-for-subheaders/fleet-gitops-at-scale.md b/versioned_docs/version-2.5/pages-for-subheaders/fleet-gitops-at-scale.md
index 08a84b90ccd5..72795dff4292 100644
--- a/versioned_docs/version-2.5/pages-for-subheaders/fleet-gitops-at-scale.md
+++ b/versioned_docs/version-2.5/pages-for-subheaders/fleet-gitops-at-scale.md
@@ -61,7 +61,7 @@ For details on using Fleet behind a proxy, see [this page](../explanations/integ

In order for Helm charts with dependencies to deploy successfully, you must run a manual command (as listed below), as it is up to the user to fulfill the dependency list. If you do not do this and proceed to clone your repository and run `helm install`, your installation will fail because the dependencies will be missing.

-The Helm chart in the git repository must include its dependencies in the charts subdirectory. You must either manually run `helm dependencies update $chart` OR run `helm dependencies build $chart` locally, then commit the complete charts directory to your git repository. Note that you will update your commands with the applicable parameters
+The Helm chart in the git repository must include its dependencies in the charts subdirectory. You must either manually run `helm dependencies update $chart` or run `helm dependencies build $chart` locally, then commit the complete charts directory to your git repository. Note that you will update your commands with the applicable parameters.
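For instance, a minimal sketch of this vendoring step, assuming the chart lives at `./my-chart` in the Git repository (the path and commit message are illustrative, not from the docs):

```bash
# Download the dependencies declared in Chart.yaml into ./my-chart/charts/,
# then commit the vendored charts so Fleet can deploy without fetching them.
helm dependencies update ./my-chart
git add my-chart/charts my-chart/Chart.lock
git commit -m "Vendor Helm chart dependencies"
```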
## Troubleshooting

diff --git a/versioned_docs/version-2.5/pages-for-subheaders/helm-charts-in-rancher.md b/versioned_docs/version-2.5/pages-for-subheaders/helm-charts-in-rancher.md
index 7e71750965de..5f0ed127c9d9 100644
--- a/versioned_docs/version-2.5/pages-for-subheaders/helm-charts-in-rancher.md
+++ b/versioned_docs/version-2.5/pages-for-subheaders/helm-charts-in-rancher.md
@@ -14,7 +14,7 @@ In Rancher v2.5, the Apps and Marketplace feature replaced the catalog system.

In the cluster manager, Rancher uses a catalog system to import bundles of charts and then uses those charts to either deploy custom helm applications or Rancher's tools such as Monitoring or Istio. The catalog system is still available in the cluster manager in Rancher v2.5, but it is deprecated.

-Now in the Cluster Explorer, Rancher uses a similar but simplified version of the same system. Repositories can be added in the same way that catalogs were, but are specific to the current cluster. Rancher tools come as pre-loaded repositories which deploy as standalone helm charts.
+Now in the Cluster Explorer, Rancher uses a similar but simplified version of the same system. Repositories can be added in the same way that catalogs were, but are specific to the current cluster. Rancher tools come as pre-loaded repositories which deploy as standalone Helm charts.

### Charts

@@ -34,7 +34,7 @@ All three types are deployed and managed in the same way.

From the left sidebar select _"Repositories"_.

-These items represent helm repositories, and can be either traditional helm endpoints which have an index.yaml, or git repositories which will be cloned and can point to a specific branch. In order to use custom charts, simply add your repository here and they will become available in the Charts tab under the name of the repository.
+These items represent Helm repositories, and can be either traditional Helm endpoints which have an index.yaml, or git repositories which will be cloned and can point to a specific branch. In order to use custom charts, simply add your repository here and they will become available in the Charts tab under the name of the repository.

To add a private CA for Helm Chart repositories:

diff --git a/versioned_docs/version-2.5/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md b/versioned_docs/version-2.5/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md
index 3dbe0bd9c203..38750c4cb3ff 100644
--- a/versioned_docs/version-2.5/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md
+++ b/versioned_docs/version-2.5/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md
@@ -67,7 +67,7 @@ In general, you want to scrape data from all the workloads running in your clust

### About Prometheus Exporters

-A lot of 3rd party workloads like databases, queues or web-servers either already support exposing metrics in a Prometheus format, or there are so called exporters available that translate between the tool's metrics and the format that Prometheus understands. Usually you can add these exporters as additional sidecar containers to the workload's Pods. A lot of helm charts already include options to deploy the correct exporter. Additionally you can find a curated list of exports by SysDig on [promcat.io](https://promcat.io/) and on [ExporterHub](https://exporterhub.io/).
+Many 3rd party workloads, such as databases, queues, and web-servers, already support exposing metrics in a Prometheus format, or offer exporters that translate between the tool's metrics and a format that Prometheus understands. You can usually add these exporters as additional sidecar containers to the workload's Pods. Many Helm charts already include options to deploy the correct exporter. You can find a curated list of exporters by SysDig on [promcat.io](https://promcat.io/) and on [ExporterHub](https://exporterhub.io/).

### Prometheus support in Programming Languages and Frameworks

@@ -75,7 +75,7 @@ To get your own custom application metrics into Prometheus, you have to collect

### ServiceMonitors and PodMonitors

-Once all your workloads expose metrics in a Prometheus format, you have to configure Prometheus to scrape it. Under the hood Rancher is using the [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator). This makes it easy to add additional scraping targets with ServiceMonitors and PodMonitors. A lot of helm charts already include an option to create these monitors directly. You can also find more information in the Rancher documentation.
+Once all your workloads expose metrics in a Prometheus format, you have to configure Prometheus to scrape it. Under the hood Rancher is using the [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator). This makes it easy to add additional scraping targets with ServiceMonitors and PodMonitors. Many Helm charts let you create these monitors directly. You can also find more information in the Rancher documentation.

### Prometheus Push Gateway

diff --git a/versioned_docs/version-2.5/reference-guides/installation-references/helm-chart-options.md b/versioned_docs/version-2.5/reference-guides/installation-references/helm-chart-options.md
index a9a03da1c429..9083e22b93a5 100644
--- a/versioned_docs/version-2.5/reference-guides/installation-references/helm-chart-options.md
+++ b/versioned_docs/version-2.5/reference-guides/installation-references/helm-chart-options.md
@@ -123,7 +123,7 @@ Example on setting a static proxy header with `ingress.configurationSnippet`. Th

### HTTP Proxy

-Rancher requires internet access for some functionality (helm charts). Use `proxy` to set your proxy server.
+Rancher requires internet access for some functionality, such as reaching remote Helm charts. Use `proxy` to set your proxy server.

Add your IP exceptions to the `noProxy` list. Make sure you add the Pod cluster IP range (default: `10.42.0.0/16`), Service cluster IP range (default: `10.43.0.0/16`), the internal cluster domains (default: `.svc,.cluster.local`) and any worker cluster `controlplane` nodes. Rancher supports CIDR notation ranges in this list.
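A hedged sketch of how `proxy` and `noProxy` fit together when installing Rancher with Helm (the proxy address is a placeholder, and commas inside a `--set` value must be escaped with backslashes):

```bash
# Illustrative only: point Rancher at a proxy and exclude in-cluster traffic.
helm upgrade --install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set proxy="http://proxy.example.com:8888" \
  --set noProxy="127.0.0.0/8\,10.42.0.0/16\,10.43.0.0/16\,.svc\,.cluster.local"
```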
diff --git a/versioned_docs/version-2.5/reference-guides/monitoring-v2-configuration/helm-chart-options.md b/versioned_docs/version-2.5/reference-guides/monitoring-v2-configuration/helm-chart-options.md
index fb2f68e6460e..8f2d747654bf 100644
--- a/versioned_docs/version-2.5/reference-guides/monitoring-v2-configuration/helm-chart-options.md
+++ b/versioned_docs/version-2.5/reference-guides/monitoring-v2-configuration/helm-chart-options.md
@@ -49,13 +49,11 @@ An example of where this might be used is with Istio. For more information, see

## Configuring Applications Packaged within Monitoring v2

-We deploy kube-state-metrics and node-exporter with monitoring v2. Node exporter are deployed as DaemonSets. In the monitoring v2 helm chart, in the values.yaml, each of the things are deployed as sub charts.
+We deploy kube-state-metrics and node-exporter with monitoring v2. The node exporters are deployed as DaemonSets. Each of these is deployed as a sub-chart through the monitoring v2 Helm chart's values.yaml.

-We also deploy grafana which is not managed by prometheus.
+We also deploy Grafana, which is not managed by Prometheus.

-If you look at what the helm chart is doing like in kube-state-metrics, there are plenty more values that you can set that aren’t exposed in the top level chart.
-
-But in the top level chart you can add values that override values that exist in the sub chart.
+Many values aren’t exposed in the top level chart. However, you can add values to the top level chart to override values that exist in the sub-charts.

### Increase the Replicas of Alertmanager

diff --git a/versioned_docs/version-2.6/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades.md b/versioned_docs/version-2.6/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades.md
index 218a07649c04..64e314a5a9bc 100644
--- a/versioned_docs/version-2.6/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades.md
+++ b/versioned_docs/version-2.6/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades.md
@@ -55,7 +55,7 @@ You'll use the backup as a restore point if something goes wrong during upgrade.

### 2. Update the Helm chart repository

-1. Update your local helm repo cache.
+1. Update your local Helm repo cache:

```
helm repo update

diff --git a/versioned_docs/version-2.6/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher.md b/versioned_docs/version-2.6/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher.md
index 5d7add65575a..4b61202e979c 100644
--- a/versioned_docs/version-2.6/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher.md
+++ b/versioned_docs/version-2.6/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher.md
@@ -16,7 +16,7 @@ These installation instructions assume you are using Helm 3.

### Install cert-manager

-Add the cert-manager helm repository:
+Add the cert-manager Helm repository:

```
helm repo add jetstack https://charts.jetstack.io

@@ -65,7 +65,7 @@ kubectl rollout status deployment -n cert-manager cert-manager-webhook

### Install Rancher

-Next you can install Rancher itself. First add the helm repository:
+Next you can install Rancher itself.
First, add the Helm repository: ``` helm repo add rancher-latest https://releases.rancher.com/server-charts/latest diff --git a/versioned_docs/version-2.6/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md b/versioned_docs/version-2.6/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md index 72b345cec03c..aef2bca7ef57 100644 --- a/versioned_docs/version-2.6/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md +++ b/versioned_docs/version-2.6/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md @@ -30,7 +30,7 @@ Since Rancher can be installed on any Kubernetes cluster, you can use this backu ### 1. Install the rancher-backup Helm chart Install the [rancher-backup chart](https://github.com/rancher/backup-restore-operator/tags), using a version in the 2.x.x major version range: - 1. Add the helm repository: + 1. Add the Helm repository: ```bash helm repo add rancher-charts https://charts.rancher.io diff --git a/versioned_docs/version-2.6/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md b/versioned_docs/version-2.6/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md index 975231b10900..cf802a75378d 100644 --- a/versioned_docs/version-2.6/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md +++ b/versioned_docs/version-2.6/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md @@ -54,7 +54,7 @@ For details on using Fleet behind a proxy, see [this page.](../../../integration In order for Helm charts with dependencies to deploy successfully, you must run a manual command (as listed below), as it is up to the user to fulfill the dependency list. If you do not do this and proceed to clone your repository and run `helm install`, your installation will fail because the dependencies will be missing. -The Helm chart in the git repository must include its dependencies in the charts subdirectory. You must either manually run `helm dependencies update $chart` OR run `helm dependencies build $chart` locally, then commit the complete charts directory to your git repository. Note that you will update your commands with the applicable parameters. +The Helm chart in the git repository must include its dependencies in the charts subdirectory. You must either manually run `helm dependencies update $chart` or run `helm dependencies build $chart` locally, then commit the complete charts directory to your git repository. Note that you will update your commands with the applicable parameters. ## Troubleshooting diff --git a/versioned_docs/version-2.6/how-to-guides/new-user-guides/helm-charts-in-rancher/create-apps.md b/versioned_docs/version-2.6/how-to-guides/new-user-guides/helm-charts-in-rancher/create-apps.md index a4e4f9e0771d..92668f3411cb 100644 --- a/versioned_docs/version-2.6/how-to-guides/new-user-guides/helm-charts-in-rancher/create-apps.md +++ b/versioned_docs/version-2.6/how-to-guides/new-user-guides/helm-charts-in-rancher/create-apps.md @@ -26,7 +26,7 @@ Native Helm charts include an application along with other software required to ### Rancher Charts -Rancher charts are native helm charts with two files that enhance user experience: `app-readme.md` and `questions.yaml`. 
Read more about them in [Additional Files for Rancher Charts.](#additional-files-for-rancher-charts)
+Rancher charts are native Helm charts with two files that enhance user experience: `app-readme.md` and `questions.yaml`. Read more about them in [Additional Files for Rancher Charts.](#additional-files-for-rancher-charts)

Rancher charts add simplified chart descriptions and configuration forms to make the application deployment easy. Rancher users do not need to read through the entire list of Helm variables to understand how to launch an application.

diff --git a/versioned_docs/version-2.6/pages-for-subheaders/fleet-gitops-at-scale.md b/versioned_docs/version-2.6/pages-for-subheaders/fleet-gitops-at-scale.md
index 246348b89c85..a898f4130571 100644
--- a/versioned_docs/version-2.6/pages-for-subheaders/fleet-gitops-at-scale.md
+++ b/versioned_docs/version-2.6/pages-for-subheaders/fleet-gitops-at-scale.md
@@ -55,7 +55,7 @@ For details on using Fleet behind a proxy, see [this page](../integrations-in-ra

In order for Helm charts with dependencies to deploy successfully, you must run a manual command (as listed below), as it is up to the user to fulfill the dependency list. If you do not do this and proceed to clone your repository and run `helm install`, your installation will fail because the dependencies will be missing.

-The Helm chart in the git repository must include its dependencies in the charts subdirectory. You must either manually run `helm dependencies update $chart` OR run `helm dependencies build $chart` locally, then commit the complete charts directory to your git repository. Note that you will update your commands with the applicable parameters
+The Helm chart in the git repository must include its dependencies in the charts subdirectory. You must either manually run `helm dependencies update $chart` or run `helm dependencies build $chart` locally, then commit the complete charts directory to your git repository. Note that you will update your commands with the applicable parameters.

## Troubleshooting

diff --git a/versioned_docs/version-2.6/pages-for-subheaders/helm-charts-in-rancher.md b/versioned_docs/version-2.6/pages-for-subheaders/helm-charts-in-rancher.md
index fc0d7e41f5dd..b53af8d8f6dc 100644
--- a/versioned_docs/version-2.6/pages-for-subheaders/helm-charts-in-rancher.md
+++ b/versioned_docs/version-2.6/pages-for-subheaders/helm-charts-in-rancher.md
@@ -82,7 +82,7 @@ Apps managed by the Cluster Manager (the global view in the legacy Rancher UI) s

From the left sidebar select _"Repositories"_.

-These items represent helm repositories, and can be either traditional helm endpoints which have an index.yaml, or git repositories which will be cloned and can point to a specific branch. In order to use custom charts, simply add your repository here and they will become available in the Charts tab under the name of the repository.
+These items represent Helm repositories, and can be either traditional Helm endpoints which have an index.yaml, or git repositories which will be cloned and can point to a specific branch. In order to use custom charts, simply add your repository here and they will become available in the Charts tab under the name of the repository.
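Under the hood, each entry corresponds to a repository resource in the cluster. A hedged sketch of registering one from the CLI, assuming the `ClusterRepo` kind from the `catalog.cattle.io/v1` API shipped with these Rancher versions (the name and URL are placeholders):

```bash
# Illustrative only: register an HTTP(S) Helm repository as a cluster-scoped repo.
kubectl apply -f - <<'EOF'
apiVersion: catalog.cattle.io/v1
kind: ClusterRepo
metadata:
  name: my-charts
spec:
  url: https://charts.example.com
EOF
```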
To add a private CA for Helm Chart repositories:

diff --git a/versioned_docs/version-2.6/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md b/versioned_docs/version-2.6/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md
index 35f8af067e46..53cc4bbaed40 100644
--- a/versioned_docs/version-2.6/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md
+++ b/versioned_docs/version-2.6/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md
@@ -66,7 +66,7 @@ In general, you want to scrape data from all the workloads running in your clust

### About Prometheus Exporters

-A lot of 3rd party workloads like databases, queues or web-servers either already support exposing metrics in a Prometheus format, or there are so called exporters available that translate between the tool's metrics and the format that Prometheus understands. Usually you can add these exporters as additional sidecar containers to the workload's Pods. A lot of helm charts already include options to deploy the correct exporter. Additionally you can find a curated list of exports by SysDig on [promcat.io](https://promcat.io/) and on [ExporterHub](https://exporterhub.io/).
+Many 3rd party workloads, such as databases, queues, and web-servers, already support exposing metrics in a Prometheus format, or offer exporters that translate between the tool's metrics and a format that Prometheus understands. You can usually add these exporters as additional sidecar containers to the workload's Pods. Many Helm charts already include options to deploy the correct exporter. You can find a curated list of exporters by SysDig on [promcat.io](https://promcat.io/) and on [ExporterHub](https://exporterhub.io/).

### Prometheus support in Programming Languages and Frameworks

@@ -74,7 +74,7 @@ To get your own custom application metrics into Prometheus, you have to collect

### ServiceMonitors and PodMonitors

-Once all your workloads expose metrics in a Prometheus format, you have to configure Prometheus to scrape it. Under the hood Rancher is using the [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator). This makes it easy to add additional scraping targets with ServiceMonitors and PodMonitors. A lot of helm charts already include an option to create these monitors directly. You can also find more information in the Rancher documentation.
+Once all your workloads expose metrics in a Prometheus format, you have to configure Prometheus to scrape it. Under the hood Rancher is using the [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator). This makes it easy to add additional scraping targets with ServiceMonitors and PodMonitors. Many Helm charts let you create these monitors directly. You can also find more information in the Rancher documentation.
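A hedged sketch of such a scrape target (every name, label, and port here is illustrative, not taken from the docs):

```bash
# Illustrative only: ask the Prometheus operator to scrape a service's
# "metrics" port every 30 seconds.
kubectl apply -f - <<'EOF'
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: default
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
    - port: metrics
      interval: 30s
EOF
```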
### Prometheus Push Gateway

diff --git a/versioned_docs/version-2.6/reference-guides/monitoring-v2-configuration/helm-chart-options.md b/versioned_docs/version-2.6/reference-guides/monitoring-v2-configuration/helm-chart-options.md
index 112c41c9ea08..42345561b204 100644
--- a/versioned_docs/version-2.6/reference-guides/monitoring-v2-configuration/helm-chart-options.md
+++ b/versioned_docs/version-2.6/reference-guides/monitoring-v2-configuration/helm-chart-options.md
@@ -49,13 +49,11 @@ An example of where this might be used is with Istio. For more information, see

## Configuring Applications Packaged within Monitoring v2

-We deploy kube-state-metrics and node-exporter with monitoring v2. Node exporter are deployed as DaemonSets. In the monitoring v2 helm chart, in the values.yaml, each of the things are deployed as sub charts.
+We deploy kube-state-metrics and node-exporter with monitoring v2. The node exporters are deployed as DaemonSets. Each of these is deployed as a sub-chart through the monitoring v2 Helm chart's values.yaml.

-We also deploy grafana which is not managed by prometheus.
+We also deploy Grafana, which is not managed by Prometheus.

-If you look at what the helm chart is doing like in kube-state-metrics, there are plenty more values that you can set that aren’t exposed in the top level chart.
-
-But in the top level chart you can add values that override values that exist in the sub chart.
+Many values aren’t exposed in the top level chart. However, you can add values to the top level chart to override values that exist in the sub-charts.

### Increase the Replicas of Alertmanager

diff --git a/versioned_docs/version-2.7/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades.md b/versioned_docs/version-2.7/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades.md
index fb4f1cf42bba..e2ac41c4a348 100644
--- a/versioned_docs/version-2.7/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades.md
+++ b/versioned_docs/version-2.7/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades.md
@@ -55,7 +55,7 @@ You'll use the backup as a restore point if something goes wrong during upgrade.

### 2. Update the Helm chart repository

-1. Update your local helm repo cache.
+1. Update your local Helm repo cache:

```
helm repo update

diff --git a/versioned_docs/version-2.7/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher.md b/versioned_docs/version-2.7/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher.md
index 122b0686a870..9d4a4c8393e5 100644
--- a/versioned_docs/version-2.7/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher.md
+++ b/versioned_docs/version-2.7/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher.md
@@ -20,7 +20,7 @@ sudo ./get_helm.sh

### Install cert-manager

-Add the cert-manager helm repository:
+Add the cert-manager Helm repository:

```
helm repo add jetstack https://charts.jetstack.io

@@ -63,7 +63,7 @@ kubectl rollout status deployment -n cert-manager cert-manager-webhook

### Install Rancher

-Next you can install Rancher itself. First add the helm repository:
+Next you can install Rancher itself.
First, add the Helm repository: ``` helm repo add rancher-latest https://releases.rancher.com/server-charts/latest diff --git a/versioned_docs/version-2.7/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md b/versioned_docs/version-2.7/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md index 978d15a6ea4e..4b58db9ead31 100644 --- a/versioned_docs/version-2.7/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md +++ b/versioned_docs/version-2.7/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md @@ -32,7 +32,7 @@ Since Rancher can be installed on any Kubernetes cluster, you can use this backu ### 1. Install the rancher-backup Helm chart Install the [rancher-backup chart](https://github.com/rancher/backup-restore-operator/tags), using a version in the 2.x.x major version range: - 1. Add the helm repository: + 1. Add the Helm repository: ```bash helm repo add rancher-charts https://charts.rancher.io diff --git a/versioned_docs/version-2.7/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md b/versioned_docs/version-2.7/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md index 8ff8e4cd5bef..e9021cb37eaa 100644 --- a/versioned_docs/version-2.7/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md +++ b/versioned_docs/version-2.7/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md @@ -54,7 +54,7 @@ For details on using Fleet behind a proxy, see [this page.](../../../integration In order for Helm charts with dependencies to deploy successfully, you must run a manual command (as listed below), as it is up to the user to fulfill the dependency list. If you do not do this and proceed to clone your repository and run `helm install`, your installation will fail because the dependencies will be missing. -The Helm chart in the git repository must include its dependencies in the charts subdirectory. You must either manually run `helm dependencies update $chart` OR run `helm dependencies build $chart` locally, then commit the complete charts directory to your git repository. Note that you will update your commands with the applicable parameters. +The Helm chart in the git repository must include its dependencies in the charts subdirectory. You must either manually run `helm dependencies update $chart` or run `helm dependencies build $chart` locally, then commit the complete charts directory to your git repository. Note that you will update your commands with the applicable parameters. ## Troubleshooting diff --git a/versioned_docs/version-2.7/how-to-guides/new-user-guides/helm-charts-in-rancher/create-apps.md b/versioned_docs/version-2.7/how-to-guides/new-user-guides/helm-charts-in-rancher/create-apps.md index 300e152bb22e..43dfff354280 100644 --- a/versioned_docs/version-2.7/how-to-guides/new-user-guides/helm-charts-in-rancher/create-apps.md +++ b/versioned_docs/version-2.7/how-to-guides/new-user-guides/helm-charts-in-rancher/create-apps.md @@ -22,7 +22,7 @@ Native Helm charts include an application along with other software required to ### Rancher Charts -Rancher charts are native helm charts with two files that enhance user experience: `app-readme.md` and `questions.yaml`. 
Read more about them in [Additional Files for Rancher Charts.](#additional-files-for-rancher-charts)
+Rancher charts are native Helm charts with two files that enhance user experience: `app-readme.md` and `questions.yaml`. Read more about them in [Additional Files for Rancher Charts.](#additional-files-for-rancher-charts)

Rancher charts add simplified chart descriptions and configuration forms to make the application deployment easy. Rancher users do not need to read through the entire list of Helm variables to understand how to launch an application.

diff --git a/versioned_docs/version-2.7/pages-for-subheaders/fleet-gitops-at-scale.md b/versioned_docs/version-2.7/pages-for-subheaders/fleet-gitops-at-scale.md
index 54e38cc48eb4..67d7aac57857 100644
--- a/versioned_docs/version-2.7/pages-for-subheaders/fleet-gitops-at-scale.md
+++ b/versioned_docs/version-2.7/pages-for-subheaders/fleet-gitops-at-scale.md
@@ -55,7 +55,7 @@ For details on using Fleet behind a proxy, see [this page](../integrations-in-ra

In order for Helm charts with dependencies to deploy successfully, you must run a manual command (as listed below), as it is up to the user to fulfill the dependency list. If you do not do this and proceed to clone your repository and run `helm install`, your installation will fail because the dependencies will be missing.

-The Helm chart in the git repository must include its dependencies in the charts subdirectory. You must either manually run `helm dependencies update $chart` OR run `helm dependencies build $chart` locally, then commit the complete charts directory to your git repository. Note that you will update your commands with the applicable parameters
+The Helm chart in the git repository must include its dependencies in the charts subdirectory. You must either manually run `helm dependencies update $chart` or run `helm dependencies build $chart` locally, then commit the complete charts directory to your git repository. Note that you will update your commands with the applicable parameters.

## Troubleshooting

diff --git a/versioned_docs/version-2.7/pages-for-subheaders/helm-charts-in-rancher.md b/versioned_docs/version-2.7/pages-for-subheaders/helm-charts-in-rancher.md
index f0d7ba63fcae..509dc5ea2b97 100644
--- a/versioned_docs/version-2.7/pages-for-subheaders/helm-charts-in-rancher.md
+++ b/versioned_docs/version-2.7/pages-for-subheaders/helm-charts-in-rancher.md
@@ -74,7 +74,7 @@ Apps managed by the Cluster Manager (the global view in the legacy Rancher UI) s

From the left sidebar select _"Repositories"_.

-These items represent helm repositories, and can be either traditional helm endpoints which have an index.yaml, or git repositories which will be cloned and can point to a specific branch. In order to use custom charts, simply add your repository here and they will become available in the Charts tab under the name of the repository.
+These items represent Helm repositories, and can be either traditional Helm endpoints which have an index.yaml, or git repositories which will be cloned and can point to a specific branch. In order to use custom charts, simply add your repository here and they will become available in the Charts tab under the name of the repository.
To add a private CA for Helm Chart repositories:

diff --git a/versioned_docs/version-2.7/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md b/versioned_docs/version-2.7/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md
index a6e34b03a505..00f421f760ed 100644
--- a/versioned_docs/version-2.7/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md
+++ b/versioned_docs/version-2.7/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md
@@ -66,7 +66,7 @@ In general, you want to scrape data from all the workloads running in your clust

### About Prometheus Exporters

-A lot of 3rd party workloads like databases, queues or web-servers either already support exposing metrics in a Prometheus format, or there are so called exporters available that translate between the tool's metrics and the format that Prometheus understands. Usually you can add these exporters as additional sidecar containers to the workload's Pods. A lot of helm charts already include options to deploy the correct exporter. Additionally you can find a curated list of exports by SysDig on [promcat.io](https://promcat.io/) and on [ExporterHub](https://exporterhub.io/).
+Many 3rd party workloads, such as databases, queues, and web-servers, already support exposing metrics in a Prometheus format, or offer exporters that translate between the tool's metrics and a format that Prometheus understands. You can usually add these exporters as additional sidecar containers to the workload's Pods. Many Helm charts already include options to deploy the correct exporter. You can find a curated list of exporters by SysDig on [promcat.io](https://promcat.io/) and on [ExporterHub](https://exporterhub.io/).

### Prometheus support in Programming Languages and Frameworks

@@ -74,7 +74,7 @@ To get your own custom application metrics into Prometheus, you have to collect

### ServiceMonitors and PodMonitors

-Once all your workloads expose metrics in a Prometheus format, you have to configure Prometheus to scrape it. Under the hood Rancher is using the [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator). This makes it easy to add additional scraping targets with ServiceMonitors and PodMonitors. A lot of helm charts already include an option to create these monitors directly. You can also find more information in the Rancher documentation.
+Once all your workloads expose metrics in a Prometheus format, you have to configure Prometheus to scrape it. Under the hood Rancher is using the [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator). This makes it easy to add additional scraping targets with ServiceMonitors and PodMonitors. Many Helm charts let you create these monitors directly. You can also find more information in the Rancher documentation.

### Prometheus Push Gateway

diff --git a/versioned_docs/version-2.7/reference-guides/monitoring-v2-configuration/helm-chart-options.md b/versioned_docs/version-2.7/reference-guides/monitoring-v2-configuration/helm-chart-options.md
index 112c41c9ea08..42345561b204 100644
--- a/versioned_docs/version-2.7/reference-guides/monitoring-v2-configuration/helm-chart-options.md
+++ b/versioned_docs/version-2.7/reference-guides/monitoring-v2-configuration/helm-chart-options.md
@@ -49,13 +49,11 @@ An example of where this might be used is with Istio.
For more information, see

## Configuring Applications Packaged within Monitoring v2

-We deploy kube-state-metrics and node-exporter with monitoring v2. Node exporter are deployed as DaemonSets. In the monitoring v2 helm chart, in the values.yaml, each of the things are deployed as sub charts.
+We deploy kube-state-metrics and node-exporter with monitoring v2. The node exporters are deployed as DaemonSets. Each of these is deployed as a sub-chart through the monitoring v2 Helm chart's values.yaml.

-We also deploy grafana which is not managed by prometheus.
+We also deploy Grafana, which is not managed by Prometheus.

-If you look at what the helm chart is doing like in kube-state-metrics, there are plenty more values that you can set that aren’t exposed in the top level chart.
-
-But in the top level chart you can add values that override values that exist in the sub chart.
+Many values aren’t exposed in the top level chart. However, you can add values to the top level chart to override values that exist in the sub-charts.

### Increase the Replicas of Alertmanager

diff --git a/versioned_docs/version-2.7/reference-guides/rancher-webhook.md b/versioned_docs/version-2.7/reference-guides/rancher-webhook.md
index af8f7f95892a..ca6b3ae3d587 100644
--- a/versioned_docs/version-2.7/reference-guides/rancher-webhook.md
+++ b/versioned_docs/version-2.7/reference-guides/rancher-webhook.md
@@ -69,7 +69,7 @@ helm upgrade --reuse-values rancher-webhook rancher-charts/rancher-webhook -n c
```
**Note:** This temporary workaround may violate an environment's security policy. This workaround also requires that port 9443 is unused on the host network.

-**Note:** Helm uses secrets by default. This is a datatype that some webhook versions validate to store information. In these cases, directly update the deployment with the hostNetwork=true value using kubectl, then run the helm commands listed above to avoid drift between the helm configuration and the actual state in the cluster.
+**Note:** Helm uses secrets by default to store information, and some webhook versions validate this datatype. In these cases, directly update the deployment with the hostNetwork=true value using kubectl, then run the Helm commands listed above to prevent drift between the Helm configuration and the actual state of the cluster.

### Private GKE Cluster

diff --git a/versioned_docs/version-2.8/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades.md b/versioned_docs/version-2.8/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades.md
index 7d5f62e329e3..2685ba56f18a 100644
--- a/versioned_docs/version-2.8/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades.md
+++ b/versioned_docs/version-2.8/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades.md
@@ -55,7 +55,7 @@ You'll use the backup as a restore point if something goes wrong during upgrade.

### 2. Update the Helm chart repository

-1. Update your local helm repo cache.
+1.
Update your local Helm repo cache: ``` helm repo update diff --git a/versioned_docs/version-2.8/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher.md b/versioned_docs/version-2.8/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher.md index 122b0686a870..9d4a4c8393e5 100644 --- a/versioned_docs/version-2.8/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher.md +++ b/versioned_docs/version-2.8/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher.md @@ -20,7 +20,7 @@ sudo ./get_helm.sh ### Install cert-manager -Add the cert-manager helm repository: +Add the cert-manager Helm repository: ``` helm repo add jetstack https://charts.jetstack.io @@ -63,7 +63,7 @@ kubectl rollout status deployment -n cert-manager cert-manager-webhook ### Install Rancher -Next you can install Rancher itself. First add the helm repository: +Next you can install Rancher itself. First, add the Helm repository: ``` helm repo add rancher-latest https://releases.rancher.com/server-charts/latest diff --git a/versioned_docs/version-2.8/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md b/versioned_docs/version-2.8/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md index 978d15a6ea4e..4b58db9ead31 100644 --- a/versioned_docs/version-2.8/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md +++ b/versioned_docs/version-2.8/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md @@ -32,7 +32,7 @@ Since Rancher can be installed on any Kubernetes cluster, you can use this backu ### 1. Install the rancher-backup Helm chart Install the [rancher-backup chart](https://github.com/rancher/backup-restore-operator/tags), using a version in the 2.x.x major version range: - 1. Add the helm repository: + 1. Add the Helm repository: ```bash helm repo add rancher-charts https://charts.rancher.io diff --git a/versioned_docs/version-2.8/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md b/versioned_docs/version-2.8/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md index e9d4dec7faa2..2a47a9cdcfb7 100644 --- a/versioned_docs/version-2.8/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md +++ b/versioned_docs/version-2.8/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md @@ -54,7 +54,7 @@ For details on using Fleet behind a proxy, see [this page.](../../../integration In order for Helm charts with dependencies to deploy successfully, you must run a manual command (as listed below), as it is up to the user to fulfill the dependency list. If you do not do this and proceed to clone your repository and run `helm install`, your installation will fail because the dependencies will be missing. -The Helm chart in the git repository must include its dependencies in the charts subdirectory. You must either manually run `helm dependencies update $chart` OR run `helm dependencies build $chart` locally, then commit the complete charts directory to your git repository. Note that you will update your commands with the applicable parameters. +The Helm chart in the git repository must include its dependencies in the charts subdirectory. 
You must either manually run `helm dependencies update $chart` or run `helm dependencies build $chart` locally, then commit the complete charts directory to your git repository. Note that you will update your commands with the applicable parameters.

## Troubleshooting

diff --git a/versioned_docs/version-2.8/how-to-guides/new-user-guides/helm-charts-in-rancher/create-apps.md b/versioned_docs/version-2.8/how-to-guides/new-user-guides/helm-charts-in-rancher/create-apps.md
index 300e152bb22e..43dfff354280 100644
--- a/versioned_docs/version-2.8/how-to-guides/new-user-guides/helm-charts-in-rancher/create-apps.md
+++ b/versioned_docs/version-2.8/how-to-guides/new-user-guides/helm-charts-in-rancher/create-apps.md
@@ -22,7 +22,7 @@ Native Helm charts include an application along with other software required to

### Rancher Charts

-Rancher charts are native helm charts with two files that enhance user experience: `app-readme.md` and `questions.yaml`. Read more about them in [Additional Files for Rancher Charts.](#additional-files-for-rancher-charts)
+Rancher charts are native Helm charts with two files that enhance user experience: `app-readme.md` and `questions.yaml`. Read more about them in [Additional Files for Rancher Charts.](#additional-files-for-rancher-charts)

Rancher charts add simplified chart descriptions and configuration forms to make the application deployment easy. Rancher users do not need to read through the entire list of Helm variables to understand how to launch an application.

diff --git a/versioned_docs/version-2.8/integrations-in-rancher/fleet/overview.md b/versioned_docs/version-2.8/integrations-in-rancher/fleet/overview.md
index b7e1806fb586..2c21eaa0ae48 100644
--- a/versioned_docs/version-2.8/integrations-in-rancher/fleet/overview.md
+++ b/versioned_docs/version-2.8/integrations-in-rancher/fleet/overview.md
@@ -51,7 +51,7 @@ For details on using Fleet behind a proxy, see the [Using Fleet Behind a Proxy](

In order for Helm charts with dependencies to deploy successfully, you must run a manual command (as listed below), as it is up to the user to fulfill the dependency list. If you do not do this and proceed to clone your repository and run `helm install`, your installation will fail because the dependencies will be missing.

-The Helm chart in the git repository must include its dependencies in the charts subdirectory. You must either manually run `helm dependencies update $chart` OR run `helm dependencies build $chart` locally, then commit the complete charts directory to your git repository. Note that you will update your commands with the applicable parameters
+The Helm chart in the git repository must include its dependencies in the charts subdirectory. You must either manually run `helm dependencies update $chart` or run `helm dependencies build $chart` locally, then commit the complete charts directory to your git repository. Note that you will update your commands with the applicable parameters.

## Troubleshooting

diff --git a/versioned_docs/version-2.8/pages-for-subheaders/helm-charts-in-rancher.md b/versioned_docs/version-2.8/pages-for-subheaders/helm-charts-in-rancher.md
index f0d7ba63fcae..509dc5ea2b97 100644
--- a/versioned_docs/version-2.8/pages-for-subheaders/helm-charts-in-rancher.md
+++ b/versioned_docs/version-2.8/pages-for-subheaders/helm-charts-in-rancher.md
@@ -74,7 +74,7 @@ Apps managed by the Cluster Manager (the global view in the legacy Rancher UI) s

From the left sidebar select _"Repositories"_.
-These items represent helm repositories, and can be either traditional helm endpoints which have an index.yaml, or git repositories which will be cloned and can point to a specific branch. In order to use custom charts, simply add your repository here and they will become available in the Charts tab under the name of the repository.
+These items represent Helm repositories, and can be either traditional Helm endpoints which have an index.yaml, or git repositories which will be cloned and can point to a specific branch. In order to use custom charts, simply add your repository here and they will become available in the Charts tab under the name of the repository.

To add a private CA for Helm Chart repositories:

diff --git a/versioned_docs/version-2.8/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md b/versioned_docs/version-2.8/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md
index a6e34b03a505..00f421f760ed 100644
--- a/versioned_docs/version-2.8/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md
+++ b/versioned_docs/version-2.8/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md
@@ -66,7 +66,7 @@ In general, you want to scrape data from all the workloads running in your clust

### About Prometheus Exporters

-A lot of 3rd party workloads like databases, queues or web-servers either already support exposing metrics in a Prometheus format, or there are so called exporters available that translate between the tool's metrics and the format that Prometheus understands. Usually you can add these exporters as additional sidecar containers to the workload's Pods. A lot of helm charts already include options to deploy the correct exporter. Additionally you can find a curated list of exports by SysDig on [promcat.io](https://promcat.io/) and on [ExporterHub](https://exporterhub.io/).
+Many 3rd party workloads, such as databases, queues, and web-servers, already support exposing metrics in a Prometheus format, or offer exporters that translate between the tool's metrics and a format that Prometheus understands. You can usually add these exporters as additional sidecar containers to the workload's Pods. Many Helm charts already include options to deploy the correct exporter. You can find a curated list of exporters by SysDig on [promcat.io](https://promcat.io/) and on [ExporterHub](https://exporterhub.io/).

### Prometheus support in Programming Languages and Frameworks

@@ -74,7 +74,7 @@ To get your own custom application metrics into Prometheus, you have to collect

### ServiceMonitors and PodMonitors

-Once all your workloads expose metrics in a Prometheus format, you have to configure Prometheus to scrape it. Under the hood Rancher is using the [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator). This makes it easy to add additional scraping targets with ServiceMonitors and PodMonitors. A lot of helm charts already include an option to create these monitors directly. You can also find more information in the Rancher documentation.
+Once all your workloads expose metrics in a Prometheus format, you have to configure Prometheus to scrape it. Under the hood Rancher is using the [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator). This makes it easy to add additional scraping targets with ServiceMonitors and PodMonitors. Many Helm charts let you create these monitors directly.
You can also find more information in the Rancher documentation.

### Prometheus Push Gateway

diff --git a/versioned_docs/version-2.8/reference-guides/monitoring-v2-configuration/helm-chart-options.md b/versioned_docs/version-2.8/reference-guides/monitoring-v2-configuration/helm-chart-options.md
index 112c41c9ea08..42345561b204 100644
--- a/versioned_docs/version-2.8/reference-guides/monitoring-v2-configuration/helm-chart-options.md
+++ b/versioned_docs/version-2.8/reference-guides/monitoring-v2-configuration/helm-chart-options.md
@@ -49,13 +49,11 @@ An example of where this might be used is with Istio. For more information, see

## Configuring Applications Packaged within Monitoring v2

-We deploy kube-state-metrics and node-exporter with monitoring v2. Node exporter are deployed as DaemonSets. In the monitoring v2 helm chart, in the values.yaml, each of the things are deployed as sub charts.
+We deploy kube-state-metrics and node-exporter with monitoring v2. The node exporters are deployed as DaemonSets. Each of these is deployed as a sub-chart through the monitoring v2 Helm chart's values.yaml.

-We also deploy grafana which is not managed by prometheus.
+We also deploy Grafana, which is not managed by Prometheus.

-If you look at what the helm chart is doing like in kube-state-metrics, there are plenty more values that you can set that aren’t exposed in the top level chart.
-
-But in the top level chart you can add values that override values that exist in the sub chart.
+Many values aren’t exposed in the top level chart. However, you can add values to the top level chart to override values that exist in the sub-charts.

### Increase the Replicas of Alertmanager

diff --git a/versioned_docs/version-2.8/reference-guides/rancher-webhook.md b/versioned_docs/version-2.8/reference-guides/rancher-webhook.md
index afcf518b3009..0060448992de 100644
--- a/versioned_docs/version-2.8/reference-guides/rancher-webhook.md
+++ b/versioned_docs/version-2.8/reference-guides/rancher-webhook.md
@@ -60,7 +60,7 @@ helm upgrade --reuse-values rancher-webhook rancher-charts/rancher-webhook -n c
```
**Note:** This temporary workaround may violate an environment's security policy. This workaround also requires that port 9443 is unused on the host network.

-**Note:** Helm uses secrets by default. This is a datatype that some webhook versions validate to store information. In these cases, directly update the deployment with the hostNetwork=true value using kubectl, then run the helm commands listed above to avoid drift between the helm configuration and the actual state in the cluster.
+**Note:** Helm uses secrets by default to store information, and some webhook versions validate this datatype. In these cases, directly update the deployment with the hostNetwork=true value using kubectl, then run the Helm commands listed above to prevent drift between the Helm configuration and the actual state of the cluster.
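A hedged sketch of that kubectl step (the deployment name and namespace follow the chart defaults used in the commands above; adjust as needed):

```bash
# Illustrative only: set hostNetwork directly on the live deployment, then
# re-run the helm upgrade shown earlier so Helm's records match the cluster.
kubectl -n cattle-system patch deployment rancher-webhook \
  --type=merge -p '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'
```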
### Private GKE Cluster From 4c0aa9d0089d032d97b0b9a3134cb1f4906e211a Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Wed, 6 Dec 2023 13:19:15 -0800 Subject: [PATCH 56/65] Remove preview label from 2.8 --- docusaurus.config.js | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docusaurus.config.js b/docusaurus.config.js index 91447e069503..e1d583cafa31 100644 --- a/docusaurus.config.js +++ b/docusaurus.config.js @@ -185,7 +185,7 @@ module.exports = { label: 'Latest', }, 2.8: { - label: 'v2.8 (Preview)', + label: 'v2.8', path: 'v2.8', banner: 'unreleased' }, From 7b30e7ecb7ede0449415b7ebc361fabe390cb34c Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Wed, 6 Dec 2023 15:05:19 -0800 Subject: [PATCH 57/65] Remove unreleased banner --- docusaurus.config.js | 1 - 1 file changed, 1 deletion(-) diff --git a/docusaurus.config.js b/docusaurus.config.js index e1d583cafa31..c99708cecfd2 100644 --- a/docusaurus.config.js +++ b/docusaurus.config.js @@ -187,7 +187,6 @@ module.exports = { 2.8: { label: 'v2.8', path: 'v2.8', - banner: 'unreleased' }, 2.7: { label: 'v2.7', From e479cc33e34529cf0b99d2e36bdc0b54a5404a52 Mon Sep 17 00:00:00 2001 From: Marty Hernandez Avedon Date: Wed, 6 Dec 2023 18:35:48 -0500 Subject: [PATCH 58/65] rm 'add additional' phrase from docs (#1020) * rm 'add additional' phrase from docs * these > more --- docs/faq/technical-items.md | 4 ++-- .../installation-references/helm-chart-options.md | 4 ++-- .../rancher-behind-an-http-proxy/install-kubernetes.md | 2 +- .../istio-setup-guide/enable-istio-in-cluster.md | 2 +- .../ingress-configuration.md | 2 +- .../vsphere/create-a-vm-template.md | 2 +- .../selectors-and-scrape-configurations.md | 2 +- .../custom-resource-configuration/flows-and-clusterflows.md | 2 +- .../monitoring-and-alerting/how-monitoring-works.md | 2 +- docs/pages-for-subheaders/configuration-options.md | 2 +- docs/pages-for-subheaders/use-windows-clusters.md | 2 +- .../rancher-managed-clusters/monitoring-best-practices.md | 2 +- .../rancher-server-configuration/k3s-cluster-configuration.md | 2 +- .../rke2-cluster-configuration.md | 2 +- .../single-node-rancher-in-docker/http-proxy-configuration.md | 2 +- .../integrations-in-rancher/cluster-logging/fluentd.md | 2 +- versioned_docs/version-2.0-2.4/faq/technical-items.md | 4 ++-- .../advanced-use-cases/helm2/helm-rancher/chart-options.md | 4 ++-- .../new-user-guides/helm-charts-in-rancher/globaldns.md | 4 ++-- .../create-an-amazon-ec2-cluster.md | 2 +- .../use-windows-clusters/v2.1-v2.2.md | 2 +- .../load-balancer-and-ingress-controller/add-ingresses.md | 2 +- .../pages-for-subheaders/use-windows-clusters.md | 2 +- .../installation-references/helm-chart-options.md | 4 ++-- .../reference-guides/pipelines/pipeline-configuration.md | 2 +- .../selectors-and-scrape-configurations.md | 2 +- .../custom-resource-configuration/flows-and-clusterflows.md | 4 ++-- .../monitoring-and-alerting/how-monitoring-works.md | 2 +- versioned_docs/version-2.5/faq/technical-items.md | 4 ++-- .../istio-setup-guide/enable-istio-in-cluster.md | 2 +- .../load-balancer-and-ingress-controller/add-ingresses.md | 2 +- .../version-2.5/pages-for-subheaders/configuration-options.md | 2 +- .../version-2.5/pages-for-subheaders/use-windows-clusters.md | 2 +- .../rancher-managed-clusters/monitoring-best-practices.md | 2 +- .../rancherd-configuration-reference.md | 2 +- .../installation-references/helm-chart-options.md | 4 ++-- versioned_docs/version-2.6/faq/technical-items.md | 4 ++-- .../installation-references/helm-chart-options.md | 4 ++-- 
.../rancher-behind-an-http-proxy/install-kubernetes.md | 2 +- .../istio-setup-guide/enable-istio-in-cluster.md | 4 ++-- .../ingress-configuration.md | 2 +- .../vsphere/create-a-vm-template.md | 2 +- .../selectors-and-scrape-configurations.md | 2 +- .../custom-resource-configuration/flows-and-clusterflows.md | 2 +- .../monitoring-and-alerting/how-monitoring-works.md | 2 +- .../version-2.6/pages-for-subheaders/configuration-options.md | 2 +- .../version-2.6/pages-for-subheaders/use-windows-clusters.md | 2 +- .../rancher-managed-clusters/monitoring-best-practices.md | 2 +- .../rancher-server-configuration/k3s-cluster-configuration.md | 2 +- .../rke2-cluster-configuration.md | 2 +- .../reference-guides/pipelines/pipeline-configuration.md | 2 +- .../single-node-rancher-in-docker/http-proxy-configuration.md | 2 +- versioned_docs/version-2.7/faq/technical-items.md | 4 ++-- .../installation-references/helm-chart-options.md | 4 ++-- .../rancher-behind-an-http-proxy/install-kubernetes.md | 2 +- .../istio-setup-guide/enable-istio-in-cluster.md | 2 +- .../ingress-configuration.md | 2 +- .../vsphere/create-a-vm-template.md | 2 +- .../selectors-and-scrape-configurations.md | 2 +- .../custom-resource-configuration/flows-and-clusterflows.md | 2 +- .../monitoring-and-alerting/how-monitoring-works.md | 2 +- .../version-2.7/pages-for-subheaders/configuration-options.md | 2 +- .../version-2.7/pages-for-subheaders/use-windows-clusters.md | 2 +- .../rancher-managed-clusters/monitoring-best-practices.md | 2 +- .../rancher-server-configuration/k3s-cluster-configuration.md | 2 +- .../rke2-cluster-configuration.md | 2 +- .../single-node-rancher-in-docker/http-proxy-configuration.md | 2 +- versioned_docs/version-2.8/faq/technical-items.md | 4 ++-- .../installation-references/helm-chart-options.md | 4 ++-- .../rancher-behind-an-http-proxy/install-kubernetes.md | 2 +- .../istio-setup-guide/enable-istio-in-cluster.md | 2 +- .../ingress-configuration.md | 2 +- .../vsphere/create-a-vm-template.md | 2 +- .../selectors-and-scrape-configurations.md | 2 +- .../custom-resource-configuration/flows-and-clusterflows.md | 2 +- .../monitoring-and-alerting/how-monitoring-works.md | 2 +- .../version-2.8/pages-for-subheaders/configuration-options.md | 2 +- .../version-2.8/pages-for-subheaders/use-windows-clusters.md | 2 +- .../rancher-managed-clusters/monitoring-best-practices.md | 2 +- .../rancher-server-configuration/k3s-cluster-configuration.md | 2 +- .../rke2-cluster-configuration.md | 2 +- .../single-node-rancher-in-docker/http-proxy-configuration.md | 2 +- 82 files changed, 98 insertions(+), 98 deletions(-) diff --git a/docs/faq/technical-items.md b/docs/faq/technical-items.md index 0aed55b5d0db..8437ee3995ca 100644 --- a/docs/faq/technical-items.md +++ b/docs/faq/technical-items.md @@ -93,9 +93,9 @@ When the IP address of the node changed, Rancher lost connection to the node, so When the node is removed from the cluster, and the node is cleaned, you can readd the node to the cluster. -### How can I add additional arguments/binds/environment variables to Kubernetes components in a Rancher Launched Kubernetes cluster? +### How can I add more arguments/binds/environment variables to Kubernetes components in a Rancher Launched Kubernetes cluster? -You can add additional arguments/binds/environment variables via the [Config File](../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md#rke-cluster-config-file-reference) option in Cluster Options. 
For more information, see the [Extra Args, Extra Binds, and Extra Environment Variables](https://rancher.com/docs/rke/latest/en/config-options/services/services-extras/) in the RKE documentation or browse the [Example Cluster.ymls](https://rancher.com/docs/rke/latest/en/example-yamls/). +You can add more arguments/binds/environment variables via the [Config File](../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md#rke-cluster-config-file-reference) option in Cluster Options. For more information, see the [Extra Args, Extra Binds, and Extra Environment Variables](https://rancher.com/docs/rke/latest/en/config-options/services/services-extras/) in the RKE documentation or browse the [Example Cluster.ymls](https://rancher.com/docs/rke/latest/en/example-yamls/). ### How do I check if my certificate chain is valid? diff --git a/docs/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md b/docs/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md index 621119862c80..340f33603d10 100644 --- a/docs/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md +++ b/docs/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md @@ -47,7 +47,7 @@ For information on enabling experimental features, refer to [this page.](../../. | `debug` | false | `bool` - set debug flag on rancher server | | `extraEnv` | [] | `list` - set additional environment variables for Rancher | | `imagePullSecrets` | [] | `list` - list of names of Secret resource containing private registry credentials | -| `ingress.configurationSnippet` | "" | `string` - Add additional Nginx configuration. Can be used for proxy configuration. | +| `ingress.configurationSnippet` | "" | `string` - additional Nginx configuration. Can be used for proxy configuration. | | `ingress.extraAnnotations` | {} | `map` - additional annotations to customize the ingress | | `ingress.enabled` | true | When set to false, Helm will not install a Rancher ingress. Set the option to false to deploy your own ingress. | | `letsEncrypt.ingress.class` | "" | `string` - optional ingress class for the cert-manager acmesolver ingress that responds to the Let's Encrypt ACME challenges. Options: traefik, nginx. | | @@ -192,7 +192,7 @@ To learn more about how to configure environment variables, refer to [Define Env ### Additional Trusted CAs -If you have private registries, catalogs or a proxy that intercepts certificates, you may need to add additional trusted CAs to Rancher. +If you have private registries, catalogs or a proxy that intercepts certificates, you may need to add more trusted CAs to Rancher. ```plain --set additionalTrustedCAs=true diff --git a/docs/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-kubernetes.md b/docs/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-kubernetes.md index baee092d0aee..9f3654619d37 100644 --- a/docs/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-kubernetes.md +++ b/docs/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-kubernetes.md @@ -145,7 +145,7 @@ sudo systemctl restart docker You can now provision node driver clusters from an air-gapped cluster configured to use a proxy for outbound connections. 
-In addition to setting the default rules for a proxy server, you will need to add additional rules, shown below, to provision node driver clusters from a proxied Rancher environment. +In addition to setting the default rules for a proxy server, you must also add the rules shown below to provision node driver clusters from a proxied Rancher environment. You will configure your filepath according to your setup, e.g., `/etc/apt/apt.conf.d/proxy.conf`: diff --git a/docs/how-to-guides/advanced-user-guides/istio-setup-guide/enable-istio-in-cluster.md b/docs/how-to-guides/advanced-user-guides/istio-setup-guide/enable-istio-in-cluster.md index 09624abc1bc0..3ca93936f1f2 100644 --- a/docs/how-to-guides/advanced-user-guides/istio-setup-guide/enable-istio-in-cluster.md +++ b/docs/how-to-guides/advanced-user-guides/istio-setup-guide/enable-istio-in-cluster.md @@ -23,7 +23,7 @@ title: 1. Enable Istio in the Cluster 1. If you have not already installed your own monitoring app, you will be prompted to install the rancher-monitoring app. Optional: Set your Selector or Scrape config options on rancher-monitoring app install. 1. Optional: Configure member access and [resource limits](../../../integrations-in-rancher/istio/cpu-and-memory-allocations.md) for the Istio components. Ensure you have enough resources on your worker nodes to enable Istio. 1. Optional: Make additional configuration changes to values.yaml if needed. -1. Optional: Add additional resources or configuration via the [overlay file.](../../../pages-for-subheaders/configuration-options.md#overlay-file) +1. Optional: Add further resources or configuration via the [overlay file](../../../pages-for-subheaders/configuration-options.md#overlay-file). 1. Click **Install**. **Result:** Istio is installed at the cluster level. diff --git a/docs/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/ingress-configuration.md b/docs/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/ingress-configuration.md index 11bdcffc4c6f..4202912594f0 100644 --- a/docs/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/ingress-configuration.md +++ b/docs/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/ingress-configuration.md @@ -36,7 +36,7 @@ You must have an SSL certificate that Ingress can use to encrypt and decrypt com 1. Click **Add Certificate**. 1. Select a **Certificate - Secret Name** from the drop-down list. 1. Enter the host using encrypted communication. -1. To add additional hosts that use the certificate, click **Add Hosts**. +1. To add more hosts that use the same certificate, click **Add Hosts**. 
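+For reference, a minimal Ingress sketch (all names are placeholders) showing two hosts served with the same certificate secret:
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: example-ingress            # placeholder
+spec:
+  tls:
+    - hosts:                       # every host that uses this certificate
+        - app.example.com
+        - api.example.com
+      secretName: example-cert     # the certificate secret selected above
+  rules:
+    - host: app.example.com
+      http:
+        paths:
+          - path: /
+            pathType: Prefix
+            backend:
+              service:
+                name: example-service
+                port:
+                  number: 80
+```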
## Labels and Annotations diff --git a/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md b/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md index c8a6da86cf2f..e2974c08d3f0 100644 --- a/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md +++ b/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md @@ -13,7 +13,7 @@ In order to leverage the template to create new VMs, Rancher has some [specific ## Requirements -There is specific tooling required for both Linux and Windows VMs to be usable by the vSphere node driver. The most critical dependency is [cloud-init](https://cloud-init.io/) for Linux and [cloudbase-init](https://cloudbase.it/cloudbase-init/) for Windows. Both of these are used for provisioning the VMs by configuring the hostname and by setting up the SSH access and the default Rancher user. Users can add additional content to these as desired if other configuration is needed. In addition, other requirements are listed below for reference. +There is specific tooling required for both Linux and Windows VMs to be usable by the vSphere node driver. The most critical dependency is [cloud-init](https://cloud-init.io/) for Linux and [cloudbase-init](https://cloudbase.it/cloudbase-init/) for Windows. Both of these are used for provisioning the VMs by configuring the hostname and by setting up the SSH access and the default Rancher user. Users can add more content to these as desired if other configuration is needed. In addition, other requirements are listed below for reference. :::note diff --git a/docs/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md b/docs/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md index 29b51149c972..ede0c2af94e7 100644 --- a/docs/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md +++ b/docs/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md @@ -10,7 +10,7 @@ The Monitoring app sets `prometheus.prometheusSpec.ignoreNamespaceSelectors=fals This ensures you can view traffic, metrics and graphs for resources deployed in a namespace with `istio-injection=enabled` label. -If you would like to limit Prometheus to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`. Once you do this, you will need to add additional configuration to continue to monitor your resources. +If you would like to limit Prometheus to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`. Once you do this, you must perform some additional configuration to continue to monitor your resources. 
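+As a sketch, the setting lives under the Prometheus spec in the Monitoring chart values:
+
+```yaml
+# Monitoring chart values: ignore ServiceMonitor/PodMonitor namespace
+# selectors so each monitor only discovers targets in its own namespace
+prometheus:
+  prometheusSpec:
+    ignoreNamespaceSelectors: true
+```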
### Limiting Monitoring to Specific Namespaces by Setting ignoreNamespaceSelectors to True diff --git a/docs/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md b/docs/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md index d6d2ccd67e27..65707e0980ae 100644 --- a/docs/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md +++ b/docs/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md @@ -32,7 +32,7 @@ For detailed examples on using the match statement, see the [official documentat ### Filters -You can define one or more filters within a `Flow`. Filters can perform various actions on the logs, for example, add additional data, transform the logs, or parse values from the records. The filters in the `Flow` are applied in the order in the definition. +You can define one or more filters within a `Flow`. Filters can perform various actions on the logs, such as adding data, transforming the logs, or parsing values from the records. The filters in the `Flow` are applied in the same order they appear in the definition. For a list of filters supported by the Logging operator, see [the official documentation on Fluentd filters](https://kube-logging.github.io/docs/configuration/plugins/filters/). diff --git a/docs/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md b/docs/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md index bea67b1dc8f3..8f14e7faa069 100644 --- a/docs/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md +++ b/docs/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md @@ -88,7 +88,7 @@ A PrometheusRule allows you to define one or more RuleGroups. Each RuleGroup con - Labels that should be attached to the alert or record that identify it (e.g. cluster name or severity) - Annotations that encode any additional important pieces of information that need to be displayed on the notification for an alert (e.g. summary, description, message, runbook URL, etc.). This field is not required for recording rules. -On evaluating a [rule](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#rule), Prometheus will execute the provided PromQL query, add additional provided labels (or annotations - only for alerting rules), and execute the appropriate action for the rule. For example, an Alerting Rule that adds `team: front-end` as a label to the provided PromQL query will append that label to the fired alert, which will allow Alertmanager to forward the alert to the correct Receiver. +Upon evaluating a [rule](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#rule), Prometheus runs the provided PromQL query, adds the provided labels, and runs the appropriate action for the rule. If the rule triggers an alert, Prometheus also adds the provided annotations. For example, an Alerting Rule that adds `team: front-end` as a label to the provided PromQL query will append that label to the fired alert, which will allow Alertmanager to forward the alert to the correct Receiver. 
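+As an illustration, a minimal PrometheusRule sketch (the rule name, expression, and namespace are placeholders) that attaches a routing label and an annotation to a fired alert:
+
+```yaml
+apiVersion: monitoring.coreos.com/v1
+kind: PrometheusRule
+metadata:
+  name: example-rules                  # placeholder
+  namespace: cattle-monitoring-system  # assumed monitoring namespace
+spec:
+  groups:
+    - name: example.rules
+      rules:
+        - alert: HighErrorRate         # alerting rule
+          expr: 'rate(http_requests_total{code="500"}[5m]) > 0.05'
+          for: 10m
+          labels:
+            team: front-end            # lets Alertmanager route to the right Receiver
+          annotations:
+            summary: "High 5xx rate on {{ $labels.instance }}"
+```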
### Alerting and Recording Rules diff --git a/docs/pages-for-subheaders/configuration-options.md b/docs/pages-for-subheaders/configuration-options.md index 92a375e948db..fdfc51d41bcb 100644 --- a/docs/pages-for-subheaders/configuration-options.md +++ b/docs/pages-for-subheaders/configuration-options.md @@ -26,7 +26,7 @@ For more information on Overlay Files, refer to the [Istio documentation.](https The Monitoring app sets `prometheus.prometheusSpec.ignoreNamespaceSelectors=false` which enables monitoring across all namespaces by default. This ensures you can view traffic, metrics and graphs for resources deployed in a namespace with `istio-injection=enabled` label. -If you would like to limit Prometheus to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`. Once you do this, you will need to add additional configuration to continue to monitor your resources. +If you would like to limit Prometheus to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`. Once you do this, you must perform some additional configuration to continue to monitor your resources. For details, refer to [this section.](../integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md) diff --git a/docs/pages-for-subheaders/use-windows-clusters.md b/docs/pages-for-subheaders/use-windows-clusters.md index 36fe47c422f9..2e76e0ca874e 100644 --- a/docs/pages-for-subheaders/use-windows-clusters.md +++ b/docs/pages-for-subheaders/use-windows-clusters.md @@ -120,7 +120,7 @@ The `worker` nodes, which is where your workloads will be deployed on, will typi #### Recommended Architecture -We recommend the minimum three-node architecture listed in the table below, but you can always add additional Linux and Windows workers to scale up your cluster for redundancy: +We recommend the minimum three-node architecture listed in the table below, but you can always add more Linux and Windows workers to scale up your cluster for redundancy: | Node | Operating System | Kubernetes Cluster Role(s) | Purpose | | ------ | --------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------- | diff --git a/docs/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md b/docs/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md index 19609e66a854..3fd02858c9af 100644 --- a/docs/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md +++ b/docs/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md @@ -74,7 +74,7 @@ To get your own custom application metrics into Prometheus, you have to collect ### ServiceMonitors and PodMonitors -Once all your workloads expose metrics in a Prometheus format, you have to configure Prometheus to scrape it. Under the hood Rancher is using the [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator). This makes it easy to add additional scraping targets with ServiceMonitors and PodMonitors. Many Helm charts let you to create these monitors directly. You can also find more information in the Rancher documentation. 
+Once all of your workloads expose metrics in a Prometheus format, you must configure Prometheus to scrape them. Under the hood, Rancher uses the [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator). This makes it easy to add scraping targets with ServiceMonitors and PodMonitors. Many Helm charts already include an option to create these monitors directly. You can also find more information in the Rancher documentation. ### Prometheus Push Gateway diff --git a/docs/reference-guides/cluster-configuration/rancher-server-configuration/k3s-cluster-configuration.md b/docs/reference-guides/cluster-configuration/rancher-server-configuration/k3s-cluster-configuration.md index 906fac616e67..76f081eb0530 100644 --- a/docs/reference-guides/cluster-configuration/rancher-server-configuration/k3s-cluster-configuration.md +++ b/docs/reference-guides/cluster-configuration/rancher-server-configuration/k3s-cluster-configuration.md @@ -176,7 +176,7 @@ Truncating hostnames in a cluster improves compatibility with Windows-based syst ##### TLS Alternate Names -Add additional hostnames or IPv4/IPv6 addresses as Subject Alternative Names on the server TLS cert. +Add hostnames or IPv4/IPv6 addresses as Subject Alternative Names on the server TLS cert. ##### Authorized Cluster Endpoint diff --git a/docs/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md b/docs/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md index 27720506b136..5c48ec35e235 100644 --- a/docs/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md +++ b/docs/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md @@ -225,7 +225,7 @@ Truncating hostnames in a cluster improves compatibility with Windows-based syst ##### TLS Alternate Names -Add additional hostnames or IPv4/IPv6 addresses as Subject Alternative Names on the server TLS cert. +Add hostnames or IPv4/IPv6 addresses as Subject Alternative Names on the server TLS cert. ##### Authorized Cluster Endpoint diff --git a/docs/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md b/docs/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md index 3bdfbf449d89..bc81ed966c2b 100644 --- a/docs/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md +++ b/docs/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md @@ -52,7 +52,7 @@ Privileged access is [required.](../../pages-for-subheaders/rancher-on-a-single- You can now provision node driver clusters from an air-gapped cluster configured to use a proxy for outbound connections. -In addition to setting the default rules for a proxy server as shown above, you will need to add additional rules, shown below, to provision node driver clusters from a proxied Rancher environment. 
+In addition to setting the default rules for a proxy server as shown above, you must also add the following rules to provision node driver clusters from a proxied Rancher environment: You will configure your filepath according to your setup, e.g., `/etc/apt/apt.conf.d/proxy.conf`: diff --git a/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/cluster-logging/fluentd.md b/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/cluster-logging/fluentd.md index a5a73629d8a7..ed8f1ef31a13 100644 --- a/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/cluster-logging/fluentd.md +++ b/versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/cluster-logging/fluentd.md @@ -10,7 +10,7 @@ If your organization uses [Fluentd](https://www.fluentd.org/), you can configure ## Fluentd Configuration -You can add multiple Fluentd Servers. If you want to add additional Fluentd servers, click **Add Fluentd Server**. For each Fluentd server, complete the configuration information: +You can add multiple Fluentd Servers. First, click **Add Fluentd Server**. For each Fluentd server, complete the configuration information: 1. In the **Endpoint** field, enter the address and port of your Fluentd instance, e.g. `http://Fluentd-server:24224`. diff --git a/versioned_docs/version-2.0-2.4/faq/technical-items.md b/versioned_docs/version-2.0-2.4/faq/technical-items.md index e939f99cc288..8047541db255 100644 --- a/versioned_docs/version-2.0-2.4/faq/technical-items.md +++ b/versioned_docs/version-2.0-2.4/faq/technical-items.md @@ -116,9 +116,9 @@ When the IP address of the node changed, Rancher lost connection to the node, so When the node is removed from the cluster, and the node is cleaned, you can readd the node to the cluster. -### How can I add additional arguments/binds/environment variables to Kubernetes components in a Rancher Launched Kubernetes cluster? +### How can I add more arguments/binds/environment variables to Kubernetes components in a Rancher Launched Kubernetes cluster? -You can add additional arguments/binds/environment variables via the [Config File](../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md#cluster-config-file) option in Cluster Options. For more information, see the [Extra Args, Extra Binds, and Extra Environment Variables](https://rancher.com/docs/rke/latest/en/config-options/services/services-extras/) in the RKE documentation or browse the [Example Cluster.ymls](https://rancher.com/docs/rke/latest/en/example-yamls/). +You can add more arguments/binds/environment variables via the [Config File](../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md#cluster-config-file) option in Cluster Options. For more information, see the [Extra Args, Extra Binds, and Extra Environment Variables](https://rancher.com/docs/rke/latest/en/config-options/services/services-extras/) in the RKE documentation or browse the [Example Cluster.ymls](https://rancher.com/docs/rke/latest/en/example-yamls/). ### How do I check if my certificate chain is valid? 
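+For the arguments/binds/environment variables question above, a rough `cluster.yml` sketch (the flag, bind, and variable shown are placeholders; see the RKE services reference for supported keys):
+
+```yaml
+# Per-service extras in an RKE cluster config file
+services:
+  kube-api:
+    extra_args:
+      audit-log-path: /var/log/kube-audit/audit.log   # placeholder flag
+    extra_binds:
+      - "/var/log/kube-audit:/var/log/kube-audit"     # host:container bind
+    extra_env:
+      - "HTTPS_PROXY=http://proxy.example.com:8888"   # placeholder variable
+```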
diff --git a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/helm-rancher/chart-options.md b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/helm-rancher/chart-options.md index 9e976ffe7650..42a9a0468a8a 100644 --- a/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/helm-rancher/chart-options.md +++ b/versioned_docs/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/helm-rancher/chart-options.md @@ -32,7 +32,7 @@ title: Chart Options | `extraEnv` | [] | `list` - set additional environment variables for Rancher _Note: Available as of v2.2.0_ | | `imagePullSecrets` | [] | `list` - list of names of Secret resource containing private registry credentials | | `ingress.extraAnnotations` | {} | `map` - additional annotations to customize the ingress | -| `ingress.configurationSnippet` | "" | `string` - Add additional Nginx configuration. Can be used for proxy configuration. _Note: Available as of v2.0.15, v2.1.10 and v2.2.4_ | +| `ingress.configurationSnippet` | "" | `string` - additional Nginx configuration. Can be used for proxy configuration. _Note: Available as of v2.0.15, v2.1.10 and v2.2.4_ | | `proxy` | "" | `string` - HTTP[S] proxy server for Rancher | | `noProxy` | "127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16" | `string` - comma separated list of hostnames or ip address not to use the proxy | | `resources` | {} | `map` - rancher pod resource requests & limits | @@ -125,7 +125,7 @@ Add your IP exceptions to the `noProxy` list. Make sure you add the Service clus ### Additional Trusted CAs -If you have private registries, catalogs or a proxy that intercepts certificates, you may need to add additional trusted CAs to Rancher. +If you have private registries, catalogs or a proxy that intercepts certificates, you may need to add more trusted CAs to Rancher. ```plain --set additionalTrustedCAs=true diff --git a/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/globaldns.md b/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/globaldns.md index 8aace5af93a2..5edbf4d38304 100644 --- a/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/globaldns.md +++ b/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/globaldns.md @@ -39,13 +39,13 @@ For each application that you want to route traffic to, you will need to create ## Permissions for Global DNS Providers and Entries -By default, only [global administrators](../../advanced-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md) and the creator of the Global DNS provider or Global DNS entry have access to use, edit and delete them. When creating the provider or entry, the creator can add additional users in order for those users to access and manage them. By default, these members will get `Owner` role to manage them. +By default, only [global administrators](../../advanced-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md) and the creator of the Global DNS provider or Global DNS entry have access to use, edit and delete them. 
When creating the provider or entry, the creator can add more users so that those users can access and manage them. By default, these members are given the `Owner` role.

## Setting up Global DNS for Applications

1. From the **Global View**, select **Tools > Global DNS Providers**.
1. To add a provider, choose from the available provider options and configure the Global DNS Provider with necessary credentials and an optional domain. For help, see [DNS Provider Configuration.](#dns-provider-configuration)
-1. (Optional) Add additional users so they could use the provider when creating Global DNS entries as well as manage the Global DNS provider.
+1. (Optional) Add more users so they can also use the provider when creating Global DNS entries and manage the Global DNS provider.
1. (Optional) Pass any custom values in the Additional Options section.

## Adding a Global DNS Entry

diff --git a/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md b/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md
index 5d509e46c84e..da9bd307fd85 100644
--- a/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md
+++ b/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md
@@ -88,7 +88,7 @@ You can access your cluster after its state is updated to **Active.**

1. Use **Cluster Options** to choose the version of Kubernetes that will be installed, what network provider will be used and if you want to enable project network isolation. To see more cluster options, click on **Show advanced options.** Refer to [Selecting Cloud Providers](../../../../../pages-for-subheaders/set-up-cloud-providers.md) to configure the Kubernetes Cloud Provider. For help configuring the cluster, refer to the [RKE cluster configuration reference.](../../../../../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md)
1. Add one or more node pools to your cluster. Each node pool uses a node template to provision new nodes. For more information about node pools, including best practices for assigning Kubernetes roles to them, see [this section.](../../../../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md) To create a node template, click **Add Node Template**. For help filling out the node template, refer to [EC2 Node Template Configuration.](../../../../../reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md)
1. Click **Create**.
-1. **Optional:** Add additional node pools.
+1. **Optional:** Add more node pools.
1. Review your cluster settings to confirm they are correct. Then click **Create**.
**Result:**

diff --git a/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-windows-clusters/v2.1-v2.2.md b/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-windows-clusters/v2.1-v2.2.md
index a9a0f6808779..21e4dcbeb2bb 100644
--- a/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-windows-clusters/v2.1-v2.2.md
+++ b/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-windows-clusters/v2.1-v2.2.md
@@ -50,7 +50,7 @@ Node 3 | Windows (Windows Server core version 1809 or above) | Worker

- You can view node requirements for Linux and Windows nodes in the [installation section](../../../../../pages-for-subheaders/installation-requirements.md).
- All nodes in a virtualization cluster or a bare metal cluster must be connected using a layer 2 network.
- To support [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/), your cluster must include at least one Linux node dedicated to the worker role.
-- Although we recommend the three node architecture listed in the table above, you can add additional Linux and Windows workers to scale up your cluster for redundancy.
+- Although we recommend the three-node architecture listed in the table above, you can add more Linux and Windows workers to scale up your cluster for redundancy.

## 2. Cloud-hosted VM Networking Configuration

diff --git a/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/add-ingresses.md b/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/add-ingresses.md
index 7d3932488107..2201f73988dc 100644
--- a/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/add-ingresses.md
+++ b/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/add-ingresses.md
@@ -63,7 +63,7 @@ Use this option to set an ingress rule for handling requests that don't match an

1. Click **Add Certificate**.
1. Select a **Certificate** from the drop-down list.
1. Enter the **Host** using encrypted communication.
-1. To add additional hosts that use the certificate, click **Add Hosts**.
+1. To add more hosts that use the same certificate, click **Add Hosts**.
### Labels and Annotations diff --git a/versioned_docs/version-2.0-2.4/pages-for-subheaders/use-windows-clusters.md b/versioned_docs/version-2.0-2.4/pages-for-subheaders/use-windows-clusters.md index 989f739c25c9..0d4a5c9222bb 100644 --- a/versioned_docs/version-2.0-2.4/pages-for-subheaders/use-windows-clusters.md +++ b/versioned_docs/version-2.0-2.4/pages-for-subheaders/use-windows-clusters.md @@ -78,7 +78,7 @@ The `worker` nodes, which is where your workloads will be deployed on, will typi #### Recommended Architecture -We recommend the minimum three-node architecture listed in the table below, but you can always add additional Linux and Windows workers to scale up your cluster for redundancy: +We recommend the minimum three-node architecture listed in the table below, but you can always add more Linux and Windows workers to scale up your cluster for redundancy: | Node | Operating System | Kubernetes Cluster Role(s) | Purpose | | ------ | --------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------- | diff --git a/versioned_docs/version-2.0-2.4/reference-guides/installation-references/helm-chart-options.md b/versioned_docs/version-2.0-2.4/reference-guides/installation-references/helm-chart-options.md index 89c26641353b..bd59b818e08f 100644 --- a/versioned_docs/version-2.0-2.4/reference-guides/installation-references/helm-chart-options.md +++ b/versioned_docs/version-2.0-2.4/reference-guides/installation-references/helm-chart-options.md @@ -44,7 +44,7 @@ For information on enabling experimental features, refer to [this page.](../../p | `debug` | false | `bool` - set debug flag on rancher server | | `extraEnv` | [] | `list` - set additional environment variables for Rancher _Note: Available as of v2.2.0_ | | `imagePullSecrets` | [] | `list` - list of names of Secret resource containing private registry credentials | -| `ingress.configurationSnippet` | "" | `string` - Add additional Nginx configuration. Can be used for proxy configuration. _Note: Available as of v2.0.15, v2.1.10 and v2.2.4_ | +| `ingress.configurationSnippet` | "" | `string` - additional Nginx configuration. Can be used for proxy configuration. _Note: Available as of v2.0.15, v2.1.10 and v2.2.4_ | | `ingress.extraAnnotations` | {} | `map` - additional annotations to customize the ingress | | `letsEncrypt.ingress.class` | "" | `string` - optional ingress class for the cert-manager acmesolver ingress that responds to the Let's Encrypt ACME challenges. Options: traefik, nginx. | | | `noProxy` | "127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local,cattle-system.svc" | `string` - comma separated list of hostnames or ip address not to use the proxy | | @@ -137,7 +137,7 @@ Add your IP exceptions to the `noProxy` list. Make sure you add the Pod cluster ### Additional Trusted CAs -If you have private registries, catalogs or a proxy that intercepts certificates, you may need to add additional trusted CAs to Rancher. +If you have private registries, catalogs or a proxy that intercepts certificates, you may need to add more trusted CAs to Rancher. 
```plain
--set additionalTrustedCAs=true

diff --git a/versioned_docs/version-2.0-2.4/reference-guides/pipelines/pipeline-configuration.md b/versioned_docs/version-2.0-2.4/reference-guides/pipelines/pipeline-configuration.md
index 08a6a8d41043..adbf3e209915 100644
--- a/versioned_docs/version-2.0-2.4/reference-guides/pipelines/pipeline-configuration.md
+++ b/versioned_docs/version-2.0-2.4/reference-guides/pipelines/pipeline-configuration.md
@@ -302,7 +302,7 @@ _Available as of v2.2.0_

> **Note:** Notifiers are configured at a cluster level and require a different level of permissions.

-1. For each recipient, select which notifier type from the dropdown. Based on the type of notifier, you can use the default recipient or override the recipient with a different one. For example, if you have a notifier for _Slack_, you can update which channel to send the notification to. You can add additional notifiers by clicking **Add Recipient**.
+1. For each recipient, select a notifier type from the dropdown. Based on the type of notifier, you can use the default recipient or override the recipient with a different one. For example, if you have a notifier for _Slack_, you can update which channel to send the notification to. You can add more notifiers by clicking **Add Recipient**.

### Configuring Notifications by YAML
_Available as of v2.2.0_

diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md
index 449d1a336dda..fbcb6bc1ac20 100644
--- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md
+++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md
@@ -10,7 +10,7 @@ The Monitoring app sets `prometheus.prometheusSpec.ignoreNamespaceSelectors=fals

This ensures you can view traffic, metrics and graphs for resources deployed in a namespace with `istio-injection=enabled` label.

-If you would like to limit Prometheus to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`. Once you do this, you will need to add additional configuration to continue to monitor your resources.
+If you would like to limit Prometheus to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`. Once you do this, you must perform some additional configuration to continue to monitor your resources.

### Limiting Monitoring to Specific Namespaces by Setting ignoreNamespaceSelectors to True

diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md
index 446662f54212..b8ff93429c07 100644
--- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md
+++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md
@@ -67,7 +67,7 @@ For detailed examples on using the match statement, see the [official documentat

-You can define one or more filters within a `Flow`.
Filters can perform various actions on the logs, for example, add additional data, transform the logs, or parse values from the records. The filters in the `Flow` are applied in the order in the definition. +You can define one or more filters within a `Flow`. Filters can perform various actions on the logs, such as adding data, transforming the logs, or parsing values from the records. The filters in the `Flow` are applied in the same order they appear in the definition. For a list of filters supported by the Logging operator, see [the official documentation on Fluentd filters](https://kube-logging.github.io/docs/configuration/plugins/filters/). @@ -78,7 +78,7 @@ Filters need to be configured in YAML. ### Filters -You can define one or more filters within a `Flow`. Filters can perform various actions on the logs, for example, add additional data, transform the logs, or parse values from the records. The filters in the `Flow` are applied in the order in the definition. +You can define one or more filters within a `Flow`. Filters can perform various actions on the logs, such as adding data, transforming the logs, or parsing values from the records. The filters in the `Flow` are applied in the same order they appear in the definition. For a list of filters supported by the Logging operator, see [the official documentation on Fluentd filters](https://kube-logging.github.io/docs/configuration/plugins/filters/). diff --git a/versioned_docs/version-2.5/explanations/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md b/versioned_docs/version-2.5/explanations/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md index e87edfc7a982..152275652aca 100644 --- a/versioned_docs/version-2.5/explanations/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md +++ b/versioned_docs/version-2.5/explanations/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md @@ -88,7 +88,7 @@ A PrometheusRule allows you to define one or more RuleGroups. Each RuleGroup con - Labels that should be attached to the alert or record that identify it (e.g. cluster name or severity) - Annotations that encode any additional important pieces of information that need to be displayed on the notification for an alert (e.g. summary, description, message, runbook URL, etc.). This field is not required for recording rules. -On evaluating a [rule](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#rule), Prometheus will execute the provided PromQL query, add additional provided labels (or annotations - only for alerting rules), and execute the appropriate action for the rule. For example, an Alerting Rule that adds `team: front-end` as a label to the provided PromQL query will append that label to the fired alert, which will allow Alertmanager to forward the alert to the correct Receiver. +Upon evaluating a [rule](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#rule), Prometheus runs the provided PromQL query, adds the provided labels, and runs the appropriate action for the rule. If the rule triggers an alert, Prometheus also adds the provided annotations. For example, an Alerting Rule that adds `team: front-end` as a label to the provided PromQL query will append that label to the fired alert, which will allow Alertmanager to forward the alert to the correct Receiver. 
### Alerting and Recording Rules diff --git a/versioned_docs/version-2.5/faq/technical-items.md b/versioned_docs/version-2.5/faq/technical-items.md index 7d3491a6c6f5..e6b3853a6e3c 100644 --- a/versioned_docs/version-2.5/faq/technical-items.md +++ b/versioned_docs/version-2.5/faq/technical-items.md @@ -93,9 +93,9 @@ When the IP address of the node changed, Rancher lost connection to the node, so When the node is removed from the cluster, and the node is cleaned, you can readd the node to the cluster. -### How can I add additional arguments/binds/environment variables to Kubernetes components in a Rancher Launched Kubernetes cluster? +### How can I add more arguments/binds/environment variables to Kubernetes components in a Rancher Launched Kubernetes cluster? -You can add additional arguments/binds/environment variables via the [Config File](../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md#editing-clusters-with-yaml) option in Cluster Options. For more information, see the [Extra Args, Extra Binds, and Extra Environment Variables](https://rancher.com/docs/rke/latest/en/config-options/services/services-extras/) in the RKE documentation or browse the [Example Cluster.ymls](https://rancher.com/docs/rke/latest/en/example-yamls/). +You can add more arguments/binds/environment variables via the [Config File](../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md#editing-clusters-with-yaml) option in Cluster Options. For more information, see the [Extra Args, Extra Binds, and Extra Environment Variables](https://rancher.com/docs/rke/latest/en/config-options/services/services-extras/) in the RKE documentation or browse the [Example Cluster.ymls](https://rancher.com/docs/rke/latest/en/example-yamls/). ### How do I check if my certificate chain is valid? diff --git a/versioned_docs/version-2.5/how-to-guides/advanced-user-guides/istio-setup-guide/enable-istio-in-cluster.md b/versioned_docs/version-2.5/how-to-guides/advanced-user-guides/istio-setup-guide/enable-istio-in-cluster.md index a6c4d1091219..76bdcc78c48b 100644 --- a/versioned_docs/version-2.5/how-to-guides/advanced-user-guides/istio-setup-guide/enable-istio-in-cluster.md +++ b/versioned_docs/version-2.5/how-to-guides/advanced-user-guides/istio-setup-guide/enable-istio-in-cluster.md @@ -18,7 +18,7 @@ title: 1. Enable Istio in the Cluster 1. If you have not already installed your own monitoring app, you will be prompted to install the rancher-monitoring app. Optional: Set your Selector or Scrape config options on rancher-monitoring app install. 1. Optional: Configure member access and [resource limits](../../../explanations/integrations-in-rancher/istio/cpu-and-memory-allocations.md) for the Istio components. Ensure you have enough resources on your worker nodes to enable Istio. 1. Optional: Make additional configuration changes to values.yaml if needed. -1. Optional: Add additional resources or configuration via the [overlay file.](../../../pages-for-subheaders/configuration-options.md#overlay-file) +1. Optional: Add further resources or configuration via the [overlay file](../../../pages-for-subheaders/configuration-options.md#overlay-file). 1. Click **Install**. **Result:** Istio is installed at the cluster level. 
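+For the overlay file step above, an illustrative sketch (the gateway component and service type are assumptions) using the upstream IstioOperator schema:
+
+```yaml
+apiVersion: install.istio.io/v1alpha1
+kind: IstioOperator
+spec:
+  components:
+    ingressGateways:
+      - name: istio-ingressgateway
+        enabled: true
+        k8s:
+          service:
+            type: NodePort   # example of a setting not exposed in values.yaml
+```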
diff --git a/versioned_docs/version-2.5/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/add-ingresses.md b/versioned_docs/version-2.5/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/add-ingresses.md index 5769a3754d6d..aafa01b0690c 100644 --- a/versioned_docs/version-2.5/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/add-ingresses.md +++ b/versioned_docs/version-2.5/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/add-ingresses.md @@ -63,7 +63,7 @@ Use this option to set an ingress rule for handling requests that don't match an 1. Click **Add Certificate**. 1. Select a **Certificate** from the drop-down list. 1. Enter the **Host** using encrypted communication. -1. To add additional hosts that use the certificate, click **Add Hosts**. +1. To add more hosts that use the same certificate, click **Add Hosts**. ### Labels and Annotations diff --git a/versioned_docs/version-2.5/pages-for-subheaders/configuration-options.md b/versioned_docs/version-2.5/pages-for-subheaders/configuration-options.md index be57a9921978..da9820b5f111 100644 --- a/versioned_docs/version-2.5/pages-for-subheaders/configuration-options.md +++ b/versioned_docs/version-2.5/pages-for-subheaders/configuration-options.md @@ -26,7 +26,7 @@ For more information on Overlay Files, refer to the [Istio documentation.](https The Monitoring app sets `prometheus.prometheusSpec.ignoreNamespaceSelectors=false` which enables monitoring across all namespaces by default. This ensures you can view traffic, metrics and graphs for resources deployed in a namespace with `istio-injection=enabled` label. -If you would like to limit Prometheus to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`. Once you do this, you will need to add additional configuration to continue to monitor your resources. +If you would like to limit Prometheus to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`. Once you do this, you must perform some additional configuration to continue to monitor your resources. 
For details, refer to [this section.](../explanations/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md) diff --git a/versioned_docs/version-2.5/pages-for-subheaders/use-windows-clusters.md b/versioned_docs/version-2.5/pages-for-subheaders/use-windows-clusters.md index 337756ff27fe..6bd6e6b61fc5 100644 --- a/versioned_docs/version-2.5/pages-for-subheaders/use-windows-clusters.md +++ b/versioned_docs/version-2.5/pages-for-subheaders/use-windows-clusters.md @@ -116,7 +116,7 @@ Clusters won't begin provisioning until all three node roles (worker, etcd and c #### Recommended Architecture -We recommend the minimum three-node architecture listed in the table below, but you can always add additional Linux and Windows workers to scale up your cluster for redundancy: +We recommend the minimum three-node architecture listed in the table below, but you can always add more Linux and Windows workers to scale up your cluster for redundancy: | Node | Operating System | Kubernetes Cluster Role(s) | Purpose | | ------ | --------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------- | diff --git a/versioned_docs/version-2.5/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md b/versioned_docs/version-2.5/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md index 38750c4cb3ff..2ad299f1210f 100644 --- a/versioned_docs/version-2.5/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md +++ b/versioned_docs/version-2.5/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md @@ -75,7 +75,7 @@ To get your own custom application metrics into Prometheus, you have to collect ### ServiceMonitors and PodMonitors -Once all your workloads expose metrics in a Prometheus format, you have to configure Prometheus to scrape it. Under the hood Rancher is using the [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator). This makes it easy to add additional scraping targets with ServiceMonitors and PodMonitors. Many Helm charts let you create these monitors directly. You can also find more information in the Rancher documentation. +Once all of your workloads expose metrics in a Prometheus format, you must configure Prometheus to scrape them. Under the hood, Rancher uses the [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator). This makes it easy to add scraping targets with ServiceMonitors and PodMonitors. Many Helm charts already include an option to create these monitors directly. You can also find more information in the Rancher documentation. 
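+For example, a minimal ServiceMonitor sketch (names, labels, and port are placeholders) that tells Prometheus to scrape a workload's metrics Service:
+
+```yaml
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+  name: example-app        # placeholder
+  namespace: example       # namespace of the workload
+spec:
+  selector:
+    matchLabels:
+      app: example-app     # must match the labels on the metrics Service
+  endpoints:
+    - port: metrics        # named port on the Service
+      path: /metrics
+      interval: 30s
+```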
### Prometheus Push Gateway diff --git a/versioned_docs/version-2.5/reference-guides/cluster-configuration/rancher-server-configuration/rancherd-configuration-reference.md b/versioned_docs/version-2.5/reference-guides/cluster-configuration/rancher-server-configuration/rancherd-configuration-reference.md index 4ec8aa9de535..bfe1bba561fd 100644 --- a/versioned_docs/version-2.5/reference-guides/cluster-configuration/rancher-server-configuration/rancherd-configuration-reference.md +++ b/versioned_docs/version-2.5/reference-guides/cluster-configuration/rancher-server-configuration/rancherd-configuration-reference.md @@ -112,7 +112,7 @@ It can be run with the following options: |--------|-------------| | `--bind-address value` | RancherD bind address (default: 0.0.0.0) | | `--advertise-address value` | IP address that apiserver uses to advertise to members of the cluster (default: node-external-ip/node-ip) | -| `--tls-san value` | Add additional hostname or IP as a Subject Alternative Name in the TLS cert | +| `--tls-san value` | Add hostname or IP as a Subject Alternative Name in the TLS cert | ### Data diff --git a/versioned_docs/version-2.5/reference-guides/installation-references/helm-chart-options.md b/versioned_docs/version-2.5/reference-guides/installation-references/helm-chart-options.md index 9083e22b93a5..660e4e69e77e 100644 --- a/versioned_docs/version-2.5/reference-guides/installation-references/helm-chart-options.md +++ b/versioned_docs/version-2.5/reference-guides/installation-references/helm-chart-options.md @@ -44,7 +44,7 @@ For information on enabling experimental features, refer to [this page.](../../p | `debug` | false | `bool` - set debug flag on rancher server | | `extraEnv` | [] | `list` - set additional environment variables for Rancher | | `imagePullSecrets` | [] | `list` - list of names of Secret resource containing private registry credentials | -| `ingress.configurationSnippet` | "" | `string` - Add additional Nginx configuration. Can be used for proxy configuration. | +| `ingress.configurationSnippet` | "" | `string` - additional Nginx configuration. Can be used for proxy configuration. | | `ingress.extraAnnotations` | {} | `map` - additional annotations to customize the ingress | | `ingress.enabled` | true | When set to false, Helm will not install a Rancher ingress. Set the option to false to deploy your own ingress. _Available as of v2.5.6_ | | `letsEncrypt.ingress.class` | "" | `string` - optional ingress class for the cert-manager acmesolver ingress that responds to the Let's Encrypt ACME challenges. Options: traefik, nginx. | | @@ -134,7 +134,7 @@ Add your IP exceptions to the `noProxy` list. Make sure you add the Pod cluster ### Additional Trusted CAs -If you have private registries, catalogs or a proxy that intercepts certificates, you may need to add additional trusted CAs to Rancher. +If you have private registries, catalogs or a proxy that intercepts certificates, you may need to add more trusted CAs to Rancher. ```plain --set additionalTrustedCAs=true diff --git a/versioned_docs/version-2.6/faq/technical-items.md b/versioned_docs/version-2.6/faq/technical-items.md index 0aed55b5d0db..8437ee3995ca 100644 --- a/versioned_docs/version-2.6/faq/technical-items.md +++ b/versioned_docs/version-2.6/faq/technical-items.md @@ -93,9 +93,9 @@ When the IP address of the node changed, Rancher lost connection to the node, so When the node is removed from the cluster, and the node is cleaned, you can readd the node to the cluster. 
-### How can I add additional arguments/binds/environment variables to Kubernetes components in a Rancher Launched Kubernetes cluster? +### How can I add more arguments/binds/environment variables to Kubernetes components in a Rancher Launched Kubernetes cluster? -You can add additional arguments/binds/environment variables via the [Config File](../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md#rke-cluster-config-file-reference) option in Cluster Options. For more information, see the [Extra Args, Extra Binds, and Extra Environment Variables](https://rancher.com/docs/rke/latest/en/config-options/services/services-extras/) in the RKE documentation or browse the [Example Cluster.ymls](https://rancher.com/docs/rke/latest/en/example-yamls/). +You can add more arguments/binds/environment variables via the [Config File](../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md#rke-cluster-config-file-reference) option in Cluster Options. For more information, see the [Extra Args, Extra Binds, and Extra Environment Variables](https://rancher.com/docs/rke/latest/en/config-options/services/services-extras/) in the RKE documentation or browse the [Example Cluster.ymls](https://rancher.com/docs/rke/latest/en/example-yamls/). ### How do I check if my certificate chain is valid? diff --git a/versioned_docs/version-2.6/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md b/versioned_docs/version-2.6/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md index 867fd69d0f96..b9f659aea3e5 100644 --- a/versioned_docs/version-2.6/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md +++ b/versioned_docs/version-2.6/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md @@ -44,7 +44,7 @@ For information on enabling experimental features, refer to [this page.](../../. | `debug` | false | `bool` - set debug flag on rancher server | | `extraEnv` | [] | `list` - set additional environment variables for Rancher | | `imagePullSecrets` | [] | `list` - list of names of Secret resource containing private registry credentials | -| `ingress.configurationSnippet` | "" | `string` - Add additional Nginx configuration. Can be used for proxy configuration. | +| `ingress.configurationSnippet` | "" | `string` - additional Nginx configuration. Can be used for proxy configuration. | | `ingress.extraAnnotations` | {} | `map` - additional annotations to customize the ingress | | `ingress.enabled` | true | When set to false, Helm will not install a Rancher ingress. Set the option to false to deploy your own ingress. | | `letsEncrypt.ingress.class` | "" | `string` - optional ingress class for the cert-manager acmesolver ingress that responds to the Let's Encrypt ACME challenges. Options: traefik, nginx. | | @@ -189,7 +189,7 @@ To learn more about how to configure environment variables, refer to [Define Env ### Additional Trusted CAs -If you have private registries, catalogs or a proxy that intercepts certificates, you may need to add additional trusted CAs to Rancher. +If you have private registries, catalogs or a proxy that intercepts certificates, you may need to add more trusted CAs to Rancher. 
```plain --set additionalTrustedCAs=true diff --git a/versioned_docs/version-2.6/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-kubernetes.md b/versioned_docs/version-2.6/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-kubernetes.md index 86c9a85988df..b10e02ff22f1 100644 --- a/versioned_docs/version-2.6/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-kubernetes.md +++ b/versioned_docs/version-2.6/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-kubernetes.md @@ -65,7 +65,7 @@ _New in v2.6.4_ You can now provision node driver clusters from an air-gapped cluster configured to use a proxy for outbound connections. -In addition to setting the default rules for a proxy server, you will need to add additional rules, shown below, to provision node driver clusters from a proxied Rancher environment. +In addition to setting the default rules for a proxy server, you must also add the rules shown below to provision node driver clusters from a proxied Rancher environment. You will configure your filepath according to your setup, e.g., `/etc/apt/apt.conf.d/proxy.conf`: diff --git a/versioned_docs/version-2.6/how-to-guides/advanced-user-guides/istio-setup-guide/enable-istio-in-cluster.md b/versioned_docs/version-2.6/how-to-guides/advanced-user-guides/istio-setup-guide/enable-istio-in-cluster.md index 291aa93f1238..bb81cb623f57 100644 --- a/versioned_docs/version-2.6/how-to-guides/advanced-user-guides/istio-setup-guide/enable-istio-in-cluster.md +++ b/versioned_docs/version-2.6/how-to-guides/advanced-user-guides/istio-setup-guide/enable-istio-in-cluster.md @@ -26,7 +26,7 @@ title: 1. Enable Istio in the Cluster 1. If you have not already installed your own monitoring app, you will be prompted to install the rancher-monitoring app. Optional: Set your Selector or Scrape config options on rancher-monitoring app install. 1. Optional: Configure member access and [resource limits](../../../integrations-in-rancher/istio/cpu-and-memory-allocations.md) for the Istio components. Ensure you have enough resources on your worker nodes to enable Istio. 1. Optional: Make additional configuration changes to values.yaml if needed. -1. Optional: Add additional resources or configuration via the [overlay file.](../../../pages-for-subheaders/configuration-options.md#overlay-file) +1. Optional: Add further resources or configuration via the [overlay file](../../../pages-for-subheaders/configuration-options.md#overlay-file). 1. Click **Install**. @@ -40,7 +40,7 @@ title: 1. Enable Istio in the Cluster 1. If you have not already installed your own monitoring app, you will be prompted to install the rancher-monitoring app. Optional: Set your Selector or Scrape config options on rancher-monitoring app install. 1. Optional: Configure member access and [resource limits](../../../integrations-in-rancher/istio/cpu-and-memory-allocations.md) for the Istio components. Ensure you have enough resources on your worker nodes to enable Istio. 1. Optional: Make additional configuration changes to values.yaml if needed. -1. Optional: Add additional resources or configuration via the [overlay file.](../../../pages-for-subheaders/configuration-options.md#overlay-file) +1. Optional: Add further resources or configuration via the [overlay file](../../../pages-for-subheaders/configuration-options.md#overlay-file). 1. 
Click **Install**. diff --git a/versioned_docs/version-2.6/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/ingress-configuration.md b/versioned_docs/version-2.6/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/ingress-configuration.md index 39d87d02b075..fd08960e206e 100644 --- a/versioned_docs/version-2.6/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/ingress-configuration.md +++ b/versioned_docs/version-2.6/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/ingress-configuration.md @@ -41,7 +41,7 @@ You must have an SSL certificate that the Ingress controller can use to encrypt/ 1. Click **Add Certificate**. 1. Select a **Certificate - Secret Name** from the drop-down list. 1. Enter the host using encrypted communication. -1. To add additional hosts that use the certificate, click **Add Hosts**. +1. To add more hosts that use the same certificate, click **Add Hosts**. ## Labels and Annotations diff --git a/versioned_docs/version-2.6/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md b/versioned_docs/version-2.6/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md index c8a6da86cf2f..e2974c08d3f0 100644 --- a/versioned_docs/version-2.6/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md +++ b/versioned_docs/version-2.6/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md @@ -13,7 +13,7 @@ In order to leverage the template to create new VMs, Rancher has some [specific ## Requirements -There is specific tooling required for both Linux and Windows VMs to be usable by the vSphere node driver. The most critical dependency is [cloud-init](https://cloud-init.io/) for Linux and [cloudbase-init](https://cloudbase.it/cloudbase-init/) for Windows. Both of these are used for provisioning the VMs by configuring the hostname and by setting up the SSH access and the default Rancher user. Users can add additional content to these as desired if other configuration is needed. In addition, other requirements are listed below for reference. +There is specific tooling required for both Linux and Windows VMs to be usable by the vSphere node driver. The most critical dependency is [cloud-init](https://cloud-init.io/) for Linux and [cloudbase-init](https://cloudbase.it/cloudbase-init/) for Windows. Both of these are used for provisioning the VMs by configuring the hostname and by setting up the SSH access and the default Rancher user. Users can add more content to these as desired if other configuration is needed. In addition, other requirements are listed below for reference. 
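As an illustrative sketch only (package and service names vary by distribution, and Rancher injects its own hostname, user, and SSH configuration during provisioning), extra content added to a Linux template could take the form of a small cloud-init document:

```yaml
#cloud-config
# Example additions for a vSphere template; adjust for your distribution.
package_update: true
packages:
  - curl
  - wget
runcmd:
  # open-vm-tools is commonly wanted on vSphere guests; the service name may differ.
  - systemctl enable --now open-vm-tools
```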
:::note diff --git a/versioned_docs/version-2.6/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md b/versioned_docs/version-2.6/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md index 29b51149c972..ede0c2af94e7 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md @@ -10,7 +10,7 @@ The Monitoring app sets `prometheus.prometheusSpec.ignoreNamespaceSelectors=fals This ensures you can view traffic, metrics and graphs for resources deployed in a namespace with `istio-injection=enabled` label. -If you would like to limit Prometheus to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`. Once you do this, you will need to add additional configuration to continue to monitor your resources. +If you would like to limit Prometheus to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`. Once you do this, you must perform some additional configuration to continue to monitor your resources. ### Limiting Monitoring to Specific Namespaces by Setting ignoreNamespaceSelectors to True diff --git a/versioned_docs/version-2.6/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md b/versioned_docs/version-2.6/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md index d6d2ccd67e27..65707e0980ae 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md @@ -32,7 +32,7 @@ For detailed examples on using the match statement, see the [official documentat ### Filters -You can define one or more filters within a `Flow`. Filters can perform various actions on the logs, for example, add additional data, transform the logs, or parse values from the records. The filters in the `Flow` are applied in the order in the definition. +You can define one or more filters within a `Flow`. Filters can perform various actions on the logs, such as adding data, transforming the logs, or parsing values from the records. The filters in the `Flow` are applied in the same order they appear in the definition. For a list of filters supported by the Logging operator, see [the official documentation on Fluentd filters](https://kube-logging.github.io/docs/configuration/plugins/filters/). diff --git a/versioned_docs/version-2.6/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md b/versioned_docs/version-2.6/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md index bea67b1dc8f3..8f14e7faa069 100644 --- a/versioned_docs/version-2.6/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md +++ b/versioned_docs/version-2.6/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md @@ -88,7 +88,7 @@ A PrometheusRule allows you to define one or more RuleGroups. Each RuleGroup con - Labels that should be attached to the alert or record that identify it (e.g. cluster name or severity) - Annotations that encode any additional important pieces of information that need to be displayed on the notification for an alert (e.g. summary, description, message, runbook URL, etc.). 
This field is not required for recording rules. -On evaluating a [rule](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#rule), Prometheus will execute the provided PromQL query, add additional provided labels (or annotations - only for alerting rules), and execute the appropriate action for the rule. For example, an Alerting Rule that adds `team: front-end` as a label to the provided PromQL query will append that label to the fired alert, which will allow Alertmanager to forward the alert to the correct Receiver. +Upon evaluating a [rule](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#rule), Prometheus runs the provided PromQL query, adds the provided labels, and runs the appropriate action for the rule. If the rule triggers an alert, Prometheus also adds the provided annotations. For example, an Alerting Rule that adds `team: front-end` as a label to the provided PromQL query will append that label to the fired alert, which will allow Alertmanager to forward the alert to the correct Receiver. ### Alerting and Recording Rules diff --git a/versioned_docs/version-2.6/pages-for-subheaders/configuration-options.md b/versioned_docs/version-2.6/pages-for-subheaders/configuration-options.md index 92a375e948db..fdfc51d41bcb 100644 --- a/versioned_docs/version-2.6/pages-for-subheaders/configuration-options.md +++ b/versioned_docs/version-2.6/pages-for-subheaders/configuration-options.md @@ -26,7 +26,7 @@ For more information on Overlay Files, refer to the [Istio documentation.](https The Monitoring app sets `prometheus.prometheusSpec.ignoreNamespaceSelectors=false` which enables monitoring across all namespaces by default. This ensures you can view traffic, metrics and graphs for resources deployed in a namespace with `istio-injection=enabled` label. -If you would like to limit Prometheus to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`. Once you do this, you will need to add additional configuration to continue to monitor your resources. +If you would like to limit Prometheus to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`. Once you do this, you must perform some additional configuration to continue to monitor your resources. 
For details, refer to [this section.](../integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md) diff --git a/versioned_docs/version-2.6/pages-for-subheaders/use-windows-clusters.md b/versioned_docs/version-2.6/pages-for-subheaders/use-windows-clusters.md index fcdcdd8949ac..1bd0d802d613 100644 --- a/versioned_docs/version-2.6/pages-for-subheaders/use-windows-clusters.md +++ b/versioned_docs/version-2.6/pages-for-subheaders/use-windows-clusters.md @@ -129,7 +129,7 @@ The `worker` nodes, which is where your workloads will be deployed on, will typi #### Recommended Architecture -We recommend the minimum three-node architecture listed in the table below, but you can always add additional Linux and Windows workers to scale up your cluster for redundancy: +We recommend the minimum three-node architecture listed in the table below, but you can always add more Linux and Windows workers to scale up your cluster for redundancy: | Node | Operating System | Kubernetes Cluster Role(s) | Purpose | | ------ | --------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------- | diff --git a/versioned_docs/version-2.6/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md b/versioned_docs/version-2.6/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md index 53cc4bbaed40..31584a85c5f7 100644 --- a/versioned_docs/version-2.6/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md +++ b/versioned_docs/version-2.6/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md @@ -74,7 +74,7 @@ To get your own custom application metrics into Prometheus, you have to collect ### ServiceMonitors and PodMonitors -Once all your workloads expose metrics in a Prometheus format, you have to configure Prometheus to scrape it. Under the hood Rancher is using the [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator). This makes it easy to add additional scraping targets with ServiceMonitors and PodMonitors. Many Helm charts let you create these monitors directly. You can also find more information in the Rancher documentation. +Once all of your workloads expose metrics in a Prometheus format, you must configure Prometheus to scrape them. Under the hood, Rancher uses the [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator). This makes it easy to add scraping targets with ServiceMonitors and PodMonitors. Many Helm charts already include an option to create these monitors directly. You can also find more information in the Rancher documentation. 
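A PodMonitor works the same way for Pods that are not backed by a Service; the following is a minimal sketch with placeholder names and a named container port:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: my-app
  namespace: my-app-namespace
spec:
  selector:
    matchLabels:
      app: my-app              # must match the Pod labels
  podMetricsEndpoints:
    - port: metrics            # the *name* of the container port serving metrics
      interval: 30s
```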
### Prometheus Push Gateway diff --git a/versioned_docs/version-2.6/reference-guides/cluster-configuration/rancher-server-configuration/k3s-cluster-configuration.md b/versioned_docs/version-2.6/reference-guides/cluster-configuration/rancher-server-configuration/k3s-cluster-configuration.md index a2860a072774..ee1c2de162d1 100644 --- a/versioned_docs/version-2.6/reference-guides/cluster-configuration/rancher-server-configuration/k3s-cluster-configuration.md +++ b/versioned_docs/version-2.6/reference-guides/cluster-configuration/rancher-server-configuration/k3s-cluster-configuration.md @@ -104,7 +104,7 @@ Option to change the range of ports that can be used for [NodePort services](htt #### TLS Alternate Names -Add additional hostnames or IPv4/IPv6 addresses as Subject Alternative Names on the server TLS cert. +Add hostnames or IPv4/IPv6 addresses as Subject Alternative Names on the server TLS cert. #### Authorized Cluster Endpoint diff --git a/versioned_docs/version-2.6/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md b/versioned_docs/version-2.6/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md index 79f941d23ee5..637b895316a7 100644 --- a/versioned_docs/version-2.6/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md +++ b/versioned_docs/version-2.6/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md @@ -212,7 +212,7 @@ Option to change the range of ports that can be used for [NodePort services](htt #### TLS Alternate Names -Add additional hostnames or IPv4/IPv6 addresses as Subject Alternative Names on the server TLS cert. +Add hostnames or IPv4/IPv6 addresses as Subject Alternative Names on the server TLS cert. #### Authorized Cluster Endpoint diff --git a/versioned_docs/version-2.6/reference-guides/pipelines/pipeline-configuration.md b/versioned_docs/version-2.6/reference-guides/pipelines/pipeline-configuration.md index a387d2c248bc..4d177e970971 100644 --- a/versioned_docs/version-2.6/reference-guides/pipelines/pipeline-configuration.md +++ b/versioned_docs/version-2.6/reference-guides/pipelines/pipeline-configuration.md @@ -299,7 +299,7 @@ You can enable notifications to any notifiers based on the build status of a pip ::: -1. For each recipient, select which notifier type from the dropdown. Based on the type of notifier, you can use the default recipient or override the recipient with a different one. For example, if you have a notifier for _Slack_, you can update which channel to send the notification to. You can add additional notifiers by clicking **Add Recipient**. +1. For each recipient, select which notifier type from the dropdown. Based on the type of notifier, you can use the default recipient or override the recipient with a different one. For example, if you have a notifier for _Slack_, you can update which channel to send the notification to. You can add more notifiers by clicking **Add Recipient**. 
### Configuring Notifications by YAML diff --git a/versioned_docs/version-2.6/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md b/versioned_docs/version-2.6/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md index cba7f410591b..a5f50bbfaa86 100644 --- a/versioned_docs/version-2.6/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md +++ b/versioned_docs/version-2.6/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md @@ -54,7 +54,7 @@ _New in v2.6.4_ You can now provision node driver clusters from an air-gapped cluster configured to use a proxy for outbound connections. -In addition to setting the default rules for a proxy server as shown above, you will need to add additional rules, shown below, to provision node driver clusters from a proxied Rancher environment. +In addition to setting the default rules for a proxy server as shown above, you must also add the following rules to provision node driver clusters from a proxied Rancher environment: You will configure your filepath according to your setup, e.g., `/etc/apt/apt.conf.d/proxy.conf`: diff --git a/versioned_docs/version-2.7/faq/technical-items.md b/versioned_docs/version-2.7/faq/technical-items.md index 0aed55b5d0db..8437ee3995ca 100644 --- a/versioned_docs/version-2.7/faq/technical-items.md +++ b/versioned_docs/version-2.7/faq/technical-items.md @@ -93,9 +93,9 @@ When the IP address of the node changed, Rancher lost connection to the node, so When the node is removed from the cluster, and the node is cleaned, you can readd the node to the cluster. -### How can I add additional arguments/binds/environment variables to Kubernetes components in a Rancher Launched Kubernetes cluster? +### How can I add more arguments/binds/environment variables to Kubernetes components in a Rancher Launched Kubernetes cluster? -You can add additional arguments/binds/environment variables via the [Config File](../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md#rke-cluster-config-file-reference) option in Cluster Options. For more information, see the [Extra Args, Extra Binds, and Extra Environment Variables](https://rancher.com/docs/rke/latest/en/config-options/services/services-extras/) in the RKE documentation or browse the [Example Cluster.ymls](https://rancher.com/docs/rke/latest/en/example-yamls/). +You can add more arguments/binds/environment variables via the [Config File](../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md#rke-cluster-config-file-reference) option in Cluster Options. For more information, see the [Extra Args, Extra Binds, and Extra Environment Variables](https://rancher.com/docs/rke/latest/en/config-options/services/services-extras/) in the RKE documentation or browse the [Example Cluster.ymls](https://rancher.com/docs/rke/latest/en/example-yamls/). ### How do I check if my certificate chain is valid? 
diff --git a/versioned_docs/version-2.7/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md b/versioned_docs/version-2.7/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md index b6cd651056a9..66698c79f01f 100644 --- a/versioned_docs/version-2.7/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md +++ b/versioned_docs/version-2.7/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md @@ -47,7 +47,7 @@ For information on enabling experimental features, refer to [this page.](../../. | `debug` | false | `bool` - set debug flag on rancher server | | `extraEnv` | [] | `list` - set additional environment variables for Rancher | | `imagePullSecrets` | [] | `list` - list of names of Secret resource containing private registry credentials | -| `ingress.configurationSnippet` | "" | `string` - Add additional Nginx configuration. Can be used for proxy configuration. | +| `ingress.configurationSnippet` | "" | `string` - additional Nginx configuration. Can be used for proxy configuration. | | `ingress.extraAnnotations` | {} | `map` - additional annotations to customize the ingress | | `ingress.enabled` | true | When set to false, Helm will not install a Rancher ingress. Set the option to false to deploy your own ingress. | | `letsEncrypt.ingress.class` | "" | `string` - optional ingress class for the cert-manager acmesolver ingress that responds to the Let's Encrypt ACME challenges. Options: traefik, nginx. | | @@ -192,7 +192,7 @@ To learn more about how to configure environment variables, refer to [Define Env ### Additional Trusted CAs -If you have private registries, catalogs or a proxy that intercepts certificates, you may need to add additional trusted CAs to Rancher. +If you have private registries, catalogs or a proxy that intercepts certificates, you may need to add more trusted CAs to Rancher. ```plain --set additionalTrustedCAs=true diff --git a/versioned_docs/version-2.7/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-kubernetes.md b/versioned_docs/version-2.7/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-kubernetes.md index baee092d0aee..9f3654619d37 100644 --- a/versioned_docs/version-2.7/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-kubernetes.md +++ b/versioned_docs/version-2.7/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-kubernetes.md @@ -145,7 +145,7 @@ sudo systemctl restart docker You can now provision node driver clusters from an air-gapped cluster configured to use a proxy for outbound connections. -In addition to setting the default rules for a proxy server, you will need to add additional rules, shown below, to provision node driver clusters from a proxied Rancher environment. +In addition to setting the default rules for a proxy server, you must also add the rules shown below to provision node driver clusters from a proxied Rancher environment. 
You will configure your filepath according to your setup, e.g., `/etc/apt/apt.conf.d/proxy.conf`: diff --git a/versioned_docs/version-2.7/how-to-guides/advanced-user-guides/istio-setup-guide/enable-istio-in-cluster.md b/versioned_docs/version-2.7/how-to-guides/advanced-user-guides/istio-setup-guide/enable-istio-in-cluster.md index 09624abc1bc0..3ca93936f1f2 100644 --- a/versioned_docs/version-2.7/how-to-guides/advanced-user-guides/istio-setup-guide/enable-istio-in-cluster.md +++ b/versioned_docs/version-2.7/how-to-guides/advanced-user-guides/istio-setup-guide/enable-istio-in-cluster.md @@ -23,7 +23,7 @@ title: 1. Enable Istio in the Cluster 1. If you have not already installed your own monitoring app, you will be prompted to install the rancher-monitoring app. Optional: Set your Selector or Scrape config options on rancher-monitoring app install. 1. Optional: Configure member access and [resource limits](../../../integrations-in-rancher/istio/cpu-and-memory-allocations.md) for the Istio components. Ensure you have enough resources on your worker nodes to enable Istio. 1. Optional: Make additional configuration changes to values.yaml if needed. -1. Optional: Add additional resources or configuration via the [overlay file.](../../../pages-for-subheaders/configuration-options.md#overlay-file) +1. Optional: Add further resources or configuration via the [overlay file](../../../pages-for-subheaders/configuration-options.md#overlay-file). 1. Click **Install**. **Result:** Istio is installed at the cluster level. diff --git a/versioned_docs/version-2.7/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/ingress-configuration.md b/versioned_docs/version-2.7/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/ingress-configuration.md index 11bdcffc4c6f..4202912594f0 100644 --- a/versioned_docs/version-2.7/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/ingress-configuration.md +++ b/versioned_docs/version-2.7/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/ingress-configuration.md @@ -36,7 +36,7 @@ You must have an SSL certificate that Ingress can use to encrypt and decrypt com 1. Click **Add Certificate**. 1. Select a **Certificate - Secret Name** from the drop-down list. 1. Enter the host using encrypted communication. -1. To add additional hosts that use the certificate, click **Add Hosts**. +1. To add more hosts that use the same certificate, click **Add Hosts**. ## Labels and Annotations diff --git a/versioned_docs/version-2.7/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md b/versioned_docs/version-2.7/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md index c8a6da86cf2f..e2974c08d3f0 100644 --- a/versioned_docs/version-2.7/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md +++ b/versioned_docs/version-2.7/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md @@ -13,7 +13,7 @@ In order to leverage the template to create new VMs, Rancher has some [specific ## Requirements -There is specific tooling required for both Linux and Windows VMs to be usable by the vSphere node driver. 
The most critical dependency is [cloud-init](https://cloud-init.io/) for Linux and [cloudbase-init](https://cloudbase.it/cloudbase-init/) for Windows. Both of these are used for provisioning the VMs by configuring the hostname and by setting up the SSH access and the default Rancher user. Users can add additional content to these as desired if other configuration is needed. In addition, other requirements are listed below for reference. +There is specific tooling required for both Linux and Windows VMs to be usable by the vSphere node driver. The most critical dependency is [cloud-init](https://cloud-init.io/) for Linux and [cloudbase-init](https://cloudbase.it/cloudbase-init/) for Windows. Both of these are used for provisioning the VMs by configuring the hostname and by setting up the SSH access and the default Rancher user. Users can add more content to these as desired if other configuration is needed. In addition, other requirements are listed below for reference. :::note diff --git a/versioned_docs/version-2.7/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md b/versioned_docs/version-2.7/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md index 29b51149c972..ede0c2af94e7 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md @@ -10,7 +10,7 @@ The Monitoring app sets `prometheus.prometheusSpec.ignoreNamespaceSelectors=fals This ensures you can view traffic, metrics and graphs for resources deployed in a namespace with `istio-injection=enabled` label. -If you would like to limit Prometheus to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`. Once you do this, you will need to add additional configuration to continue to monitor your resources. +If you would like to limit Prometheus to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`. Once you do this, you must perform some additional configuration to continue to monitor your resources. ### Limiting Monitoring to Specific Namespaces by Setting ignoreNamespaceSelectors to True diff --git a/versioned_docs/version-2.7/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md b/versioned_docs/version-2.7/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md index d6d2ccd67e27..65707e0980ae 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md @@ -32,7 +32,7 @@ For detailed examples on using the match statement, see the [official documentat ### Filters -You can define one or more filters within a `Flow`. Filters can perform various actions on the logs, for example, add additional data, transform the logs, or parse values from the records. The filters in the `Flow` are applied in the order in the definition. +You can define one or more filters within a `Flow`. Filters can perform various actions on the logs, such as adding data, transforming the logs, or parsing values from the records. The filters in the `Flow` are applied in the same order they appear in the definition. 
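For instance, a Flow that first parses JSON log lines and then adds a static record field might look like the following sketch (all names are placeholders; the exact filter options are described in the Logging operator documentation):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: my-app-flow
  namespace: my-app-namespace
spec:
  match:
    - select:
        labels:
          app: my-app
  filters:
    - parser:                    # applied first
        remove_key_name_field: true
        parse:
          type: json
    - record_transformer:        # applied second, after parsing
        records:
          - cluster_name: my-cluster
  localOutputRefs:
    - my-output
```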
For a list of filters supported by the Logging operator, see [the official documentation on Fluentd filters](https://kube-logging.github.io/docs/configuration/plugins/filters/). diff --git a/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md b/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md index bea67b1dc8f3..8f14e7faa069 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md @@ -88,7 +88,7 @@ A PrometheusRule allows you to define one or more RuleGroups. Each RuleGroup con - Labels that should be attached to the alert or record that identify it (e.g. cluster name or severity) - Annotations that encode any additional important pieces of information that need to be displayed on the notification for an alert (e.g. summary, description, message, runbook URL, etc.). This field is not required for recording rules. -On evaluating a [rule](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#rule), Prometheus will execute the provided PromQL query, add additional provided labels (or annotations - only for alerting rules), and execute the appropriate action for the rule. For example, an Alerting Rule that adds `team: front-end` as a label to the provided PromQL query will append that label to the fired alert, which will allow Alertmanager to forward the alert to the correct Receiver. +Upon evaluating a [rule](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#rule), Prometheus runs the provided PromQL query, adds the provided labels, and runs the appropriate action for the rule. If the rule triggers an alert, Prometheus also adds the provided annotations. For example, an Alerting Rule that adds `team: front-end` as a label to the provided PromQL query will append that label to the fired alert, which will allow Alertmanager to forward the alert to the correct Receiver. ### Alerting and Recording Rules diff --git a/versioned_docs/version-2.7/pages-for-subheaders/configuration-options.md b/versioned_docs/version-2.7/pages-for-subheaders/configuration-options.md index 92a375e948db..fdfc51d41bcb 100644 --- a/versioned_docs/version-2.7/pages-for-subheaders/configuration-options.md +++ b/versioned_docs/version-2.7/pages-for-subheaders/configuration-options.md @@ -26,7 +26,7 @@ For more information on Overlay Files, refer to the [Istio documentation.](https The Monitoring app sets `prometheus.prometheusSpec.ignoreNamespaceSelectors=false` which enables monitoring across all namespaces by default. This ensures you can view traffic, metrics and graphs for resources deployed in a namespace with `istio-injection=enabled` label. -If you would like to limit Prometheus to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`. Once you do this, you will need to add additional configuration to continue to monitor your resources. +If you would like to limit Prometheus to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`. Once you do this, you must perform some additional configuration to continue to monitor your resources. 
For details, refer to [this section.](../integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md) diff --git a/versioned_docs/version-2.7/pages-for-subheaders/use-windows-clusters.md b/versioned_docs/version-2.7/pages-for-subheaders/use-windows-clusters.md index 36fe47c422f9..2e76e0ca874e 100644 --- a/versioned_docs/version-2.7/pages-for-subheaders/use-windows-clusters.md +++ b/versioned_docs/version-2.7/pages-for-subheaders/use-windows-clusters.md @@ -120,7 +120,7 @@ The `worker` nodes, which is where your workloads will be deployed on, will typi #### Recommended Architecture -We recommend the minimum three-node architecture listed in the table below, but you can always add additional Linux and Windows workers to scale up your cluster for redundancy: +We recommend the minimum three-node architecture listed in the table below, but you can always add more Linux and Windows workers to scale up your cluster for redundancy: | Node | Operating System | Kubernetes Cluster Role(s) | Purpose | | ------ | --------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------- | diff --git a/versioned_docs/version-2.7/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md b/versioned_docs/version-2.7/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md index 00f421f760ed..3fd02858c9af 100644 --- a/versioned_docs/version-2.7/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md +++ b/versioned_docs/version-2.7/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md @@ -74,7 +74,7 @@ To get your own custom application metrics into Prometheus, you have to collect ### ServiceMonitors and PodMonitors -Once all your workloads expose metrics in a Prometheus format, you have to configure Prometheus to scrape it. Under the hood Rancher is using the [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator). This makes it easy to add additional scraping targets with ServiceMonitors and PodMonitors. Many Helm charts let you create these monitors directly. You can also find more information in the Rancher documentation. +Once all of your workloads expose metrics in a Prometheus format, you must configure Prometheus to scrape them. Under the hood, Rancher uses the [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator). This makes it easy to add scraping targets with ServiceMonitors and PodMonitors. Many Helm charts already include an option to create these monitors directly. You can also find more information in the Rancher documentation. 
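The key that enables such a monitor differs from chart to chart, so treat the following values.yaml fragment as a commonly seen pattern to verify against the individual chart rather than a universal option:

```yaml
# A frequent (but chart-specific) convention for enabling a ServiceMonitor:
metrics:
  serviceMonitor:
    enabled: true
    interval: 30s
```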
### Prometheus Push Gateway diff --git a/versioned_docs/version-2.7/reference-guides/cluster-configuration/rancher-server-configuration/k3s-cluster-configuration.md b/versioned_docs/version-2.7/reference-guides/cluster-configuration/rancher-server-configuration/k3s-cluster-configuration.md index 906fac616e67..76f081eb0530 100644 --- a/versioned_docs/version-2.7/reference-guides/cluster-configuration/rancher-server-configuration/k3s-cluster-configuration.md +++ b/versioned_docs/version-2.7/reference-guides/cluster-configuration/rancher-server-configuration/k3s-cluster-configuration.md @@ -176,7 +176,7 @@ Truncating hostnames in a cluster improves compatibility with Windows-based syst ##### TLS Alternate Names -Add additional hostnames or IPv4/IPv6 addresses as Subject Alternative Names on the server TLS cert. +Add hostnames or IPv4/IPv6 addresses as Subject Alternative Names on the server TLS cert. ##### Authorized Cluster Endpoint diff --git a/versioned_docs/version-2.7/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md b/versioned_docs/version-2.7/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md index 27720506b136..5c48ec35e235 100644 --- a/versioned_docs/version-2.7/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md +++ b/versioned_docs/version-2.7/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md @@ -225,7 +225,7 @@ Truncating hostnames in a cluster improves compatibility with Windows-based syst ##### TLS Alternate Names -Add additional hostnames or IPv4/IPv6 addresses as Subject Alternative Names on the server TLS cert. +Add hostnames or IPv4/IPv6 addresses as Subject Alternative Names on the server TLS cert. ##### Authorized Cluster Endpoint diff --git a/versioned_docs/version-2.7/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md b/versioned_docs/version-2.7/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md index 3bdfbf449d89..bc81ed966c2b 100644 --- a/versioned_docs/version-2.7/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md +++ b/versioned_docs/version-2.7/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md @@ -52,7 +52,7 @@ Privileged access is [required.](../../pages-for-subheaders/rancher-on-a-single- You can now provision node driver clusters from an air-gapped cluster configured to use a proxy for outbound connections. -In addition to setting the default rules for a proxy server as shown above, you will need to add additional rules, shown below, to provision node driver clusters from a proxied Rancher environment. +In addition to setting the default rules for a proxy server as shown above, you must also add the following rules to provision node driver clusters from a proxied Rancher environment: You will configure your filepath according to your setup, e.g., `/etc/apt/apt.conf.d/proxy.conf`: diff --git a/versioned_docs/version-2.8/faq/technical-items.md b/versioned_docs/version-2.8/faq/technical-items.md index 0aed55b5d0db..8437ee3995ca 100644 --- a/versioned_docs/version-2.8/faq/technical-items.md +++ b/versioned_docs/version-2.8/faq/technical-items.md @@ -93,9 +93,9 @@ When the IP address of the node changed, Rancher lost connection to the node, so When the node is removed from the cluster, and the node is cleaned, you can readd the node to the cluster. 
-### How can I add additional arguments/binds/environment variables to Kubernetes components in a Rancher Launched Kubernetes cluster? +### How can I add more arguments/binds/environment variables to Kubernetes components in a Rancher Launched Kubernetes cluster? -You can add additional arguments/binds/environment variables via the [Config File](../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md#rke-cluster-config-file-reference) option in Cluster Options. For more information, see the [Extra Args, Extra Binds, and Extra Environment Variables](https://rancher.com/docs/rke/latest/en/config-options/services/services-extras/) in the RKE documentation or browse the [Example Cluster.ymls](https://rancher.com/docs/rke/latest/en/example-yamls/). +You can add more arguments/binds/environment variables via the [Config File](../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md#rke-cluster-config-file-reference) option in Cluster Options. For more information, see the [Extra Args, Extra Binds, and Extra Environment Variables](https://rancher.com/docs/rke/latest/en/config-options/services/services-extras/) in the RKE documentation or browse the [Example Cluster.ymls](https://rancher.com/docs/rke/latest/en/example-yamls/). ### How do I check if my certificate chain is valid? diff --git a/versioned_docs/version-2.8/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md b/versioned_docs/version-2.8/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md index c1602120ff3e..4cd024e45786 100644 --- a/versioned_docs/version-2.8/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md +++ b/versioned_docs/version-2.8/getting-started/installation-and-upgrade/installation-references/helm-chart-options.md @@ -47,7 +47,7 @@ For information on enabling experimental features, refer to [this page.](../../. | `debug` | false | `bool` - set debug flag on rancher server | | `extraEnv` | [] | `list` - set additional environment variables for Rancher | | `imagePullSecrets` | [] | `list` - list of names of Secret resource containing private registry credentials | -| `ingress.configurationSnippet` | "" | `string` - Add additional Nginx configuration. Can be used for proxy configuration. | +| `ingress.configurationSnippet` | "" | `string` - additional Nginx configuration. Can be used for proxy configuration. | | `ingress.extraAnnotations` | {} | `map` - additional annotations to customize the ingress | | `ingress.enabled` | true | When set to false, Helm will not install a Rancher ingress. Set the option to false to deploy your own ingress. | | `letsEncrypt.ingress.class` | "" | `string` - optional ingress class for the cert-manager acmesolver ingress that responds to the Let's Encrypt ACME challenges. Options: traefik, nginx. | | @@ -192,7 +192,7 @@ To learn more about how to configure environment variables, refer to [Define Env ### Additional Trusted CAs -If you have private registries, catalogs or a proxy that intercepts certificates, you may need to add additional trusted CAs to Rancher. +If you have private registries, catalogs or a proxy that intercepts certificates, you may need to add more trusted CAs to Rancher. 
```plain --set additionalTrustedCAs=true diff --git a/versioned_docs/version-2.8/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-kubernetes.md b/versioned_docs/version-2.8/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-kubernetes.md index baee092d0aee..9f3654619d37 100644 --- a/versioned_docs/version-2.8/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-kubernetes.md +++ b/versioned_docs/version-2.8/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-kubernetes.md @@ -145,7 +145,7 @@ sudo systemctl restart docker You can now provision node driver clusters from an air-gapped cluster configured to use a proxy for outbound connections. -In addition to setting the default rules for a proxy server, you will need to add additional rules, shown below, to provision node driver clusters from a proxied Rancher environment. +In addition to setting the default rules for a proxy server, you must also add the rules shown below to provision node driver clusters from a proxied Rancher environment. You will configure your filepath according to your setup, e.g., `/etc/apt/apt.conf.d/proxy.conf`: diff --git a/versioned_docs/version-2.8/how-to-guides/advanced-user-guides/istio-setup-guide/enable-istio-in-cluster.md b/versioned_docs/version-2.8/how-to-guides/advanced-user-guides/istio-setup-guide/enable-istio-in-cluster.md index 09624abc1bc0..3ca93936f1f2 100644 --- a/versioned_docs/version-2.8/how-to-guides/advanced-user-guides/istio-setup-guide/enable-istio-in-cluster.md +++ b/versioned_docs/version-2.8/how-to-guides/advanced-user-guides/istio-setup-guide/enable-istio-in-cluster.md @@ -23,7 +23,7 @@ title: 1. Enable Istio in the Cluster 1. If you have not already installed your own monitoring app, you will be prompted to install the rancher-monitoring app. Optional: Set your Selector or Scrape config options on rancher-monitoring app install. 1. Optional: Configure member access and [resource limits](../../../integrations-in-rancher/istio/cpu-and-memory-allocations.md) for the Istio components. Ensure you have enough resources on your worker nodes to enable Istio. 1. Optional: Make additional configuration changes to values.yaml if needed. -1. Optional: Add additional resources or configuration via the [overlay file.](../../../pages-for-subheaders/configuration-options.md#overlay-file) +1. Optional: Add further resources or configuration via the [overlay file](../../../pages-for-subheaders/configuration-options.md#overlay-file). 1. Click **Install**. **Result:** Istio is installed at the cluster level. diff --git a/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/ingress-configuration.md b/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/ingress-configuration.md index 11bdcffc4c6f..4202912594f0 100644 --- a/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/ingress-configuration.md +++ b/versioned_docs/version-2.8/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/ingress-configuration.md @@ -36,7 +36,7 @@ You must have an SSL certificate that Ingress can use to encrypt and decrypt com 1. Click **Add Certificate**. 1. 
Select a **Certificate - Secret Name** from the drop-down list. 1. Enter the host using encrypted communication. -1. To add additional hosts that use the certificate, click **Add Hosts**. +1. To add more hosts that use the same certificate, click **Add Hosts**. ## Labels and Annotations diff --git a/versioned_docs/version-2.8/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md b/versioned_docs/version-2.8/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md index c8a6da86cf2f..e2974c08d3f0 100644 --- a/versioned_docs/version-2.8/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md +++ b/versioned_docs/version-2.8/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md @@ -13,7 +13,7 @@ In order to leverage the template to create new VMs, Rancher has some [specific ## Requirements -There is specific tooling required for both Linux and Windows VMs to be usable by the vSphere node driver. The most critical dependency is [cloud-init](https://cloud-init.io/) for Linux and [cloudbase-init](https://cloudbase.it/cloudbase-init/) for Windows. Both of these are used for provisioning the VMs by configuring the hostname and by setting up the SSH access and the default Rancher user. Users can add additional content to these as desired if other configuration is needed. In addition, other requirements are listed below for reference. +There is specific tooling required for both Linux and Windows VMs to be usable by the vSphere node driver. The most critical dependency is [cloud-init](https://cloud-init.io/) for Linux and [cloudbase-init](https://cloudbase.it/cloudbase-init/) for Windows. Both of these are used for provisioning the VMs by configuring the hostname and by setting up the SSH access and the default Rancher user. Users can add more content to these as desired if other configuration is needed. In addition, other requirements are listed below for reference. :::note diff --git a/versioned_docs/version-2.8/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md b/versioned_docs/version-2.8/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md index 29b51149c972..ede0c2af94e7 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md @@ -10,7 +10,7 @@ The Monitoring app sets `prometheus.prometheusSpec.ignoreNamespaceSelectors=fals This ensures you can view traffic, metrics and graphs for resources deployed in a namespace with `istio-injection=enabled` label. -If you would like to limit Prometheus to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`. Once you do this, you will need to add additional configuration to continue to monitor your resources. +If you would like to limit Prometheus to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`. Once you do this, you must perform some additional configuration to continue to monitor your resources. 
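One possible shape for that extra configuration, sketched here with placeholder names against the chart's values, is an additional scrape job pinned to the namespaces you still want discovered:

```yaml
prometheus:
  prometheusSpec:
    ignoreNamespaceSelectors: true
    additionalScrapeConfigs:
      - job_name: my-app                 # placeholder job name
        kubernetes_sd_configs:
          - role: endpoints
            namespaces:
              names:
                - my-app-namespace       # only discover targets in this namespace
```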
### Limiting Monitoring to Specific Namespaces by Setting ignoreNamespaceSelectors to True diff --git a/versioned_docs/version-2.8/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md b/versioned_docs/version-2.8/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md index d6d2ccd67e27..65707e0980ae 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows.md @@ -32,7 +32,7 @@ For detailed examples on using the match statement, see the [official documentat ### Filters -You can define one or more filters within a `Flow`. Filters can perform various actions on the logs, for example, add additional data, transform the logs, or parse values from the records. The filters in the `Flow` are applied in the order in the definition. +You can define one or more filters within a `Flow`. Filters can perform various actions on the logs, such as adding data, transforming the logs, or parsing values from the records. The filters in the `Flow` are applied in the same order they appear in the definition. For a list of filters supported by the Logging operator, see [the official documentation on Fluentd filters](https://kube-logging.github.io/docs/configuration/plugins/filters/). diff --git a/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md b/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md index bea67b1dc8f3..8f14e7faa069 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md @@ -88,7 +88,7 @@ A PrometheusRule allows you to define one or more RuleGroups. Each RuleGroup con - Labels that should be attached to the alert or record that identify it (e.g. cluster name or severity) - Annotations that encode any additional important pieces of information that need to be displayed on the notification for an alert (e.g. summary, description, message, runbook URL, etc.). This field is not required for recording rules. -On evaluating a [rule](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#rule), Prometheus will execute the provided PromQL query, add additional provided labels (or annotations - only for alerting rules), and execute the appropriate action for the rule. For example, an Alerting Rule that adds `team: front-end` as a label to the provided PromQL query will append that label to the fired alert, which will allow Alertmanager to forward the alert to the correct Receiver. +Upon evaluating a [rule](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#rule), Prometheus runs the provided PromQL query, adds the provided labels, and runs the appropriate action for the rule. If the rule triggers an alert, Prometheus also adds the provided annotations. For example, an Alerting Rule that adds `team: front-end` as a label to the provided PromQL query will append that label to the fired alert, which will allow Alertmanager to forward the alert to the correct Receiver. 
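As a concrete sketch of that flow (the expression, threshold, and namespace below are placeholders), a PrometheusRule carrying the `team: front-end` label might be declared as:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: front-end-rules
  namespace: cattle-monitoring-system   # assumed monitoring namespace; adjust as needed
spec:
  groups:
    - name: front-end.rules
      rules:
        - alert: FrontEndHighErrorRate
          expr: sum(rate(http_requests_total{job="front-end", code=~"5.."}[5m])) > 1
          for: 10m
          labels:
            team: front-end              # lets Alertmanager route the fired alert
            severity: warning
          annotations:
            summary: The front-end service is returning 5xx responses.
```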
 
 ### Alerting and Recording Rules
 
diff --git a/versioned_docs/version-2.8/pages-for-subheaders/configuration-options.md b/versioned_docs/version-2.8/pages-for-subheaders/configuration-options.md
index 92a375e948db..fdfc51d41bcb 100644
--- a/versioned_docs/version-2.8/pages-for-subheaders/configuration-options.md
+++ b/versioned_docs/version-2.8/pages-for-subheaders/configuration-options.md
@@ -26,7 +26,7 @@ For more information on Overlay Files, refer to the [Istio documentation.](https
 The Monitoring app sets `prometheus.prometheusSpec.ignoreNamespaceSelectors=false` which enables monitoring across all namespaces by default.
 This ensures you can view traffic, metrics and graphs for resources deployed in a namespace with `istio-injection=enabled` label.
 
-If you would like to limit Prometheus to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`. Once you do this, you will need to add additional configuration to continue to monitor your resources.
+If you would like to limit Prometheus to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`. Once you do this, you must perform some additional configuration to continue to monitor your resources.
 
 For details, refer to [this section.](../integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md)
 
diff --git a/versioned_docs/version-2.8/pages-for-subheaders/use-windows-clusters.md b/versioned_docs/version-2.8/pages-for-subheaders/use-windows-clusters.md
index 36fe47c422f9..2e76e0ca874e 100644
--- a/versioned_docs/version-2.8/pages-for-subheaders/use-windows-clusters.md
+++ b/versioned_docs/version-2.8/pages-for-subheaders/use-windows-clusters.md
@@ -120,7 +120,7 @@ The `worker` nodes, which is where your workloads will be deployed on, will typi
 
 #### Recommended Architecture
 
-We recommend the minimum three-node architecture listed in the table below, but you can always add additional Linux and Windows workers to scale up your cluster for redundancy:
+We recommend the minimum three-node architecture listed in the table below, but you can always add more Linux and Windows workers to scale up your cluster for redundancy:
 
 | Node | Operating System | Kubernetes Cluster Role(s) | Purpose |
 | ------ | --------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------- |
diff --git a/versioned_docs/version-2.8/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md b/versioned_docs/version-2.8/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md
index 00f421f760ed..3fd02858c9af 100644
--- a/versioned_docs/version-2.8/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md
+++ b/versioned_docs/version-2.8/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md
@@ -74,7 +74,7 @@ To get your own custom application metrics into Prometheus, you have to collect
 
 ### ServiceMonitors and PodMonitors
 
-Once all your workloads expose metrics in a Prometheus format, you have to configure Prometheus to scrape it. Under the hood Rancher is using the [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator). This makes it easy to add additional scraping targets with ServiceMonitors and PodMonitors. Many Helm charts let you create these monitors directly. You can also find more information in the Rancher documentation.
+Once all of your workloads expose metrics in a Prometheus format, you must configure Prometheus to scrape them. Under the hood, Rancher uses the [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator). This makes it easy to add scraping targets with ServiceMonitors and PodMonitors. Many Helm charts already include an option to create these monitors directly. You can also find more information in the Rancher documentation.
 
 ### Prometheus Push Gateway
 
diff --git a/versioned_docs/version-2.8/reference-guides/cluster-configuration/rancher-server-configuration/k3s-cluster-configuration.md b/versioned_docs/version-2.8/reference-guides/cluster-configuration/rancher-server-configuration/k3s-cluster-configuration.md
index ca377342025d..a5c22997898d 100644
--- a/versioned_docs/version-2.8/reference-guides/cluster-configuration/rancher-server-configuration/k3s-cluster-configuration.md
+++ b/versioned_docs/version-2.8/reference-guides/cluster-configuration/rancher-server-configuration/k3s-cluster-configuration.md
@@ -176,7 +176,7 @@ Truncating hostnames in a cluster improves compatibility with Windows-based syst
 
 ##### TLS Alternate Names
 
-Add additional hostnames or IPv4/IPv6 addresses as Subject Alternative Names on the server TLS cert.
+Add hostnames or IPv4/IPv6 addresses as Subject Alternative Names on the server TLS cert.
 
 ##### Authorized Cluster Endpoint
 
diff --git a/versioned_docs/version-2.8/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md b/versioned_docs/version-2.8/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md
index 402d426b02c3..a2a510c71459 100644
--- a/versioned_docs/version-2.8/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md
+++ b/versioned_docs/version-2.8/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md
@@ -225,7 +225,7 @@ Truncating hostnames in a cluster improves compatibility with Windows-based syst
 
 ##### TLS Alternate Names
 
-Add additional hostnames or IPv4/IPv6 addresses as Subject Alternative Names on the server TLS cert.
+Add hostnames or IPv4/IPv6 addresses as Subject Alternative Names on the server TLS cert.
 
 ##### Authorized Cluster Endpoint
 
diff --git a/versioned_docs/version-2.8/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md b/versioned_docs/version-2.8/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md
index 3bdfbf449d89..bc81ed966c2b 100644
--- a/versioned_docs/version-2.8/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md
+++ b/versioned_docs/version-2.8/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md
@@ -52,7 +52,7 @@ Privileged access is [required.](../../pages-for-subheaders/rancher-on-a-single-
 
 You can now provision node driver clusters from an air-gapped cluster configured to use a proxy for outbound connections.
 
-In addition to setting the default rules for a proxy server as shown above, you will need to add additional rules, shown below, to provision node driver clusters from a proxied Rancher environment.
+In addition to setting the default rules for a proxy server as shown above, you must also add the following rules to provision node driver clusters from a proxied Rancher environment:
 
 You will configure your filepath according to your setup, e.g., `/etc/apt/apt.conf.d/proxy.conf`:
 

From b482e74b2906b04c865e269616256a653dfb628b Mon Sep 17 00:00:00 2001
From: martyav
Date: Thu, 7 Dec 2023 10:06:15 -0500
Subject: [PATCH 59/65] rm deprecation banner from v2.8

---
 docusaurus.config.js | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docusaurus.config.js b/docusaurus.config.js
index c99708cecfd2..c8bfef2d7946 100644
--- a/docusaurus.config.js
+++ b/docusaurus.config.js
@@ -187,6 +187,7 @@ module.exports = {
       2.8: {
         label: 'v2.8',
         path: 'v2.8',
+        banner: 'none'
       },
       2.7: {
         label: 'v2.7',

From e9b3fe89a96bb4f32cc73ddb351b155bb4d49213 Mon Sep 17 00:00:00 2001
From: Marty Hernandez Avedon
Date: Thu, 7 Dec 2023 10:10:35 -0500
Subject: [PATCH 60/65] rm deprecation banner from v2.8 (#1023)

---
 docusaurus.config.js | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docusaurus.config.js b/docusaurus.config.js
index c99708cecfd2..c8bfef2d7946 100644
--- a/docusaurus.config.js
+++ b/docusaurus.config.js
@@ -187,6 +187,7 @@ module.exports = {
       2.8: {
         label: 'v2.8',
         path: 'v2.8',
+        banner: 'none'
       },
       2.7: {
         label: 'v2.7',

From e45997ac25a72f5fa5bce02909f0edf3768b348a Mon Sep 17 00:00:00 2001
From: Billy Tat
Date: Thu, 7 Dec 2023 07:28:54 -0800
Subject: [PATCH 61/65] [skip ci] Update README to reflect current versioning model (#1022)

* [skip ci] Update README to reflect current versioning model

* Update README.md

* Update README.md

---------

Co-authored-by: Marty Hernandez Avedon
---
 README.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 1812e0d86f75..10b1eb7626df 100644
--- a/README.md
+++ b/README.md
@@ -15,9 +15,11 @@ To get started, [fork](https://github.com/rancher/rancher-docs/fork) and clone t
 
 Our repository doesn't allow you to make changes directly to the `main` branch. Create a working branch and make pull requests from your fork to [rancher/rancher-docs](https://github.com/rancher/rancher-docs).
 
-For most updates, you'll need to edit a file in `/docs`, and the corresponding file in `/versioned_docs/version-2.7`. If a change affects older versions, you can find files documenting Rancher v2.0 and later in the `/versioned_docs` directory.
+For most updates, you'll need to edit a file in the `/docs` directory, which represents the ["Latest"](https://ranchermanager.docs.rancher.com/) version of our published documentation. The "Latest" version is a mirror of the most recently released version of Rancher. As of December 2023, the most recently released version of Rancher is 2.8.
+
+Whenever an update is made to `/docs`, you should apply the same change to the corresponding file in `/versioned_docs/version-2.8`. If a change only affects older versions, you don't need to mirror it to the `/docs` directory.
 
-If a file is moved or renamed, you'll also need to edit the `sidebars.js` files for each version, and the list of redirects in `docusaurus.config.js`. See [Moving or Renaming Docs](./moving-or-renaming-docs.md).
+If a file is moved or renamed, you'll also need to edit the `sidebars.js` files for each affected version, as well as the list of redirects in `docusaurus.config.js`. See [Moving or Renaming Docs](./moving-or-renaming-docs.md).
 
 ### Navigate the Repo
 

From b2fdf7554f95de26a0ea82cbe603ae0928a69e3f Mon Sep 17 00:00:00 2001
From: Marty Hernandez Avedon
Date: Fri, 8 Dec 2023 10:27:21 -0500
Subject: [PATCH 62/65] Apply suggestions from code review

---
 .../vsphere/create-a-vm-template.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md b/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md
index 9b48cde09665..793e9a338daa 100644
--- a/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md
+++ b/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md
@@ -24,8 +24,8 @@ If you have any specific firewall rules or configuration, you will need to add t
 ## Linux Dependencies
 
 The packages that need to be installed on the template are listed below. These will have slightly different names based on distribution; some distributions ship these by default, for example.
-These dependencies are souly what is required for Rancher's cluster provisioner to work.
-Additional dependencies required by Kubernetes will be installed automatically by the cluster provisioner.
+These dependencies are required for the functioning of the Rancher cluster provisioner.
+The cluster provisioner automatically installs additional dependencies required for Kubernetes.
 
 * curl
 * wget

From 988c09001b5eaf8c9b3af8c3301197697e54a0c4 Mon Sep 17 00:00:00 2001
From: Marty Hernandez Avedon
Date: Fri, 8 Dec 2023 10:44:05 -0500
Subject: [PATCH 63/65] Update create-a-vm-template.md

---
 .../vsphere/create-a-vm-template.md | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md b/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md
index 793e9a338daa..e410449d5628 100644
--- a/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md
+++ b/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md
@@ -23,9 +23,7 @@ If you have any specific firewall rules or configuration, you will need to add t
 
 ## Linux Dependencies
 
-The packages that need to be installed on the template are listed below. These will have slightly different names based on distribution; some distributions ship these by default, for example.
-These dependencies are required for the functioning of the Rancher cluster provisioner.
-The cluster provisioner automatically installs additional dependencies required for Kubernetes.
+The packages that need to be installed on the template are listed below. These will have slightly different names based on distribution; for example, some distributions ship these dependencies by default. The dependencies listed here are required for the functioning of the Rancher cluster provisioner. The cluster provisioner automatically installs additional dependencies required for Kubernetes:
 
 * curl
 * wget

From 49f2cf324064ed33861745f30ef32f6f84c5a4ce Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Andreas=20Lindh=C3=A9?=
Date: Fri, 8 Dec 2023 16:49:47 +0100
Subject: [PATCH 64/65] Update create-a-vm-template.md

---
 .../vsphere/create-a-vm-template.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md b/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md
index e410449d5628..ec4622a92e11 100644
--- a/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md
+++ b/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md
@@ -23,7 +23,7 @@ If you have any specific firewall rules or configuration, you will need to add t
 
 ## Linux Dependencies
 
-The packages that need to be installed on the template are listed below. These will have slightly different names based on distribution; for example, some distributions ship these dependencies by default. The dependencies listed here are required for the functioning of the Rancher cluster provisioner. The cluster provisioner automatically installs additional dependencies required for Kubernetes:
+The packages that need to be installed on the template are listed below. These will have slightly different names based on distribution; for example, some distributions ship these dependencies by default. The cluster provisioner will automatically install the dependencies required for Kubernetes. The dependencies listed below are required for the functioning of the Rancher cluster provisioner (not for Kubernetes):
 
 * curl
 * wget

From b368a24b5a0e5916d085ba03dd292dd785fa81c1 Mon Sep 17 00:00:00 2001
From: martyav
Date: Fri, 8 Dec 2023 11:15:05 -0500
Subject: [PATCH 65/65] versioning

---
 .../vsphere/create-a-vm-template.md | 2 +-
 .../vsphere/create-a-vm-template.md | 2 +-
 .../vsphere/create-a-vm-template.md | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/versioned_docs/version-2.6/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md b/versioned_docs/version-2.6/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md
index e2974c08d3f0..cfdd8da1f199 100644
--- a/versioned_docs/version-2.6/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md
+++ b/versioned_docs/version-2.6/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md
@@ -23,7 +23,7 @@ If you have any specific firewall rules or configuration, you will need to add t
 
 ## Linux Dependencies
 
-The packages that need to be installed on the template are listed below. These will have slightly different names based on distribution; some distributions ship these by default, for example.
+The packages that need to be installed on the template are listed below. These will have slightly different names based on distribution; for example, some distributions ship these dependencies by default. The cluster provisioner will automatically install the dependencies required for Kubernetes. The dependencies listed below are required for the functioning of the Rancher cluster provisioner (not for Kubernetes):
 
 * curl
 * wget
diff --git a/versioned_docs/version-2.7/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md b/versioned_docs/version-2.7/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md
index e2974c08d3f0..cfdd8da1f199 100644
--- a/versioned_docs/version-2.7/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md
+++ b/versioned_docs/version-2.7/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md
@@ -23,7 +23,7 @@ If you have any specific firewall rules or configuration, you will need to add t
 
 ## Linux Dependencies
 
-The packages that need to be installed on the template are listed below. These will have slightly different names based on distribution; some distributions ship these by default, for example.
+The packages that need to be installed on the template are listed below. These will have slightly different names based on distribution; for example, some distributions ship these dependencies by default. The cluster provisioner will automatically install the dependencies required for Kubernetes. The dependencies listed below are required for the functioning of the Rancher cluster provisioner (not for Kubernetes):
 
 * curl
 * wget
diff --git a/versioned_docs/version-2.8/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md b/versioned_docs/version-2.8/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md
index e2974c08d3f0..cfdd8da1f199 100644
--- a/versioned_docs/version-2.8/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md
+++ b/versioned_docs/version-2.8/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template.md
@@ -23,7 +23,7 @@ If you have any specific firewall rules or configuration, you will need to add t
 
 ## Linux Dependencies
 
-The packages that need to be installed on the template are listed below. These will have slightly different names based on distribution; some distributions ship these by default, for example.
+The packages that need to be installed on the template are listed below. These will have slightly different names based on distribution; for example, some distributions ship these dependencies by default. The cluster provisioner will automatically install the dependencies required for Kubernetes. The dependencies listed below are required for the functioning of the Rancher cluster provisioner (not for Kubernetes):
 
 * curl
 * wget
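Several of the patches above revise the Linux dependency list for vSphere VM templates, and the series elsewhere names cloud-init as the critical provisioning tool. As a hedged sketch only (not part of the patch series), template user-data along these lines could preinstall the two packages visible in the truncated hunks above, assuming an image whose package manager cloud-init supports:

```yaml
#cloud-config
# Hedged sketch: preinstall the provisioner dependencies named above.
# Package names can differ by distribution, as the patched text notes.
package_update: true
packages:
  - curl
  - wget
```

On first boot, cloud-init resolves these names against the image's package manager, which keeps distribution-specific naming differences in one place.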