From bdd313771cb37aa2e73a9ab4b137050cb8284ec0 Mon Sep 17 00:00:00 2001
From: Rich Loveland
Date: Mon, 29 Jul 2024 10:42:43 -0400
Subject: [PATCH] Fix hardcoded link versions: v23.1 through v24.2 (#18767)

---
 .../_includes/v23.1/backups/serverless-locality-aware.md  | 2 +-
 .../_includes/v23.2/backups/serverless-locality-aware.md  | 2 +-
 .../_includes/v24.1/backups/serverless-locality-aware.md  | 2 +-
 .../_includes/v24.2/backups/serverless-locality-aware.md  | 2 +-
 src/current/cockroachcloud/backup-and-restore-overview.md | 2 +-
 src/current/cockroachcloud/cmek.md                        | 2 +-
 src/current/v23.2/operational-faqs.md                     | 2 +-
 src/current/v23.2/start-a-local-cluster.md                | 4 ++--
 src/current/v24.1/work-with-virtual-clusters.md           | 4 ++--
 src/current/v24.2/cockroach-start.md                      | 2 +-
 src/current/v24.2/operational-faqs.md                     | 2 +-
 src/current/v24.2/start-a-local-cluster.md                | 2 +-
 src/current/v24.2/work-with-virtual-clusters.md           | 4 ++--
 13 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/src/current/_includes/v23.1/backups/serverless-locality-aware.md b/src/current/_includes/v23.1/backups/serverless-locality-aware.md
index 18110607e3a..eef7701884e 100644
--- a/src/current/_includes/v23.1/backups/serverless-locality-aware.md
+++ b/src/current/_includes/v23.1/backups/serverless-locality-aware.md
@@ -1 +1 @@
-CockroachDB {{ site.data.products.serverless }} clusters operate with a [different architecture]({% link cockroachcloud/architecture.md %}#cockroachdb-serverless) compared to CockroachDB {{ site.data.products.core }} and CockroachDB {{ site.data.products.dedicated }} clusters. These architectural differences have implications for how locality-aware backups can run. Serverless clusters will scale resources depending on whether they are actively in use, which means that it is less likely to have a SQL pod available in every locality. As a result, your Serverless cluster may not have a SQL pod in the locality where the data resides, which can lead to the cluster uploading that data to a storage bucket in a locality where you do have active SQL pods. You should consider this as you plan a backup strategy that must comply with [data domiciling]({% link v23.1/data-domiciling.md %}) requirements.
\ No newline at end of file
+CockroachDB {{ site.data.products.serverless }} clusters operate with a [different architecture]({% link cockroachcloud/architecture.md %}#cockroachdb-serverless) compared to CockroachDB {{ site.data.products.core }} and CockroachDB {{ site.data.products.dedicated }} clusters. These architectural differences have implications for how locality-aware backups can run. Serverless clusters will scale resources depending on whether they are actively in use, which means that it is less likely to have a SQL pod available in every locality. As a result, your Serverless cluster may not have a SQL pod in the locality where the data resides, which can lead to the cluster uploading that data to a storage bucket in a locality where you do have active SQL pods. You should consider this as you plan a backup strategy that must comply with {% if page.title contains "Cloud" or page.title contains "Serverless" %} [data domiciling]({% link {{site.current_cloud_version}}/data-domiciling.md %}) {% else %} [data domiciling]({% link {{page.version.version}}/data-domiciling.md %}) {% endif %} requirements.
diff --git a/src/current/_includes/v23.2/backups/serverless-locality-aware.md b/src/current/_includes/v23.2/backups/serverless-locality-aware.md
index 456ee436781..eef7701884e 100644
--- a/src/current/_includes/v23.2/backups/serverless-locality-aware.md
+++ b/src/current/_includes/v23.2/backups/serverless-locality-aware.md
@@ -1 +1 @@
-CockroachDB {{ site.data.products.serverless }} clusters operate with a [different architecture]({% link cockroachcloud/architecture.md %}#cockroachdb-serverless) compared to CockroachDB {{ site.data.products.core }} and CockroachDB {{ site.data.products.dedicated }} clusters. These architectural differences have implications for how locality-aware backups can run. Serverless clusters will scale resources depending on whether they are actively in use, which means that it is less likely to have a SQL pod available in every locality. As a result, your Serverless cluster may not have a SQL pod in the locality where the data resides, which can lead to the cluster uploading that data to a storage bucket in a locality where you do have active SQL pods. You should consider this as you plan a backup strategy that must comply with [data domiciling]({% link v23.2/data-domiciling.md %}) requirements.
\ No newline at end of file
+CockroachDB {{ site.data.products.serverless }} clusters operate with a [different architecture]({% link cockroachcloud/architecture.md %}#cockroachdb-serverless) compared to CockroachDB {{ site.data.products.core }} and CockroachDB {{ site.data.products.dedicated }} clusters. These architectural differences have implications for how locality-aware backups can run. Serverless clusters will scale resources depending on whether they are actively in use, which means that it is less likely to have a SQL pod available in every locality. As a result, your Serverless cluster may not have a SQL pod in the locality where the data resides, which can lead to the cluster uploading that data to a storage bucket in a locality where you do have active SQL pods. You should consider this as you plan a backup strategy that must comply with {% if page.title contains "Cloud" or page.title contains "Serverless" %} [data domiciling]({% link {{site.current_cloud_version}}/data-domiciling.md %}) {% else %} [data domiciling]({% link {{page.version.version}}/data-domiciling.md %}) {% endif %} requirements.
diff --git a/src/current/_includes/v24.1/backups/serverless-locality-aware.md b/src/current/_includes/v24.1/backups/serverless-locality-aware.md
index 456ee436781..eef7701884e 100644
--- a/src/current/_includes/v24.1/backups/serverless-locality-aware.md
+++ b/src/current/_includes/v24.1/backups/serverless-locality-aware.md
@@ -1 +1 @@
-CockroachDB {{ site.data.products.serverless }} clusters operate with a [different architecture]({% link cockroachcloud/architecture.md %}#cockroachdb-serverless) compared to CockroachDB {{ site.data.products.core }} and CockroachDB {{ site.data.products.dedicated }} clusters. These architectural differences have implications for how locality-aware backups can run. Serverless clusters will scale resources depending on whether they are actively in use, which means that it is less likely to have a SQL pod available in every locality. As a result, your Serverless cluster may not have a SQL pod in the locality where the data resides, which can lead to the cluster uploading that data to a storage bucket in a locality where you do have active SQL pods. You should consider this as you plan a backup strategy that must comply with [data domiciling]({% link v23.2/data-domiciling.md %}) requirements.
\ No newline at end of file
+CockroachDB {{ site.data.products.serverless }} clusters operate with a [different architecture]({% link cockroachcloud/architecture.md %}#cockroachdb-serverless) compared to CockroachDB {{ site.data.products.core }} and CockroachDB {{ site.data.products.dedicated }} clusters. These architectural differences have implications for how locality-aware backups can run. Serverless clusters will scale resources depending on whether they are actively in use, which means that it is less likely to have a SQL pod available in every locality. As a result, your Serverless cluster may not have a SQL pod in the locality where the data resides, which can lead to the cluster uploading that data to a storage bucket in a locality where you do have active SQL pods. You should consider this as you plan a backup strategy that must comply with {% if page.title contains "Cloud" or page.title contains "Serverless" %} [data domiciling]({% link {{site.current_cloud_version}}/data-domiciling.md %}) {% else %} [data domiciling]({% link {{page.version.version}}/data-domiciling.md %}) {% endif %} requirements.
diff --git a/src/current/_includes/v24.2/backups/serverless-locality-aware.md b/src/current/_includes/v24.2/backups/serverless-locality-aware.md
index 456ee436781..eef7701884e 100644
--- a/src/current/_includes/v24.2/backups/serverless-locality-aware.md
+++ b/src/current/_includes/v24.2/backups/serverless-locality-aware.md
@@ -1 +1 @@
-CockroachDB {{ site.data.products.serverless }} clusters operate with a [different architecture]({% link cockroachcloud/architecture.md %}#cockroachdb-serverless) compared to CockroachDB {{ site.data.products.core }} and CockroachDB {{ site.data.products.dedicated }} clusters. These architectural differences have implications for how locality-aware backups can run. Serverless clusters will scale resources depending on whether they are actively in use, which means that it is less likely to have a SQL pod available in every locality. As a result, your Serverless cluster may not have a SQL pod in the locality where the data resides, which can lead to the cluster uploading that data to a storage bucket in a locality where you do have active SQL pods. You should consider this as you plan a backup strategy that must comply with [data domiciling]({% link v23.2/data-domiciling.md %}) requirements.
\ No newline at end of file
+CockroachDB {{ site.data.products.serverless }} clusters operate with a [different architecture]({% link cockroachcloud/architecture.md %}#cockroachdb-serverless) compared to CockroachDB {{ site.data.products.core }} and CockroachDB {{ site.data.products.dedicated }} clusters. These architectural differences have implications for how locality-aware backups can run. Serverless clusters will scale resources depending on whether they are actively in use, which means that it is less likely to have a SQL pod available in every locality. As a result, your Serverless cluster may not have a SQL pod in the locality where the data resides, which can lead to the cluster uploading that data to a storage bucket in a locality where you do have active SQL pods. You should consider this as you plan a backup strategy that must comply with {% if page.title contains "Cloud" or page.title contains "Serverless" %} [data domiciling]({% link {{site.current_cloud_version}}/data-domiciling.md %}) {% else %} [data domiciling]({% link {{page.version.version}}/data-domiciling.md %}) {% endif %} requirements.
diff --git a/src/current/cockroachcloud/backup-and-restore-overview.md b/src/current/cockroachcloud/backup-and-restore-overview.md
index 3ff856c965c..3eaaa5a43fb 100644
--- a/src/current/cockroachcloud/backup-and-restore-overview.md
+++ b/src/current/cockroachcloud/backup-and-restore-overview.md
@@ -217,4 +217,4 @@ For practical examples of running backup and restore jobs, watch the following v
 
 - Considerations for using [backup](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/backup#considerations) and [restore](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/restore#considerations).
 - [Backup collections](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/take-full-and-incremental-backups#backup-collections) for details on how CockroachDB stores backups.
-- [Restoring backups](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/restoring-backups-across-versions) across major versions of CockroachDB.
\ No newline at end of file
+- [Restoring backups](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/restoring-backups-across-versions) across major versions of CockroachDB.
diff --git a/src/current/cockroachcloud/cmek.md b/src/current/cockroachcloud/cmek.md
index eb4603ff144..8aed789afa8 100644
--- a/src/current/cockroachcloud/cmek.md
+++ b/src/current/cockroachcloud/cmek.md
@@ -35,7 +35,7 @@ This section describes some of the ways that CMEK can help you protect your data
 
 {{site.data.alerts.end}}
 
-- **Enforcement of data domiciling and locality requirements**: In a multi-region cluster, you can confine an individual database to a single region or multiple regions. For more information and limitations, see [Data Domiciling with CockroachDB](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/data-domiciling). When you enable CMEK on a multi-region cluster, you can optionally assign a separate CMEK key to each region, or use the same CMEK key for multiple related regions.
+- **Enforcement of data domiciling and locality requirements**: In a multi-region cluster, you can confine an individual database to a single region or multiple regions. For more information and limitations, see [Data Domiciling with CockroachDB]({% link {{site.current_cloud_version}}/data-domiciling.md %}). When you enable CMEK on a multi-region cluster, you can optionally assign a separate CMEK key to each region, or use the same CMEK key for multiple related regions.
 
 - **Enforcement of encryption requirements**: With CMEK, you have control the CMEK key's encryption strength. The CMEK key's size is determined by what your KMS provider supports. You can use your KMS platform's controls to configure the regions where the CMEK key is available, enable automatic rotation schedules for CMEK keys, and view audit logs that show each time the CMEK key is used by CockroachDB {{ site.data.products.cloud }}. CockroachDB {{ site.data.products.cloud }} does not need any visibility into these details.
 
diff --git a/src/current/v23.2/operational-faqs.md b/src/current/v23.2/operational-faqs.md
index d482b34322d..6fa24aa8336 100644
--- a/src/current/v23.2/operational-faqs.md
+++ b/src/current/v23.2/operational-faqs.md
@@ -9,7 +9,7 @@ docs_area: get_started
 ## Why is my process hanging when I try to start nodes with the `--background` flag?
 
 {{site.data.alerts.callout_info}}
-Cockroach Labs recommends against using the `--background` flag when starting a cluster. In production, operators usually use a process manager like `systemd` to start and manage the `cockroach` process on each node. Refer to [Deploy CockroachDB On-Premises]({% link v23.1/deploy-cockroachdb-on-premises.md %}?filters=systemd). When testing locally, starting nodes in the foreground is recommended so you can monitor the runtime closely.
+Cockroach Labs recommends against using the `--background` flag when starting a cluster. In production, operators usually use a process manager like `systemd` to start and manage the `cockroach` process on each node. Refer to [Deploy CockroachDB On-Premises]({% link {{page.version.version}}/deploy-cockroachdb-on-premises.md %}?filters=systemd). When testing locally, starting nodes in the foreground is recommended so you can monitor the runtime closely.
 
 If you do use `--background`, you should also set `--pid-file`. To stop or restart a cluster, send `SIGTERM` or `SIGHUP` signal to the process ID in the PID file.
 {{site.data.alerts.end}}
diff --git a/src/current/v23.2/start-a-local-cluster.md b/src/current/v23.2/start-a-local-cluster.md
index c16ef3d12a1..170935f3bca 100644
--- a/src/current/v23.2/start-a-local-cluster.md
+++ b/src/current/v23.2/start-a-local-cluster.md
@@ -26,7 +26,7 @@ The store directory is `cockroach-data/` in the same directory as the `cockroach
 
 ## Step 1. Start the cluster
 
-This section shows how to start a cluster interactively. In production, operators usually use a process manager like `systemd` to start and manage the `cockroach` process on each node. Refer to [Deploy CockroachDB On-Premises]({% link v23.1/deploy-cockroachdb-on-premises.md %}?filters=systemd).
+This section shows how to start a cluster interactively. In production, operators usually use a process manager like `systemd` to start and manage the `cockroach` process on each node. Refer to [Deploy CockroachDB On-Premises]({% link {{page.version.version}}/deploy-cockroachdb-on-premises.md %}?filters=systemd).
 
 1. Use the [`cockroach start`]({% link {{ page.version.version }}/cockroach-start.md %}) command to start the `node1` in the foreground:
 
@@ -43,7 +43,7 @@ This section shows how to start a cluster interactively. In production, operator
    {{site.data.alerts.callout_info}}
    The `--background` flag is not recommended. If you decide to start nodes in the background, you must also pass the `--pid-file` argument. To stop a `cockroach` process running in the background, extract the process ID from the PID file and pass it to the command to [stop the node](#step-7-stop-the-cluster).
 
-   In production, operators usually use a process manager like `systemd` to start and manage the `cockroach` process on each node. Refer to [Deploy CockroachDB On-Premises]({% link v23.1/deploy-cockroachdb-on-premises.md %}?filters=systemd).
+   In production, operators usually use a process manager like `systemd` to start and manage the `cockroach` process on each node. Refer to [Deploy CockroachDB On-Premises]({% link {{page.version.version}}/deploy-cockroachdb-on-premises.md %}?filters=systemd).
    {{site.data.alerts.end}}
 
    You'll see a message like the following:
diff --git a/src/current/v24.1/work-with-virtual-clusters.md b/src/current/v24.1/work-with-virtual-clusters.md
index bb67b3d1c49..b7c16035c0b 100644
--- a/src/current/v24.1/work-with-virtual-clusters.md
+++ b/src/current/v24.1/work-with-virtual-clusters.md
@@ -84,8 +84,8 @@ To connect to the system virtual cluster using the DB Console, add the `GET` URL
 To [grant]({% link {{ page.version.version }}/grant.md %}) access to the system virtual cluster, you must connect to the system virtual cluster as a user with the `admin` role, then grant either of the following to the SQL user:
 
-- The `admin` [role]({% link v23.2/security-reference/authorization.md %}#admin-role) grants the ability to read and modify system tables and cluster settings on any virtual cluster, including the system virtual cluster.
-- The `VIEWSYSTEMDATA` [system privilege]({% link v23.2/security-reference/authorization.md %}#supported-privileges) grants the ability to read system tables and cluster settings on any virtual cluster, including the system virtual cluster.
+- The `admin` [role]({% link {{page.version.version}}/security-reference/authorization.md %}#admin-role) grants the ability to read and modify system tables and cluster settings on any virtual cluster, including the system virtual cluster.
+- The `VIEWSYSTEMDATA` [system privilege]({% link {{page.version.version}}/security-reference/authorization.md %}#supported-privileges) grants the ability to read system tables and cluster settings on any virtual cluster, including the system virtual cluster.
 
 To prevent unauthorized access, you should limit the users with access to the system virtual cluster.
 
diff --git a/src/current/v24.2/cockroach-start.md b/src/current/v24.2/cockroach-start.md
index 346699e8fbd..b668d4f83fb 100644
--- a/src/current/v24.2/cockroach-start.md
+++ b/src/current/v24.2/cockroach-start.md
@@ -263,7 +263,7 @@ Therefore, if you enable WAL failover, you must also update your [logging]({% li
 - (**Recommended**) Configure [remote log sinks]({% link {{page.version.version}}/logging-use-cases.md %}#network-logging) that are not correlated with the availability of your cluster's local disks.
 - If you must log to local disks:
     1. Disable [audit logging]({% link {{ page.version.version }}/sql-audit-logging.md %}). File-based audit logging and the WAL failover feature cannot coexist. File-based audit logging provides guarantees that every log message makes it to disk, otherwise CockroachDB needs to shut down. Because of this, resuming operations in the face of disk unavailability is not compatible with audit logging.
-    1. Enable asynchronous buffering of [`file-groups` log sinks]({% link {{ page.version.version }}/configure-logs.md %}#output-to-files) using the `buffering` configuration option. The `buffering` configuration can be applied to [`file-defaults`]({% link {{ page.version.version }}/configure-logs.md %}#configure-logging-defaults) or individual `file-groups` as needed. Note that enabling asynchronous buffering of `file-groups` log sinks is in [preview]({% link v24.1/cockroachdb-feature-availability.md %}#features-in-preview).
+    1. Enable asynchronous buffering of [`file-groups` log sinks]({% link {{ page.version.version }}/configure-logs.md %}#output-to-files) using the `buffering` configuration option. The `buffering` configuration can be applied to [`file-defaults`]({% link {{ page.version.version }}/configure-logs.md %}#configure-logging-defaults) or individual `file-groups` as needed. Note that enabling asynchronous buffering of `file-groups` log sinks is in [preview]({% link {{page.version.version}}/cockroachdb-feature-availability.md %}#features-in-preview).
    1. Set `max-staleness: 1s` and `flush-trigger-size: 256KiB`.
    1. When `buffering` is enabled, `buffered-writes` must be explicitly disabled as shown below. This is necessary because `buffered-writes` does not provide true asynchronous disk access, but rather a small buffer. If the small buffer fills up, it can cause internal routines performing logging operations to hang. This in turn will cause internal routines doing other important work to hang, potentially affecting cluster stability.
    1. The recommended logging configuration for using file-based logging with WAL failover is as follows:
diff --git a/src/current/v24.2/operational-faqs.md b/src/current/v24.2/operational-faqs.md
index d482b34322d..6fa24aa8336 100644
--- a/src/current/v24.2/operational-faqs.md
+++ b/src/current/v24.2/operational-faqs.md
@@ -9,7 +9,7 @@ docs_area: get_started
 ## Why is my process hanging when I try to start nodes with the `--background` flag?
 
 {{site.data.alerts.callout_info}}
-Cockroach Labs recommends against using the `--background` flag when starting a cluster. In production, operators usually use a process manager like `systemd` to start and manage the `cockroach` process on each node. Refer to [Deploy CockroachDB On-Premises]({% link v23.1/deploy-cockroachdb-on-premises.md %}?filters=systemd). When testing locally, starting nodes in the foreground is recommended so you can monitor the runtime closely.
+Cockroach Labs recommends against using the `--background` flag when starting a cluster. In production, operators usually use a process manager like `systemd` to start and manage the `cockroach` process on each node. Refer to [Deploy CockroachDB On-Premises]({% link {{page.version.version}}/deploy-cockroachdb-on-premises.md %}?filters=systemd). When testing locally, starting nodes in the foreground is recommended so you can monitor the runtime closely.
 
 If you do use `--background`, you should also set `--pid-file`. To stop or restart a cluster, send `SIGTERM` or `SIGHUP` signal to the process ID in the PID file.
 {{site.data.alerts.end}}
diff --git a/src/current/v24.2/start-a-local-cluster.md b/src/current/v24.2/start-a-local-cluster.md
index 97dbfb46a23..55d59491f21 100644
--- a/src/current/v24.2/start-a-local-cluster.md
+++ b/src/current/v24.2/start-a-local-cluster.md
@@ -26,7 +26,7 @@ The store directory is `cockroach-data/` in the same directory as the `cockroach
 
 ## Step 1. Start the cluster
 
-This section shows how to start a cluster interactively. In production, operators usually use a process manager like `systemd` to start and manage the `cockroach` process on each node. Refer to [Deploy CockroachDB On-Premises]({% link v23.1/deploy-cockroachdb-on-premises.md %}?filters=systemd).
+This section shows how to start a cluster interactively. In production, operators usually use a process manager like `systemd` to start and manage the `cockroach` process on each node. Refer to [Deploy CockroachDB On-Premises]({% link {{page.version.version}}/deploy-cockroachdb-on-premises.md %}?filters=systemd).
 
 1. Use the [`cockroach start`]({% link {{ page.version.version }}/cockroach-start.md %}) command to start the `node1` in the foreground:
 
diff --git a/src/current/v24.2/work-with-virtual-clusters.md b/src/current/v24.2/work-with-virtual-clusters.md
index 49545dfe221..507c01bd8d4 100644
--- a/src/current/v24.2/work-with-virtual-clusters.md
+++ b/src/current/v24.2/work-with-virtual-clusters.md
@@ -84,8 +84,8 @@ To connect to the system virtual cluster using the DB Console, add the `GET` URL
 To [grant]({% link {{ page.version.version }}/grant.md %}) access to the system virtual cluster, you must connect to the system virtual cluster as a user with the `admin` role, then grant either of the following to the SQL user:
 
-- The `admin` [role]({% link v23.2/security-reference/authorization.md %}#admin-role) grants the ability to read and modify system tables and cluster settings on any virtual cluster, including the system virtual cluster.
-- The `VIEWSYSTEMDATA` [system privilege]({% link v23.2/security-reference/authorization.md %}#supported-privileges) grants the ability to read system tables and cluster settings on any virtual cluster, including the system virtual cluster.
+- The `admin` [role]({% link {{page.version.version}}/security-reference/authorization.md %}#admin-role) grants the ability to read and modify system tables and cluster settings on any virtual cluster, including the system virtual cluster.
+- The `VIEWSYSTEMDATA` [system privilege]({% link {{page.version.version}}/security-reference/authorization.md %}#supported-privileges) grants the ability to read system tables and cluster settings on any virtual cluster, including the system virtual cluster.
 
 To prevent unauthorized access, you should limit the users with access to the system virtual cluster.
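
Note (not part of the patch): the mechanical pattern this patch applies by hand in versioned pages is swapping a hardcoded `vX.Y` prefix inside a Liquid `{% link %}` tag for `{{page.version.version}}`, so the link follows the page's own version. A small sketch of that substitution is below; the script, regex, and function name are hypothetical illustrations, not tooling from the docs repo:

```python
import re

# Match a Liquid link tag with a hardcoded version prefix, e.g.
# {% link v23.1/data-domiciling.md %}, capturing the path after "vX.Y/".
VERSIONED_LINK = re.compile(r"\{%\s*link\s+v\d+\.\d+/([^\s%]+)\s*%\}")

def unhardcode_links(text: str) -> str:
    """Rewrite hardcoded vX.Y link targets to use {{page.version.version}}."""
    return VERSIONED_LINK.sub(r"{% link {{page.version.version}}/\1 %}", text)

print(unhardcode_links(
    "Refer to [Deploy]({% link v23.1/deploy-cockroachdb-on-premises.md %}?filters=systemd)."
))
# -> Refer to [Deploy]({% link {{page.version.version}}/deploy-cockroachdb-on-premises.md %}?filters=systemd).
```

A blind substitution like this cannot handle the four `serverless-locality-aware.md` includes, where the patch instead branches on `page.title` so that Cloud pages resolve the link through `site.current_cloud_version` and versioned pages through `page.version.version` — that context-dependent choice has to be made per include, as the patch does.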