From 2e29c5a754297dbe13e910657dbcfeaad22418e7 Mon Sep 17 00:00:00 2001
From: Kathryn May
Date: Thu, 14 Nov 2024 13:28:01 -0500
Subject: [PATCH] Rich's feedback

---
 .../v24.3/ldr/show-logical-replication-responses.md      | 2 +-
 src/current/v24.3/logical-data-replication-monitoring.md | 8 ++++----
 src/current/v24.3/logical-data-replication-overview.md   | 8 ++++----
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/src/current/_includes/v24.3/ldr/show-logical-replication-responses.md b/src/current/_includes/v24.3/ldr/show-logical-replication-responses.md
index 3b8fce3b67c..9844b431a50 100644
--- a/src/current/_includes/v24.3/ldr/show-logical-replication-responses.md
+++ b/src/current/_includes/v24.3/ldr/show-logical-replication-responses.md
@@ -3,7 +3,7 @@ Field | Response
 `job_id` | The job's ID. Use with [`CANCEL JOB`]({% link {{ page.version.version }}/cancel-job.md %}), [`PAUSE JOB`]({% link {{ page.version.version }}/pause-job.md %}), [`RESUME JOB`]({% link {{ page.version.version }}/resume-job.md %}), [`SHOW JOB`]({% link {{ page.version.version }}/show-jobs.md %}).
 `status` | Status of the job `running`, `paused`, `canceled`. {% comment %}check these{% endcomment %}
 `targets` | The fully qualified name of the table(s) that are part of the LDR job.
-`replicated_time` | The latest timestamp at which the destination cluster has consistent data. This time advances automatically as long as the LDR job proceeds without error. `replicated_time` is updated periodically (every 30s). {% comment %}To confirm this line is accurate{% endcomment %}
+`replicated_time` | The latest [timestamp]({% link {{ page.version.version }}/timestamp.md %}) at which the destination cluster has consistent data. This time advances automatically as long as the LDR job proceeds without error. `replicated_time` is updated periodically (every 30s). {% comment %}To confirm this line is accurate{% endcomment %}
 `replication_start_time` | The start time of the LDR job.
 `conflict_resolution_type` | The type of [conflict resolution]({% link {{ page.version.version }}/manage-logical-data-replication.md %}#conflict-resolution): `LWW` last write wins.
 `description` | Description of the job including the replicating table(s) and the source cluster connection.
diff --git a/src/current/v24.3/logical-data-replication-monitoring.md b/src/current/v24.3/logical-data-replication-monitoring.md
index cf54b302765..a9071fa3c6a 100644
--- a/src/current/v24.3/logical-data-replication-monitoring.md
+++ b/src/current/v24.3/logical-data-replication-monitoring.md
@@ -61,7 +61,7 @@ SHOW LOGICAL REPLICATION JOBS WITH details;
 
 ## Recommended LDR metrics to track
 
-- Replication latency: The commit-to-commit replication latency, which is tracked from when a row is committed on the source cluster, to when it is "committed" on the destination cluster. A _commit_ is when the LDR job either adds a row to the [dead letter queue (DLQ)]({% link {{ page.version.version }}/manage-logical-data-replication.md %}#dead-letter-queue-dlq) or applies a row successfully to the destination cluster.
+- Replication latency: The commit-to-commit replication latency, which is tracked from when a row is committed on the source cluster, to when it is applied on the destination cluster. An LDR _commit_ is when the job either applies a row successfully to the destination cluster or adds a row to the [dead letter queue (DLQ)]({% link {{ page.version.version }}/manage-logical-data-replication.md %}#dead-letter-queue-dlq).
     - `logical_replication.commit_latency-p50`
     - `logical_replication.commit_latency-p99`
 - Replication lag: How far behind the source cluster is from the destination cluster at a specific point in time. The replication lag is equivalent to [RPO]({% link {{ page.version.version }}/disaster-recovery-overview.md %}) during a disaster. Calculate the replication lag with this metric. For example, `time.now() - replicated_time_seconds`.
@@ -77,11 +77,11 @@ In the DB Console, you can use:
 
 - The [**Metrics** dashboard]({% link {{ page.version.version }}/ui-overview-dashboard.md %}) for LDR to view metrics for the job on the destination cluster.
 - The [**Jobs** page]({% link {{ page.version.version }}/ui-jobs-page.md %}) to view the history retention job on the source cluster and the LDR job on the destination cluster
 
-The metrics for LDR in the DB Console metrics are at the **cluster** level. This means that if there are multiple LDR jobs running on a cluster the DB Console will either show the average metrics across jobs.
+The metrics for LDR in the DB Console are at the **cluster** level. This means that if there are multiple LDR jobs running on a cluster, the DB Console will show the average metrics across jobs.
 
 ### Metrics dashboard
 
-You can use the **Logical Data Replication** dashboard of the destination cluster to monitor the following metric graphs at the **cluster** level:
+You can use the [**Logical Data Replication** dashboard]({% link {{ page.version.version }}/ui-overview-dashboard.md %}) of the destination cluster to monitor the following metric graphs at the **cluster** level:
 
 - Replication latency
 - Replication lag
@@ -98,7 +98,7 @@ To track replicated time, ingested events, and events added to the DLQ at the **
 
 ### Jobs page
 
-On the **Jobs** page, select:
+On the [**Jobs** page]({% link {{ page.version.version }}/ui-jobs-page.md %}), select:
 
 - The **Replication Producer** in the source cluster's DB Console to view the _history retention job_.
 - The **Logical Replication Ingestion** job in the destination cluster's DB Console. When you start LDR, the **Logical Replication Ingestion** job will show a bar that tracks the initial scan progress of the source table's existing data.
diff --git a/src/current/v24.3/logical-data-replication-overview.md b/src/current/v24.3/logical-data-replication-overview.md
index 9ce2e733fca..14fdeec0e18 100644
--- a/src/current/v24.3/logical-data-replication-overview.md
+++ b/src/current/v24.3/logical-data-replication-overview.md
@@ -18,14 +18,14 @@ Cockroach Labs also has a [physical cluster replication]({% link {{ page.version
 
 You can run LDR in a _unidirectional_ or _bidirectional_ setup to meet different use cases that support:
 
-- [High availability and single-region write latency in two-datacenter deployments](#achieve-high-availability-and-single-region-write-latency-in-two-datacenter-deployments)
+- [High availability and single-region write latency in two-datacenter deployments](#achieve-high-availability-and-single-region-write-latency-in-two-datacenter-2dc-deployments)
 - [Workload isolation between clusters](#achieve-workload-isolation-between-clusters)
 
 {{site.data.alerts.callout_info}}
 For a comparison of CockroachDB high availability and resilience features and tooling, refer to the [Data Resilience]({% link {{ page.version.version }}/data-resilience.md %}) page.
 {{site.data.alerts.end}}
 
-### Achieve high availability and single-region write latency in two-datacenter deployments
+### Achieve high availability and single-region write latency in two-datacenter (2DC) deployments
 
 Maintain [high availability]({% link {{ page.version.version }}/data-resilience.md %}#high-availability) and resilience to region failures with a two-datacenter topology. You can run bidirectional LDR to ensure [data resilience]({% link {{ page.version.version }}/data-resilience.md %}) in your deployment, particularly in datacenter or region failures. If you set up two single-region clusters, in LDR, both clusters can receive application reads and writes with low, single-region write latency.
 Then, in a datacenter, region, or cluster outage, you can redirect application traffic to the surviving cluster with [low downtime]({% link {{ page.version.version }}/data-resilience.md %}#high-availability).
 
 In the following diagram, the two single-region clusters are deployed in US East and West to provide low latency for that region. The two LDR jobs ensure that the tables on both clusters will reach eventual consistency.
@@ -40,10 +40,10 @@ Isolate critical application workloads from non-critical application workloads.
 
 ## Features
 
 - **Table-level replication**: When you initiate LDR, it will replicate all of the source table's existing data to the destination table. From then on, LDR will replicate the source table's data to the destination table to achieve eventual consistency.
-- **Last write wins conflict resolution**: LDR uses [_last write wins (LWW)_ conflict resolution]({% link {{ page.version.version }}/manage-logical-data-replication.md %}#conflict-resolution), which will use the latest MVCC timestamp to resolve a conflict in row insertion.
+- **Last write wins conflict resolution**: LDR uses [_last write wins (LWW)_ conflict resolution]({% link {{ page.version.version }}/manage-logical-data-replication.md %}#conflict-resolution), which will use the latest [MVCC]({% link {{ page.version.version }}/architecture/storage-layer.md %}#mvcc) timestamp to resolve a conflict in row insertion.
 - **Dead letter queue (DLQ)**: When LDR starts, the job will create a [DLQ table]({% link {{ page.version.version }}/manage-logical-data-replication.md %}#dead-letter-queue-dlq) with each replicating table in order to track unresolved conflicts. You can interact and manage this table like any other SQL table.
 - **Replication modes**: LDR offers different _modes_ that apply data differently during replication, which allows you to consider optimizing for throughput or constraints during replication.
-- **Monitoring**: To [monitor]({% link {{ page.version.version }}/logical-data-replication-monitoring.md %}) LDR's initial progress, current status, and performance, you can metrics available in the DB Console, Prometheus, and Metrics Export.
+- **Monitoring**: To [monitor]({% link {{ page.version.version }}/logical-data-replication-monitoring.md %}) LDR's initial progress, current status, and performance, you can view metrics available in the DB Console, Prometheus, and Metrics Export.
 
 ## Get started