From a66a825502b9eb1e29203e3564647da115f1195e Mon Sep 17 00:00:00 2001
From: Ryan Kuo <8740013+taroface@users.noreply.github.com>
Date: Mon, 6 Jan 2025 14:40:16 -0500
Subject: [PATCH] remove confusing disk IOPs guidance (#19281)

---
 src/current/v21.2/common-issues-to-monitor.md | 4 ----
 src/current/v22.1/common-issues-to-monitor.md | 4 ----
 src/current/v22.2/common-issues-to-monitor.md | 4 ----
 src/current/v23.1/common-issues-to-monitor.md | 4 ----
 src/current/v23.2/common-issues-to-monitor.md | 4 ----
 src/current/v24.1/common-issues-to-monitor.md | 4 ----
 src/current/v24.2/common-issues-to-monitor.md | 4 ----
 src/current/v24.3/common-issues-to-monitor.md | 4 ----
 src/current/v25.1/common-issues-to-monitor.md | 4 ----
 9 files changed, 36 deletions(-)

diff --git a/src/current/v21.2/common-issues-to-monitor.md b/src/current/v21.2/common-issues-to-monitor.md
index 1f3ad27227e..1dc10537705 100644
--- a/src/current/v21.2/common-issues-to-monitor.md
+++ b/src/current/v21.2/common-issues-to-monitor.md
@@ -257,10 +257,6 @@ Insufficient disk I/O can cause [poor SQL performance](#service-latency) and pot
 
 - The Linux tool `iostat` (part of `sysstat`) can be used to monitor IOPS. In the device status output, `avgqu-sz` corresponds to the **Disk Ops In Progress** metric. If service times persist in double digits on any node, this means that your storage device is saturated and is likely under-provisioned or misconfigured.
 
-{{site.data.alerts.callout_success}}
-Ensure that you [properly configure storage](#storage-and-disk-monitoring) to prevent I/O bottlenecks. Afterward, if service times consistently exceed 1-5 ms, you can add more devices or expand the cluster to reduce the disk latency.
-{{site.data.alerts.end}}
-
 With insufficient disk I/O, you may also see:
 
 - Degradation in [SQL response time](#service-latency).
diff --git a/src/current/v22.1/common-issues-to-monitor.md b/src/current/v22.1/common-issues-to-monitor.md
index 2f5f1e076ae..681450e1f18 100644
--- a/src/current/v22.1/common-issues-to-monitor.md
+++ b/src/current/v22.1/common-issues-to-monitor.md
@@ -261,10 +261,6 @@ Insufficient disk I/O can cause [poor SQL performance](#service-latency) and pot
 
 - The Linux tool `iostat` (part of `sysstat`) can be used to monitor IOPS. In the device status output, `avgqu-sz` corresponds to the **Disk Ops In Progress** metric. If service times persist in double digits on any node, this means that your storage device is saturated and is likely under-provisioned or misconfigured.
 
-{{site.data.alerts.callout_success}}
-Ensure that you [properly configure storage](#storage-and-disk-monitoring) to prevent I/O bottlenecks. Afterward, if service times consistently exceed 1-5 ms, you can add more devices or expand the cluster to reduce the disk latency.
-{{site.data.alerts.end}}
-
 With insufficient disk I/O, you may also see:
 
 - Degradation in [SQL response time](#service-latency).
diff --git a/src/current/v22.2/common-issues-to-monitor.md b/src/current/v22.2/common-issues-to-monitor.md
index f6010aaffdf..cfcf93f5a9f 100644
--- a/src/current/v22.2/common-issues-to-monitor.md
+++ b/src/current/v22.2/common-issues-to-monitor.md
@@ -287,10 +287,6 @@ Insufficient disk I/O can cause [poor SQL performance](#service-latency) and pot
 
 - The Linux tool `iostat` (part of `sysstat`) can be used to monitor IOPS. In the device status output, `avgqu-sz` corresponds to the **Disk Ops In Progress** metric. If service times persist in double digits on any node, this means that your storage device is saturated and is likely under-provisioned or misconfigured.
 
-{{site.data.alerts.callout_success}}
-Ensure that you [properly configure storage](#storage-and-disk-monitoring) to prevent I/O bottlenecks. Afterward, if service times consistently exceed 1-5 ms, you can add more devices or expand the cluster to reduce the disk latency.
-{{site.data.alerts.end}}
-
 With insufficient disk I/O, you may also see:
 
 - Degradation in [SQL response time](#service-latency).
diff --git a/src/current/v23.1/common-issues-to-monitor.md b/src/current/v23.1/common-issues-to-monitor.md
index 80cdcec50d4..2f72c5ea3d1 100644
--- a/src/current/v23.1/common-issues-to-monitor.md
+++ b/src/current/v23.1/common-issues-to-monitor.md
@@ -293,10 +293,6 @@ Insufficient disk I/O can cause [poor SQL performance](#service-latency) and pot
 
 - The Linux tool `iostat` (part of `sysstat`) can be used to monitor IOPS. In the device status output, `avgqu-sz` corresponds to the **Disk Ops In Progress** metric. If service times persist in double digits on any node, this means that your storage device is saturated and is likely under-provisioned or misconfigured.
 
-{{site.data.alerts.callout_success}}
-Ensure that you [properly configure storage](#storage-and-disk-monitoring) to prevent I/O bottlenecks. Afterward, if service times consistently exceed 1-5 ms, you can add more devices or expand the cluster to reduce the disk latency.
-{{site.data.alerts.end}}
-
 With insufficient disk I/O, you may also see:
 
 - Degradation in [SQL response time](#service-latency).
diff --git a/src/current/v23.2/common-issues-to-monitor.md b/src/current/v23.2/common-issues-to-monitor.md
index 2a0c45f46bb..884d4367870 100644
--- a/src/current/v23.2/common-issues-to-monitor.md
+++ b/src/current/v23.2/common-issues-to-monitor.md
@@ -297,10 +297,6 @@ Insufficient disk I/O can cause [poor SQL performance](#service-latency) and pot
 
 - The Linux tool `iostat` (part of `sysstat`) can be used to monitor IOPS. In the device status output, `avgqu-sz` corresponds to the **Disk Ops In Progress** metric. If service times persist in double digits on any node, this means that your storage device is saturated and is likely under-provisioned or misconfigured.
 
-{{site.data.alerts.callout_success}}
-Ensure that you [properly configure storage](#storage-and-disk-monitoring) to prevent I/O bottlenecks. Afterward, if service times consistently exceed 1-5 ms, you can add more devices or expand the cluster to reduce the disk latency.
-{{site.data.alerts.end}}
-
 With insufficient disk I/O, you may also see:
 
 - Degradation in [SQL response time](#service-latency).
diff --git a/src/current/v24.1/common-issues-to-monitor.md b/src/current/v24.1/common-issues-to-monitor.md
index 239ab6f9a37..369bdc8dbf5 100644
--- a/src/current/v24.1/common-issues-to-monitor.md
+++ b/src/current/v24.1/common-issues-to-monitor.md
@@ -297,10 +297,6 @@ Insufficient disk I/O can cause [poor SQL performance](#service-latency) and pot
 
 - The Linux tool `iostat` (part of `sysstat`) can be used to monitor IOPS. In the device status output, `avgqu-sz` corresponds to the **Disk Ops In Progress** metric. If service times persist in double digits on any node, this means that your storage device is saturated and is likely under-provisioned or misconfigured.
 
-{{site.data.alerts.callout_success}}
-Ensure that you [properly configure storage](#storage-and-disk-monitoring) to prevent I/O bottlenecks. Afterward, if service times consistently exceed 1-5 ms, you can add more devices or expand the cluster to reduce the disk latency.
-{{site.data.alerts.end}}
-
 With insufficient disk I/O, you may also see:
 
 - Degradation in [SQL response time](#service-latency).
diff --git a/src/current/v24.2/common-issues-to-monitor.md b/src/current/v24.2/common-issues-to-monitor.md
index b937fefd77b..cfe4209c74c 100644
--- a/src/current/v24.2/common-issues-to-monitor.md
+++ b/src/current/v24.2/common-issues-to-monitor.md
@@ -297,10 +297,6 @@ Insufficient disk I/O can cause [poor SQL performance](#service-latency) and pot
 
 - The Linux tool `iostat` (part of `sysstat`) can be used to monitor IOPS. In the device status output, `avgqu-sz` corresponds to the **Disk Ops In Progress** metric. If service times persist in double digits on any node, this means that your storage device is saturated and is likely under-provisioned or misconfigured.
 
-{{site.data.alerts.callout_success}}
-Ensure that you [properly configure storage](#storage-and-disk-monitoring) to prevent I/O bottlenecks. Afterward, if service times consistently exceed 1-5 ms, you can add more devices or expand the cluster to reduce the disk latency.
-{{site.data.alerts.end}}
-
 With insufficient disk I/O, you may also see:
 
 - Degradation in [SQL response time](#service-latency).
diff --git a/src/current/v24.3/common-issues-to-monitor.md b/src/current/v24.3/common-issues-to-monitor.md
index f76275ce585..9b6ee3e3ef8 100644
--- a/src/current/v24.3/common-issues-to-monitor.md
+++ b/src/current/v24.3/common-issues-to-monitor.md
@@ -297,10 +297,6 @@ Insufficient disk I/O can cause [poor SQL performance](#service-latency) and pot
 
 - The Linux tool `iostat` (part of `sysstat`) can be used to monitor IOPS. In the device status output, `avgqu-sz` corresponds to the **Disk Ops In Progress** metric. If service times persist in double digits on any node, this means that your storage device is saturated and is likely under-provisioned or misconfigured.
 
-{{site.data.alerts.callout_success}}
-Ensure that you [properly configure storage](#storage-and-disk-monitoring) to prevent I/O bottlenecks. Afterward, if service times consistently exceed 1-5 ms, you can add more devices or expand the cluster to reduce the disk latency.
-{{site.data.alerts.end}}
-
 With insufficient disk I/O, you may also see:
 
 - Degradation in [SQL response time](#service-latency).
diff --git a/src/current/v25.1/common-issues-to-monitor.md b/src/current/v25.1/common-issues-to-monitor.md
index f76275ce585..9b6ee3e3ef8 100644
--- a/src/current/v25.1/common-issues-to-monitor.md
+++ b/src/current/v25.1/common-issues-to-monitor.md
@@ -297,10 +297,6 @@ Insufficient disk I/O can cause [poor SQL performance](#service-latency) and pot
 
 - The Linux tool `iostat` (part of `sysstat`) can be used to monitor IOPS. In the device status output, `avgqu-sz` corresponds to the **Disk Ops In Progress** metric. If service times persist in double digits on any node, this means that your storage device is saturated and is likely under-provisioned or misconfigured.
 
-{{site.data.alerts.callout_success}}
-Ensure that you [properly configure storage](#storage-and-disk-monitoring) to prevent I/O bottlenecks. Afterward, if service times consistently exceed 1-5 ms, you can add more devices or expand the cluster to reduce the disk latency.
-{{site.data.alerts.end}}
-
 With insufficient disk I/O, you may also see:
 
 - Degradation in [SQL response time](#service-latency).
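
The retained context bullet in each of these files describes monitoring IOPS with `iostat`. For reviewers, a minimal sketch of that workflow, assuming `sysstat` is installed (exact column names vary across `sysstat` versions):

```shell
# Report extended per-device statistics every 5 seconds
# (-d: device report only, -x: extended stats).
# Omitting a device name reports on all devices.
iostat -dx 5

# Columns to watch, per the retained guidance:
#   avgqu-sz (aqu-sz in newer sysstat): average request queue length,
#       which corresponds to the Disk Ops In Progress metric.
#   await / r_await / w_await: average service time per request in ms;
#       values persisting in double digits suggest a saturated device.
```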