
Commit

Merge remote-tracking branch 'origin/main' into v23.2.19
florence-crl committed Jan 6, 2025
2 parents 855d6f4 + a66a825 commit 17f7b50
Showing 9 changed files with 0 additions and 36 deletions.
4 changes: 0 additions & 4 deletions src/current/v21.2/common-issues-to-monitor.md
@@ -257,10 +257,6 @@ Insufficient disk I/O can cause [poor SQL performance](#service-latency) and pot

- The Linux tool `iostat` (part of `sysstat`) can be used to monitor IOPS. In the device status output, `avgqu-sz` corresponds to the **Disk Ops In Progress** metric. If service times persist in double digits on any node, this means that your storage device is saturated and is likely under-provisioned or misconfigured.

{{site.data.alerts.callout_success}}
Ensure that you [properly configure storage](#storage-and-disk-monitoring) to prevent I/O bottlenecks. Afterward, if service times consistently exceed 1-5 ms, you can add more devices or expand the cluster to reduce the disk latency.
{{site.data.alerts.end}}

With insufficient disk I/O, you may also see:

- Degradation in [SQL response time](#service-latency).
4 changes: 0 additions & 4 deletions src/current/v22.1/common-issues-to-monitor.md
@@ -261,10 +261,6 @@ Insufficient disk I/O can cause [poor SQL performance](#service-latency) and pot

- The Linux tool `iostat` (part of `sysstat`) can be used to monitor IOPS. In the device status output, `avgqu-sz` corresponds to the **Disk Ops In Progress** metric. If service times persist in double digits on any node, this means that your storage device is saturated and is likely under-provisioned or misconfigured.

{{site.data.alerts.callout_success}}
Ensure that you [properly configure storage](#storage-and-disk-monitoring) to prevent I/O bottlenecks. Afterward, if service times consistently exceed 1-5 ms, you can add more devices or expand the cluster to reduce the disk latency.
{{site.data.alerts.end}}

With insufficient disk I/O, you may also see:

- Degradation in [SQL response time](#service-latency).
4 changes: 0 additions & 4 deletions src/current/v22.2/common-issues-to-monitor.md
@@ -287,10 +287,6 @@ Insufficient disk I/O can cause [poor SQL performance](#service-latency) and pot

- The Linux tool `iostat` (part of `sysstat`) can be used to monitor IOPS. In the device status output, `avgqu-sz` corresponds to the **Disk Ops In Progress** metric. If service times persist in double digits on any node, this means that your storage device is saturated and is likely under-provisioned or misconfigured.

{{site.data.alerts.callout_success}}
Ensure that you [properly configure storage](#storage-and-disk-monitoring) to prevent I/O bottlenecks. Afterward, if service times consistently exceed 1-5 ms, you can add more devices or expand the cluster to reduce the disk latency.
{{site.data.alerts.end}}

With insufficient disk I/O, you may also see:

- Degradation in [SQL response time](#service-latency).
4 changes: 0 additions & 4 deletions src/current/v23.1/common-issues-to-monitor.md
@@ -293,10 +293,6 @@ Insufficient disk I/O can cause [poor SQL performance](#service-latency) and pot

- The Linux tool `iostat` (part of `sysstat`) can be used to monitor IOPS. In the device status output, `avgqu-sz` corresponds to the **Disk Ops In Progress** metric. If service times persist in double digits on any node, this means that your storage device is saturated and is likely under-provisioned or misconfigured.

{{site.data.alerts.callout_success}}
Ensure that you [properly configure storage](#storage-and-disk-monitoring) to prevent I/O bottlenecks. Afterward, if service times consistently exceed 1-5 ms, you can add more devices or expand the cluster to reduce the disk latency.
{{site.data.alerts.end}}

With insufficient disk I/O, you may also see:

- Degradation in [SQL response time](#service-latency).
4 changes: 0 additions & 4 deletions src/current/v23.2/common-issues-to-monitor.md
@@ -297,10 +297,6 @@ Insufficient disk I/O can cause [poor SQL performance](#service-latency) and pot

- The Linux tool `iostat` (part of `sysstat`) can be used to monitor IOPS. In the device status output, `avgqu-sz` corresponds to the **Disk Ops In Progress** metric. If service times persist in double digits on any node, this means that your storage device is saturated and is likely under-provisioned or misconfigured.

{{site.data.alerts.callout_success}}
Ensure that you [properly configure storage](#storage-and-disk-monitoring) to prevent I/O bottlenecks. Afterward, if service times consistently exceed 1-5 ms, you can add more devices or expand the cluster to reduce the disk latency.
{{site.data.alerts.end}}

With insufficient disk I/O, you may also see:

- Degradation in [SQL response time](#service-latency).
4 changes: 0 additions & 4 deletions src/current/v24.1/common-issues-to-monitor.md
@@ -297,10 +297,6 @@ Insufficient disk I/O can cause [poor SQL performance](#service-latency) and pot

- The Linux tool `iostat` (part of `sysstat`) can be used to monitor IOPS. In the device status output, `avgqu-sz` corresponds to the **Disk Ops In Progress** metric. If service times persist in double digits on any node, this means that your storage device is saturated and is likely under-provisioned or misconfigured.

{{site.data.alerts.callout_success}}
Ensure that you [properly configure storage](#storage-and-disk-monitoring) to prevent I/O bottlenecks. Afterward, if service times consistently exceed 1-5 ms, you can add more devices or expand the cluster to reduce the disk latency.
{{site.data.alerts.end}}

With insufficient disk I/O, you may also see:

- Degradation in [SQL response time](#service-latency).
4 changes: 0 additions & 4 deletions src/current/v24.2/common-issues-to-monitor.md
@@ -297,10 +297,6 @@ Insufficient disk I/O can cause [poor SQL performance](#service-latency) and pot

- The Linux tool `iostat` (part of `sysstat`) can be used to monitor IOPS. In the device status output, `avgqu-sz` corresponds to the **Disk Ops In Progress** metric. If service times persist in double digits on any node, this means that your storage device is saturated and is likely under-provisioned or misconfigured.

{{site.data.alerts.callout_success}}
Ensure that you [properly configure storage](#storage-and-disk-monitoring) to prevent I/O bottlenecks. Afterward, if service times consistently exceed 1-5 ms, you can add more devices or expand the cluster to reduce the disk latency.
{{site.data.alerts.end}}

With insufficient disk I/O, you may also see:

- Degradation in [SQL response time](#service-latency).
4 changes: 0 additions & 4 deletions src/current/v24.3/common-issues-to-monitor.md
@@ -297,10 +297,6 @@ Insufficient disk I/O can cause [poor SQL performance](#service-latency) and pot

- The Linux tool `iostat` (part of `sysstat`) can be used to monitor IOPS. In the device status output, `avgqu-sz` corresponds to the **Disk Ops In Progress** metric. If service times persist in double digits on any node, this means that your storage device is saturated and is likely under-provisioned or misconfigured.

{{site.data.alerts.callout_success}}
Ensure that you [properly configure storage](#storage-and-disk-monitoring) to prevent I/O bottlenecks. Afterward, if service times consistently exceed 1-5 ms, you can add more devices or expand the cluster to reduce the disk latency.
{{site.data.alerts.end}}

With insufficient disk I/O, you may also see:

- Degradation in [SQL response time](#service-latency).
4 changes: 0 additions & 4 deletions src/current/v25.1/common-issues-to-monitor.md
@@ -297,10 +297,6 @@ Insufficient disk I/O can cause [poor SQL performance](#service-latency) and pot

- The Linux tool `iostat` (part of `sysstat`) can be used to monitor IOPS. In the device status output, `avgqu-sz` corresponds to the **Disk Ops In Progress** metric. If service times persist in double digits on any node, this means that your storage device is saturated and is likely under-provisioned or misconfigured.

{{site.data.alerts.callout_success}}
Ensure that you [properly configure storage](#storage-and-disk-monitoring) to prevent I/O bottlenecks. Afterward, if service times consistently exceed 1-5 ms, you can add more devices or expand the cluster to reduce the disk latency.
{{site.data.alerts.end}}

With insufficient disk I/O, you may also see:

- Degradation in [SQL response time](#service-latency).
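The diff hunks above all reference the same advice: watch `iostat -x` output and treat a sustained `avgqu-sz` (the **Disk Ops In Progress** metric) or double-digit service times as a sign of a saturated device. As a minimal sketch of how that check could be automated, the hypothetical helper below parses sample `iostat -x` device lines and flags saturated devices. The `SAMPLE` text, the `saturated_devices` function, and the threshold of 10 are illustrative assumptions, not part of the docs or of `sysstat`; column names and layout vary across `sysstat` versions.

```python
# Hypothetical helper: flag devices whose avgqu-sz (average request
# queue length) exceeds a threshold in `iostat -x` style output.
# SAMPLE is fabricated example output for illustration only.
SAMPLE = """Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 2.00 1.00 5.00 32.00 128.00 26.67 12.40 15.30 4.00 17.60 1.20 85.00
"""

def saturated_devices(iostat_text, queue_threshold=10.0):
    """Return names of devices whose avgqu-sz exceeds queue_threshold."""
    lines = iostat_text.strip().splitlines()
    header = lines[0].split()
    queue_col = header.index("avgqu-sz")  # locate the queue-size column
    flagged = []
    for line in lines[1:]:
        fields = line.split()
        if len(fields) > queue_col and float(fields[queue_col]) > queue_threshold:
            flagged.append(fields[0])  # device name is the first field
    return flagged
```

For example, `saturated_devices(SAMPLE)` flags `sda`, whose queue size of 12.40 exceeds the assumed threshold; raising the threshold above that value flags nothing.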
