From d0031aa467b36d85ba101c97f17d4cf3364dbed7 Mon Sep 17 00:00:00 2001
From: wafuwafu13
Date: Mon, 8 Jul 2024 17:22:02 +0100
Subject: [PATCH] Fix 7 typos

---
 content/cost_optimization/cost_opt_compute.md | 2 +-
 content/scalability/docs/quotas.md | 2 +-
 content/upgrades/index.md | 12 ++++++------
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/content/cost_optimization/cost_opt_compute.md b/content/cost_optimization/cost_opt_compute.md
index e05d239eb..e9852bf03 100644
--- a/content/cost_optimization/cost_opt_compute.md
+++ b/content/cost_optimization/cost_opt_compute.md
@@ -55,7 +55,7 @@ The Kubernetes Cluster Autoscaler works by scaling groups of nodes — called a

 You can have multiple node groups and the Cluster Autoscaler can be configured to set priority scaling levels and each node group can contain different sized nodes. Node groups can have different capacity types and the priority expander can be used to scale less expensive groups first.

-Below is an example of a snippet of cluster configuration that uses a `ConfigMap`` to prioritize reserved capacity before using on-demand instances. You can use the same technique to prioritize Graviton or Spot Instances over other types.
+Below is an example of a snippet of cluster configuration that uses a `ConfigMap` to prioritize reserved capacity before using on-demand instances. You can use the same technique to prioritize Graviton or Spot Instances over other types.

 ```yaml
 apiVersion: eksctl.io/v1alpha5
diff --git a/content/scalability/docs/quotas.md b/content/scalability/docs/quotas.md
index 3b8c435ae..63ed01e12 100644
--- a/content/scalability/docs/quotas.md
+++ b/content/scalability/docs/quotas.md
@@ -79,7 +79,7 @@ You can review the EC2 rate limit defaults and the steps to request a rate limit

 * Some [Nitro instance types have a volume attachment limit of 28](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_limits.html#instance-type-volume-limits) that is shared between Amazon EBS volumes, network interfaces, and NVMe instance store volumes. If your workloads are mounting numerous EBS volumes you may encounter limits to the pod density you can achieve with these instance types

-* There is a maximum number of connections that can be tracked per Ec2 instance. [If your workloads are handling a large number of connections you may see communication failures or errors because this maximum has been hit.](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-connection-tracking.html#connection-tracking-throttling) You can use the `conntrack_allowance_available` and `conntrack_allowance_exceeded` [network performance metrics to monitor the number of tracked connections on your EKS worker nodes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-network-performance-ena.html).
+* There is a maximum number of connections that can be tracked per EC2 instance. [If your workloads are handling a large number of connections you may see communication failures or errors because this maximum has been hit.](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-connection-tracking.html#connection-tracking-throttling) You can use the `conntrack_allowance_available` and `conntrack_allowance_exceeded` [network performance metrics to monitor the number of tracked connections on your EKS worker nodes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-network-performance-ena.html).
 * In EKS environment, etcd storage limit is **8 GiB** as per [upstream guidance](https://etcd.io/docs/v3.5/dev-guide/limit/#storage-size-limit). Please monitor metric `etcd_db_total_size_in_bytes` to track etcd db size. You can refer to [alert rules](https://github.com/etcd-io/etcd/blob/main/contrib/mixin/mixin.libsonnet#L213-L240) `etcdBackendQuotaLowSpace` and `etcdExcessiveDatabaseGrowth` to setup this monitoring.

diff --git a/content/upgrades/index.md b/content/upgrades/index.md
index 587890ea2..8c2550a43 100644
--- a/content/upgrades/index.md
+++ b/content/upgrades/index.md
@@ -37,7 +37,7 @@ Additionally, review the upstream [Kubernetes release information](https://kuber

 ## Understand how the shared responsibility model applies to cluster upgrades

-You are responsible for initiating upgrade for both cluster control plane as well as the data plane. [Learn how to initiate an upgrade.](https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html) When you initiate a cluster upgrade, AWS manages upgrading the cluster control plane. You are responsible for upgrading the data plane, including Fargate pods and [other add-ons.](#upgrade-add-ons-and-components-using-the-kubernetes-api) You must validate and plan upgrades for workloads running on your cluster to ensure their availability and operations are not impacted after cluster upgrade
+You are responsible for initiating upgrade for both cluster control plane as well as the data plane. [Learn how to initiate an upgrade.](https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html) When you initiate a cluster upgrade, AWS manages upgrading the cluster control plane. You are responsible for upgrading the data plane, including Fargate pods and [other add-ons.](#upgrade-add-ons-and-components-using-the-kubernetes-api) You must validate and plan upgrades for workloads running on your cluster to ensure their availability and operations are not impacted after cluster upgrade.

 ## Upgrade clusters in-place

@@ -174,8 +174,8 @@ Amazon EKS automatically installs add-ons such as the Amazon VPC CNI plugin for
 You can use Amazon EKS Add-ons to update versions with a single command. For Example:

 ```
-aws eks update-addon —cluster-name my-cluster —addon-name vpc-cni —addon-version version-number \
---service-account-role-arn arn:aws:iam::111122223333:role/role-name —configuration-values '{}' —resolve-conflicts PRESERVE
+aws eks update-addon --cluster-name my-cluster --addon-name vpc-cni --addon-version version-number \
+--service-account-role-arn arn:aws:iam::111122223333:role/role-name --configuration-values '{}' --resolve-conflicts PRESERVE
 ```

 Check if you have any EKS Add-ons with:
@@ -440,12 +440,12 @@ Before proceeding with a Kubernetes upgrade in Amazon EKS, it's vital to ensure

 Karpenter’s [Drift](https://karpenter.sh/docs/concepts/disruption/#drift) can automatically upgrade the Karpenter-provisioned nodes to stay in-sync with the EKS control plane. Refer to [How to upgrade an EKS Cluster with Karpenter](https://karpenter.sh/docs/faq/#how-do-i-upgrade-an-eks-cluster-with-karpenter) for more details.

-This means that if the AMI ID specified in the Karpenter EC2 Nodeclass is updated, Karpenter will detect the drift and start replacing the nodes with the new AMI.
+This means that if the AMI ID specified in the Karpenter EC2NodeClass is updated, Karpenter will detect the drift and start replacing the nodes with the new AMI.
 To understand how Karpenter manages AMIs and the different options available to Karpenter users to control the AMI upgrade process see the documentation on [how to manage AMIs in Karpenter](https://karpenter.sh/docs/tasks/managing-amis/).

 ## Use ExpireAfter for Karpenter managed nodes

-Karpenter will mark nodes as expired and disrupt them after they have lived the duration specified in `spec.disruption.expireAfter. This node expiry helps to reduce security vulnerabilities and issues that can arise from long-running nodes, such as file fragmentation or memory leaks. When you set a value for expireAfter in your NodePool, this activates node expiry. For more information, see [Disruption](https://karpenter.sh/docs/concepts/disruption/#methods) on the Karpenter website.
+Karpenter will mark nodes as expired and disrupt them after they have lived the duration specified in `spec.disruption.expireAfter`. This node expiry helps to reduce security vulnerabilities and issues that can arise from long-running nodes, such as file fragmentation or memory leaks. When you set a value for expireAfter in your NodePool, this activates node expiry. For more information, see [Disruption](https://karpenter.sh/docs/concepts/disruption/#methods) on the Karpenter website.

 If you're using automatic AMI upgrades, ExpireAfter can periodically refresh and upgrade your nodes.
@@ -484,7 +484,7 @@ Benefits include:

 * Possible to change multiple EKS versions at once (e.g. 1.23 to 1.25)
 * Able to switch back to the old cluster
-* Creates a new cluster which may be managed with newer systems (e.g. terraform)
+* Creates a new cluster which may be managed with newer systems (e.g. Terraform)
 * Workloads can be migrated individually

 Some downsides include: