Fix 7 typos #542

Open · wants to merge 1 commit into base `master`

content/cost_optimization/cost_opt_compute.md (2 changes: 1 addition & 1 deletion)

@@ -55,7 +55,7 @@ The Kubernetes Cluster Autoscaler works by scaling groups of nodes — called a

You can have multiple node groups and the Cluster Autoscaler can be configured to set priority scaling levels and each node group can contain different sized nodes. Node groups can have different capacity types and the priority expander can be used to scale less expensive groups first.

- Below is an example of a snippet of cluster configuration that uses a `ConfigMap`` to prioritize reserved capacity before using on-demand instances. You can use the same technique to prioritize Graviton or Spot Instances over other types.
+ Below is an example of a snippet of cluster configuration that uses a `ConfigMap` to prioritize reserved capacity before using on-demand instances. You can use the same technique to prioritize Graviton or Spot Instances over other types.

```yaml
apiVersion: eksctl.io/v1alpha5
# (remainder of this snippet is collapsed in the diff view)
```
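
The hunk above references the Cluster Autoscaler priority expander. For context, a minimal sketch of the priority-expander `ConfigMap` itself (the node-group name patterns are illustrative assumptions; higher numbers win):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # The priority expander looks for a ConfigMap with this exact name
  # in the namespace where the Cluster Autoscaler runs.
  name: cluster-autoscaler-priority-expander
  namespace: kube-system
data:
  priorities: |-
    50:
      - .*-reserved-.*   # scale node groups with reserved capacity first
    10:
      - .*               # fall back to any other node group
```

The same pattern works for preferring Graviton or Spot node groups: give their name patterns a higher priority than the groups they should displace.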

content/scalability/docs/quotas.md (2 changes: 1 addition & 1 deletion)

@@ -79,7 +79,7 @@ You can review the EC2 rate limit defaults and the steps to request a rate limit

* Some [Nitro instance types have a volume attachment limit of 28](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_limits.html#instance-type-volume-limits) that is shared between Amazon EBS volumes, network interfaces, and NVMe instance store volumes. If your workloads are mounting numerous EBS volumes you may encounter limits to the pod density you can achieve with these instance types

- * There is a maximum number of connections that can be tracked per Ec2 instance. [If your workloads are handling a large number of connections you may see communication failures or errors because this maximum has been hit.](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-connection-tracking.html#connection-tracking-throttling) You can use the `conntrack_allowance_available` and `conntrack_allowance_exceeded` [network performance metrics to monitor the number of tracked connections on your EKS worker nodes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-network-performance-ena.html).
+ * There is a maximum number of connections that can be tracked per EC2 instance. [If your workloads are handling a large number of connections you may see communication failures or errors because this maximum has been hit.](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-connection-tracking.html#connection-tracking-throttling) You can use the `conntrack_allowance_available` and `conntrack_allowance_exceeded` [network performance metrics to monitor the number of tracked connections on your EKS worker nodes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-network-performance-ena.html).


* In EKS environment, etcd storage limit is **8 GiB** as per [upstream guidance](https://etcd.io/docs/v3.5/dev-guide/limit/#storage-size-limit). Please monitor metric `etcd_db_total_size_in_bytes` to track etcd db size. You can refer to [alert rules](https://github.com/etcd-io/etcd/blob/main/contrib/mixin/mixin.libsonnet#L213-L240) `etcdBackendQuotaLowSpace` and `etcdExcessiveDatabaseGrowth` to setup this monitoring.
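
As an illustration of the monitoring the hunk above recommends, a Prometheus alerting rule along these lines (the metric name comes from the quoted text; the 80% threshold is an assumption) can track etcd growth against the 8 GiB limit:

```yaml
groups:
  - name: etcd-storage
    rules:
      - alert: EtcdBackendQuotaLowSpace
        # Fires when the etcd database exceeds 80% of the 8 GiB EKS storage limit.
        expr: etcd_db_total_size_in_bytes > 0.8 * 8 * 1024 * 1024 * 1024
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: etcd database size is approaching the 8 GiB EKS storage limit
```

The conntrack allowance metrics mentioned in the same hunk are exposed by the ENA driver and can be inspected on a worker node with, for example, `ethtool -S eth0 | grep conntrack` (interface name assumed).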

content/upgrades/index.md (12 changes: 6 additions & 6 deletions)

@@ -37,7 +37,7 @@ Additionally, review the upstream [Kubernetes release information](https://kuber

## Understand how the shared responsibility model applies to cluster upgrades

- You are responsible for initiating upgrade for both cluster control plane as well as the data plane. [Learn how to initiate an upgrade.](https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html) When you initiate a cluster upgrade, AWS manages upgrading the cluster control plane. You are responsible for upgrading the data plane, including Fargate pods and [other add-ons.](#upgrade-add-ons-and-components-using-the-kubernetes-api) You must validate and plan upgrades for workloads running on your cluster to ensure their availability and operations are not impacted after cluster upgrade
+ You are responsible for initiating upgrade for both cluster control plane as well as the data plane. [Learn how to initiate an upgrade.](https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html) When you initiate a cluster upgrade, AWS manages upgrading the cluster control plane. You are responsible for upgrading the data plane, including Fargate pods and [other add-ons.](#upgrade-add-ons-and-components-using-the-kubernetes-api) You must validate and plan upgrades for workloads running on your cluster to ensure their availability and operations are not impacted after cluster upgrade.

## Upgrade clusters in-place

@@ -174,8 +174,8 @@ Amazon EKS automatically installs add-ons such as the Amazon VPC CNI plugin for
You can use Amazon EKS Add-ons to update versions with a single command. For Example:

```
- aws eks update-addon cluster-name my-cluster addon-name vpc-cni addon-version version-number \
- --service-account-role-arn arn:aws:iam::111122223333:role/role-name configuration-values '{}' resolve-conflicts PRESERVE
+ aws eks update-addon --cluster-name my-cluster --addon-name vpc-cni --addon-version version-number \
+ --service-account-role-arn arn:aws:iam::111122223333:role/role-name --configuration-values '{}' --resolve-conflicts PRESERVE
```

Check if you have any EKS Add-ons with:
@@ -440,12 +440,12 @@ Before proceeding with a Kubernetes upgrade in Amazon EKS, it's vital to ensure

Karpenter’s [Drift](https://karpenter.sh/docs/concepts/disruption/#drift) can automatically upgrade the Karpenter-provisioned nodes to stay in-sync with the EKS control plane. Refer to [How to upgrade an EKS Cluster with Karpenter](https://karpenter.sh/docs/faq/#how-do-i-upgrade-an-eks-cluster-with-karpenter) for more details.

- This means that if the AMI ID specified in the Karpenter EC2 Nodeclass is updated, Karpenter will detect the drift and start replacing the nodes with the new AMI.
+ This means that if the AMI ID specified in the Karpenter EC2NodeClass is updated, Karpenter will detect the drift and start replacing the nodes with the new AMI.
To understand how Karpenter manages AMIs and the different options available to Karpenter users to control the AMI upgrade process see the documentation on [how to manage AMIs in Karpenter](https://karpenter.sh/docs/tasks/managing-amis/).
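
To make the drift behavior above concrete, here is a minimal sketch of pinning an AMI in an `EC2NodeClass` (the API version, AMI ID, role name, and discovery tags are illustrative assumptions); updating the pinned ID is what triggers the drift-driven node replacement described above:

```yaml
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiFamily: AL2023
  amiSelectorTerms:
    # Updating this AMI ID causes Karpenter to detect drift and
    # roll the nodes provisioned from this EC2NodeClass.
    - id: ami-0123456789abcdef0
  role: KarpenterNodeRole-my-cluster   # assumed IAM role name
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: my-cluster
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: my-cluster
```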

## Use ExpireAfter for Karpenter managed nodes

- Karpenter will mark nodes as expired and disrupt them after they have lived the duration specified in `spec.disruption.expireAfter. This node expiry helps to reduce security vulnerabilities and issues that can arise from long-running nodes, such as file fragmentation or memory leaks. When you set a value for expireAfter in your NodePool, this activates node expiry. For more information, see [Disruption](https://karpenter.sh/docs/concepts/disruption/#methods) on the Karpenter website.
+ Karpenter will mark nodes as expired and disrupt them after they have lived the duration specified in `spec.disruption.expireAfter`. This node expiry helps to reduce security vulnerabilities and issues that can arise from long-running nodes, such as file fragmentation or memory leaks. When you set a value for expireAfter in your NodePool, this activates node expiry. For more information, see [Disruption](https://karpenter.sh/docs/concepts/disruption/#methods) on the Karpenter website.

If you're using automatic AMI upgrades, ExpireAfter can periodically refresh and upgrade your nodes.
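
As a sketch of the setting discussed above (the NodePool name, duration, and API version are assumptions), node expiry is enabled by giving `expireAfter` a value in the NodePool:

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  disruption:
    # Nodes older than 30 days are marked expired and replaced; combined
    # with automatic AMI selection this periodically refreshes the fleet.
    expireAfter: 720h
  template:
    spec:
      nodeClassRef:
        name: default
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
```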

@@ -484,7 +484,7 @@ Benefits include:

* Possible to change multiple EKS versions at once (e.g. 1.23 to 1.25)
* Able to switch back to the old cluster
- * Creates a new cluster which may be managed with newer systems (e.g. terraform)
+ * Creates a new cluster which may be managed with newer systems (e.g. Terraform)
* Workloads can be migrated individually

Some downsides include: