fix typos using https://crates.io/crates/typos-cli #112

Merged · merged 1 commit on Jan 26, 2024
4 changes: 2 additions & 2 deletions charts/karpenter/templates/deployment.yaml
@@ -187,13 +187,13 @@ spec:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
-# The template below patches the .Values.affinity to add a default label selector where not specificed
+# The template below patches the .Values.affinity to add a default label selector where not specified
{{- $_ := include "karpenter.patchAffinity" $ }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.topologySpreadConstraints }}
-# The template below patches the .Values.topologySpreadConstraints to add a default label selector where not specificed
+# The template below patches the .Values.topologySpreadConstraints to add a default label selector where not specified
{{- $_ := include "karpenter.patchTopologySpreadConstraints" $ }}
topologySpreadConstraints:
{{- toYaml . | nindent 8 }}
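A minimal sketch of the patching these two comments describe, assuming `karpenter.patchAffinity` (and its topology-spread counterpart) injects a default `labelSelector` wherever a term omits one; the input values and the injected label below are illustrative assumptions, not taken from this diff:

```yaml
# Hypothetical values.yaml input: an anti-affinity term with no labelSelector.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - topologyKey: kubernetes.io/hostname
---
# Assumed rendered result: the helper fills in a default selector so the
# term applies to the chart's own pods.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - topologyKey: kubernetes.io/hostname
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: karpenter  # assumed default label
```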
2 changes: 1 addition & 1 deletion designs/aks-node-bootstrap.md
@@ -103,7 +103,7 @@ Karpenter also supports provider-specific configuration via `NodeTemplate` custo
Note that this represents part of the external configuration surface / API, and should be treated as such.

<!-- TODO: cover NodeTemplate details -->
-<!-- TODO: add guidance on what belongs to settins vs NodeTemplate -->
+<!-- TODO: add guidance on what belongs to settings vs NodeTemplate -->

### Auto-detected values

6 changes: 3 additions & 3 deletions designs/gpu-selection-and-bootstrap.md
@@ -74,7 +74,7 @@ The way we determine these drivers is via trial and error, and there is not a gr
Converged drivers are a mix of multiple drivers, and installing vanilla CUDA drivers on top of them will fail with opaque errors.
nvidia-bug-report.sh may be helpful, but usually it just tells you the PCI card id is incompatible.

-So manual trial and error, or leveraging other peoples manual trial and error, and published gpu drivers seems to be the prefered method for approaching this.
+So manual trial and error, or leveraging other peoples manual trial and error, and published gpu drivers seems to be the preferred method for approaching this.
See https://github.com/Azure/azhpc-extensions/blob/daaefd78df6f27012caf30f3b54c3bd6dc437652/NvidiaGPU/resources.json for the HPC list of SKUs and converged drivers, and the driver matrix used by HPC.

**Ownership:** Node SIG is responsible for ensuring successful and functional installation. Our goal is to share a bootstrap contract, and the obligation of a functional, successfully bootstrapped VHD lies with the Node SIG.
@@ -95,9 +95,9 @@ The NVIDIA device plugin for Kubernetes is designed to enable GPU support within

We will require the customer to install the nvidia device plugin daemonset to enable GPU support through karpenter.

-When a node with Nvidia GPUS joins the cluster, the device plugin detects available gpus and notifies the k8s scheduler that we have a new Allocatable Resource type of `nvidia.com/gpu` along with a resource quanity that can be considered for scheduling.
+When a node with Nvidia GPUS joins the cluster, the device plugin detects available gpus and notifies the k8s scheduler that we have a new Allocatable Resource type of `nvidia.com/gpu` along with a resource quantity that can be considered for scheduling.

-Note the device plugin is also reponsible for the allocation of that resource and reporting that other pods can not use that resource and marking it as used by changing the allocatable capacity on the node.
+Note the device plugin is also responsible for the allocation of that resource and reporting that other pods can not use that resource and marking it as used by changing the allocatable capacity on the node.
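
A usage sketch for the behavior described above (not part of this diff): once the device plugin daemonset advertises `nvidia.com/gpu`, a pod requests a GPU through ordinary resource limits. The pod name and image tag are illustrative assumptions:

```yaml
# Illustrative pod requesting a single GPU; the scheduler will only place it
# on a node whose device plugin reports free nvidia.com/gpu capacity.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-example  # hypothetical name
spec:
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04  # assumed image tag
      resources:
        limits:
          nvidia.com/gpu: 1  # counted against the node's allocatable GPU quantity
```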

## Changes to Requirements API

2 changes: 1 addition & 1 deletion designs/k8s-node-image-upgrade.md
@@ -310,5 +310,5 @@ From template:
design doc by carefully reviewing it or assigning tech leads that
are domain experts in that SIG to review and approve this doc

-[^6]: Q&A style meeting notes from desgin review meeting to capture
+[^6]: Q&A style meeting notes from design review meeting to capture
todos
2 changes: 1 addition & 1 deletion pkg/fake/types.go
@@ -136,7 +136,7 @@ type MockHandler[T any] struct {
err error
}

-// Done returns true if the LRO has reached a terminal state. TrivialHanlder is always done.
+// Done returns true if the LRO has reached a terminal state. TrivialHandler is always done.
func (h MockHandler[T]) Done() bool {
return true
}
2 changes: 1 addition & 1 deletion pkg/providers/imagefamily/resolver.go
@@ -88,7 +88,7 @@ func (r Resolver) Resolve(ctx context.Context, nodeClass *v1alpha2.AKSNodeClass,
kubeletConfig = &corev1beta1.KubeletConfiguration{}
}

-// TODO: revist computeResources and maxPods implementation
+// TODO: revisit computeResources and maxPods implementation
kubeletConfig.KubeReserved = instanceType.Overhead.KubeReserved
kubeletConfig.SystemReserved = instanceType.Overhead.SystemReserved
kubeletConfig.EvictionHard = map[string]string{
4 changes: 2 additions & 2 deletions pkg/providers/instancetype/suite_test.go
@@ -787,9 +787,9 @@ var _ = Describe("InstanceType Provider", func() {
Expect(ok).To(BeTrue(), "Expected nvidia.com/gpu to be present in capacity")
Expect(gpuQuantity.Value()).To(Equal(int64(1)))

-gpuQuanityNonGPU, ok := normalNode.Capacity["nvidia.com/gpu"]
+gpuQuantityNonGPU, ok := normalNode.Capacity["nvidia.com/gpu"]
Expect(ok).To(BeTrue(), "Expected nvidia.com/gpu to be present in capacity, and be zero")
-Expect(gpuQuanityNonGPU.Value()).To(Equal(int64(0)))
+Expect(gpuQuantityNonGPU.Value()).To(Equal(int64(0)))
})
})

2 changes: 1 addition & 1 deletion pkg/providers/pricing/pricing.go
@@ -162,7 +162,7 @@ func (p *Provider) updatePricing(ctx context.Context) {
prices := map[client.Item]bool{}
err := p.fetchPricing(ctx, processPage(prices))
if err != nil {
-logging.FromContext(ctx).Errorf("error featching updated pricing for region %s, %s, using existing pricing data, on-demand: %s, spot: %s", p.region, err, p.lastOnDemandUpdateTime.Format(time.RFC3339), p.lastSpotUpdateTime.Format(time.RFC3339))
+logging.FromContext(ctx).Errorf("error fetching updated pricing for region %s, %s, using existing pricing data, on-demand: %s, spot: %s", p.region, err, p.lastOnDemandUpdateTime.Format(time.RFC3339), p.lastSpotUpdateTime.Format(time.RFC3339))
return
}

8 changes: 4 additions & 4 deletions test/pkg/debug/events.go
@@ -55,7 +55,7 @@ func (c *EventClient) dumpKarpenterEvents(ctx context.Context) error {
if err := c.kubeClient.List(ctx, el, client.InNamespace("karpenter")); err != nil {
return err
}
-for k, v := range coallateEvents(filterTestEvents(el.Items, c.start)) {
+for k, v := range collateEvents(filterTestEvents(el.Items, c.start)) {
fmt.Print(getEventInformation(k, v))
}
return nil
@@ -71,7 +71,7 @@ func (c *EventClient) dumpPodEvents(ctx context.Context) error {
events := lo.Filter(filterTestEvents(el.Items, c.start), func(e v1.Event, _ int) bool {
return e.InvolvedObject.Namespace != "kube-system"
})
-for k, v := range coallateEvents(events) {
+for k, v := range collateEvents(events) {
fmt.Print(getEventInformation(k, v))
}
return nil
@@ -84,7 +84,7 @@ func (c *EventClient) dumpNodeEvents(ctx context.Context) error {
}); err != nil {
return err
}
-for k, v := range coallateEvents(filterTestEvents(el.Items, c.start)) {
+for k, v := range collateEvents(filterTestEvents(el.Items, c.start)) {
fmt.Print(getEventInformation(k, v))
}
return nil
@@ -103,7 +103,7 @@ func filterTestEvents(events []v1.Event, startTime time.Time) []v1.Event {
})
}

-func coallateEvents(events []v1.Event) map[v1.ObjectReference]*v1.EventList {
+func collateEvents(events []v1.Event) map[v1.ObjectReference]*v1.EventList {
eventMap := map[v1.ObjectReference]*v1.EventList{}
for i := range events {
elem := events[i]