feat(k8sd): Add datastore and nodeTaints to the GetClusterConfig response #1065

Draft · wants to merge 5 commits into base: main
10 changes: 10 additions & 0 deletions docs/canonicalk8s/_parts/bootstrap_config.md
@@ -13,6 +13,16 @@ Configuration options for the network feature.
Determines if the feature should be enabled.
If omitted, defaults to `true`.

### cluster-config.network.pod-cidr
**Type:** `string`<br>

PodCIDR is the CIDR range for the pods in the cluster.

### cluster-config.network.service-cidr
**Type:** `string`<br>

ServiceCIDR is the CIDR range for the services in the cluster.

### cluster-config.dns
**Type:** `object`<br>

13 changes: 13 additions & 0 deletions src/k8s/cmd/k8s/k8s_get.go
@@ -9,6 +9,7 @@ import (
apiv1 "github.com/canonical/k8s-snap-api/api/v1"
cmdutil "github.com/canonical/k8s/cmd/util"
"github.com/canonical/k8s/pkg/k8sd/features"
snaputil "github.com/canonical/k8s/pkg/snap/util"
"github.com/spf13/cobra"
)

@@ -49,6 +50,18 @@ func newGetCmd(env cmdutil.ExecutionEnvironment) *cobra.Command {
return
}

isWorker, err := snaputil.IsWorker(env.Snap)
if err != nil {
cmd.PrintErrf("Error: failed to check if the node is a worker: %v\n", err)
env.Exit(1)
return
}
if isWorker {
cmd.PrintErrln("Error: this command must be run on the control-plane node")
env.Exit(1)
return
}

response, err := client.GetClusterConfig(ctx)
if err != nil {
cmd.PrintErrf("Error: Failed to get the current cluster configuration.\n\nThe error was: %v\n", err)
13 changes: 13 additions & 0 deletions src/k8s/cmd/k8s/k8s_x_snapd_config.go
@@ -5,6 +5,7 @@ import (
"time"

cmdutil "github.com/canonical/k8s/cmd/util"
snaputil "github.com/canonical/k8s/pkg/snap/util"
"github.com/canonical/k8s/pkg/utils/control"
"github.com/canonical/k8s/pkg/utils/experimental/snapdconfig"
"github.com/spf13/cobra"
@@ -75,6 +76,18 @@ func newXSnapdConfigCmd(env cmdutil.ExecutionEnvironment) *cobra.Command {

switch mode.Orb {
case "k8sd":
isWorker, err := snaputil.IsWorker(env.Snap)
if err != nil {
cmd.PrintErrf("Error: failed to check if the node is a worker: %v\n", err)
env.Exit(1)
return
}
if isWorker {
cmd.PrintErrln("Error: this command must be run on the control-plane node")
env.Exit(1)
return
}

response, err := client.GetClusterConfig(cmd.Context())
if err != nil {
cmd.PrintErrf("Error: failed to retrieve cluster configuration: %v\n", err)
2 changes: 1 addition & 1 deletion src/k8s/go.mod
@@ -7,7 +7,7 @@ toolchain go1.23.4
require (
dario.cat/mergo v1.0.0
github.com/canonical/go-dqlite/v2 v2.0.0
github.com/canonical/k8s-snap-api v1.0.18
github.com/canonical/k8s-snap-api v1.0.19-0.20250224075759-19fcc0e0a212
github.com/canonical/lxd v0.0.0-20250113143058-52441d41dab7
github.com/canonical/microcluster/v2 v2.1.1-0.20250127104725-631889214b18
github.com/go-logr/logr v1.4.2
4 changes: 2 additions & 2 deletions src/k8s/go.sum
@@ -53,8 +53,8 @@ github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0 h1:nvj0OLI3YqYXe
github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0/go.mod h1:D/8v3kj0zr8ZAKg1AQ6crr+5VwKN5eIywRkfhyM/+dE=
github.com/canonical/go-dqlite/v2 v2.0.0 h1:RNFcFVhHMh70muKKErbW35rSzqmAFswheHdAgxW0Ddw=
github.com/canonical/go-dqlite/v2 v2.0.0/go.mod h1:IaIC8u4Z1UmPjuAqPzA2r83YMaMHRLoKZdHKI5uHCJI=
github.com/canonical/k8s-snap-api v1.0.18 h1:wjwv+F0nPJF3GPGo86SuIXXIJ0fnyRdOjSNiZwi72iY=
github.com/canonical/k8s-snap-api v1.0.18/go.mod h1:kdXBgGo5TF93NJYHfa1bfKIzEIgE1oQriFHcVoVQUX8=
github.com/canonical/k8s-snap-api v1.0.19-0.20250224075759-19fcc0e0a212 h1:7gVCKwBFzmAiNJ8k5YYhqkrhTpomeeplpbZmBtGr3tg=
github.com/canonical/k8s-snap-api v1.0.19-0.20250224075759-19fcc0e0a212/go.mod h1:kdXBgGo5TF93NJYHfa1bfKIzEIgE1oQriFHcVoVQUX8=
github.com/canonical/lxd v0.0.0-20250113143058-52441d41dab7 h1:lZCOt9/1KowNdnWXjfA1/51Uj7+R0fKtByos9EVrYn4=
github.com/canonical/lxd v0.0.0-20250113143058-52441d41dab7/go.mod h1:4Ssm3YxIz8wyazciTLDR9V0aR2GPlGIHb+S0182T5pA=
github.com/canonical/microcluster/v2 v2.1.1-0.20250127104725-631889214b18 h1:h5VJaUnE4gAKPolBTJ11HMRTEN5JyA+oR4gHkoK//6o=
7 changes: 6 additions & 1 deletion src/k8s/pkg/docgen/godoc.go
@@ -6,6 +6,7 @@ import (
"go/doc"
"go/parser"
"go/token"
"io/fs"
"reflect"
"strings"
)
@@ -49,7 +50,11 @@ func getStructTypeFromDoc(packageDoc *doc.Package, structName string) (*ast.Stru

func parsePackageDir(packageDir string) (*ast.Package, error) {
fset := token.NewFileSet()
packages, err := parser.ParseDir(fset, packageDir, nil, parser.ParseComments)
// NOTE(Hue): We only want to parse non-test files.
Comment (Contributor Author): On my local machine, `make go.doc` basically deleted all the generated docs due to a "multiple packages" error. I don't know how this seems to be working on CI; I had to make this change to be able to run it locally. I think this issue was caused by a very recent change in k8s-snap-api where we added a test file: https://github.com/canonical/k8s-snap-api/pull/26/files

nonTestPackagesFilter := func(info fs.FileInfo) bool {
return !strings.HasSuffix(info.Name(), "_test.go")
}
packages, err := parser.ParseDir(fset, packageDir, nonTestPackagesFilter, parser.ParseComments)
if err != nil {
return nil, fmt.Errorf("couldn't parse go package: %s", packageDir)
}
18 changes: 17 additions & 1 deletion src/k8s/pkg/k8sd/api/cluster_config.go
@@ -10,6 +10,7 @@ import (
"github.com/canonical/k8s/pkg/k8sd/database"
databaseutil "github.com/canonical/k8s/pkg/k8sd/database/util"
"github.com/canonical/k8s/pkg/k8sd/types"
snaputil "github.com/canonical/k8s/pkg/snap/util"
"github.com/canonical/k8s/pkg/utils"
"github.com/canonical/lxd/lxd/response"
"github.com/canonical/microcluster/v2/state"
@@ -59,7 +60,22 @@ func (e *Endpoints) getClusterConfig(s state.State, r *http.Request) response.Re
return response.InternalError(fmt.Errorf("failed to retrieve cluster configuration: %w", err))
}

var nodeTaints *[]string
snap := e.provider.Snap()
isWorker, err := snaputil.IsWorker(snap)
if err != nil {
return response.InternalError(fmt.Errorf("failed to check if node is a worker: %w", err))
}

if isWorker {
nodeTaints = config.Kubelet.WorkerTaints
} else {
nodeTaints = config.Kubelet.ControlPlaneTaints
}

return response.SyncResponse(true, &apiv1.GetClusterConfigResponse{
Config:     config.ToUserFacing(),
Datastore: config.Datastore.ToUserFacing(),
NodeTaints: nodeTaints,
})
}
2 changes: 1 addition & 1 deletion src/k8s/pkg/k8sd/api/endpoints.go
@@ -112,7 +112,7 @@ func (e *Endpoints) Endpoints() []rest.Endpoint {
Name: "ClusterConfig",
Path: apiv1.GetClusterConfigRPC, // == apiv1.SetClusterConfigRPC
Put: rest.EndpointAction{Handler: e.putClusterConfig, AccessHandler: e.restrictWorkers},
Get: rest.EndpointAction{Handler: e.getClusterConfig, AccessHandler: e.restrictWorkers},
Get: rest.EndpointAction{Handler: e.getClusterConfig},
Comment (Member): How does this affect `k8s get`? Do we still see the "this command is not available on workers" message?

Reply (HomayoonAlimohammadi, Contributor Author, Feb 21, 2025): That's a really good catch. I'll add an isWorker check.

Reply (Contributor Author): Added this check in k8s_get.go and k8s_x_snapd_config.go.
},
// Kubernetes auth tokens and token review webhook for kube-apiserver
{
19 changes: 18 additions & 1 deletion src/k8s/pkg/k8sd/app/hooks_bootstrap.go
@@ -205,6 +205,13 @@ func (a *App) onBootstrapWorkerNode(ctx context.Context, s state.State, encodedT
return fmt.Errorf("failed to generate kube-proxy kubeconfig: %w", err)
}

// NOTE(Hue): This is how the taints are set for the worker nodes in the charm.
// https://github.com/canonical/k8s-operator/blob/bd9ebbda153053f9bfd6e66a93d2afb629a6cfd8/charms/worker/k8s/src/config/extra_args.py#L89
var taints []string
if taintsStr, ok := joinConfig.ExtraNodeKubeletArgs["--register-with-taints"]; ok && taintsStr != nil {
taints = strings.Split(*taintsStr, ",")
}

// Write worker node configuration to dqlite
//
// Worker nodes only use a subset of the ClusterConfig struct. At the moment, these are:
@@ -231,6 +238,16 @@
Annotations: response.Annotations,
}

if len(taints) > 0 {
cfg.Kubelet = types.Kubelet{
// NOTE(Hue): We set the worker taints here so that the charm
// can later prevent the user from changing these taints through charm config.
// These taints for the worker nodes are set by the `bootstrap-node-taints` charm config.
// https://github.com/canonical/k8s-operator/blob/bd9ebbda153053f9bfd6e66a93d2afb629a6cfd8/charms/worker/charmcraft.yaml#L67
WorkerTaints: utils.Pointer(taints),
}
}

serviceConfigs := types.K8sServiceConfigs{
ExtraNodeKubeletArgs: joinConfig.ExtraNodeKubeletArgs,
ExtraNodeKubeProxyArgs: joinConfig.ExtraNodeKubeProxyArgs,
@@ -254,7 +271,7 @@
if err := setup.Containerd(snap, joinConfig.ExtraNodeContainerdConfig, joinConfig.ExtraNodeContainerdArgs); err != nil {
return fmt.Errorf("failed to configure containerd: %w", err)
}
if err := setup.KubeletWorker(snap, s.Name(), nodeIP, response.ClusterDNS, response.ClusterDomain, response.CloudProvider, joinConfig.ExtraNodeKubeletArgs); err != nil {
if err := setup.KubeletWorker(snap, s.Name(), nodeIP, response.ClusterDNS, response.ClusterDomain, response.CloudProvider, joinConfig.ExtraNodeKubeletArgs, taints); err != nil {
return fmt.Errorf("failed to configure kubelet: %w", err)
}
if err := setup.KubeProxy(ctx, snap, s.Name(), response.PodCIDR, localhostAddress, joinConfig.ExtraNodeKubeProxyArgs); err != nil {
4 changes: 2 additions & 2 deletions src/k8s/pkg/k8sd/setup/kubelet.go
@@ -41,8 +41,8 @@ func KubeletControlPlane(snap snap.Snap, hostname string, nodeIP net.IP, cluster
}

// KubeletWorker configures kubelet on a worker node.
func KubeletWorker(snap snap.Snap, hostname string, nodeIP net.IP, clusterDNS string, clusterDomain string, cloudProvider string, extraArgs map[string]*string) error {
return kubelet(snap, hostname, nodeIP, clusterDNS, clusterDomain, cloudProvider, nil, kubeletWorkerLabels, extraArgs)
func KubeletWorker(snap snap.Snap, hostname string, nodeIP net.IP, clusterDNS string, clusterDomain string, cloudProvider string, extraArgs map[string]*string, taints []string) error {
Comment (Contributor Author): Turns out that we were not passing the taints to Kubelet for worker nodes at all.

Reply (Contributor): I guess we didn't have a test for this?

Reply (Contributor Author): Well, we did test the kubelet package, but the taints were not being tested specifically.
return kubelet(snap, hostname, nodeIP, clusterDNS, clusterDomain, cloudProvider, taints, kubeletWorkerLabels, extraArgs)
}

// kubelet configures kubelet on the local node.
19 changes: 12 additions & 7 deletions src/k8s/pkg/k8sd/setup/kubelet_test.go
@@ -3,6 +3,7 @@ package setup_test
import (
"net"
"path/filepath"
"strings"
"testing"

"github.com/canonical/k8s/pkg/k8sd/setup"
@@ -212,9 +213,10 @@ func TestKubelet(t *testing.T) {

// Create a mock snap
s := mustSetupSnapAndDirectories(t, setKubeletMock)
taints := []string{"taint1=", "taint2=value"}

// Call the kubelet worker setup function
g.Expect(setup.KubeletWorker(s, "dev", net.ParseIP("192.168.0.1"), "10.152.1.1", "test-cluster.local", "provider", nil)).To(Succeed())
g.Expect(setup.KubeletWorker(s, "dev", net.ParseIP("192.168.0.1"), "10.152.1.1", "test-cluster.local", "provider", nil, taints)).To(Succeed())

// Ensure the kubelet arguments file has the expected arguments and values
tests := []struct {
@@ -234,7 +236,7 @@
{key: "--kubeconfig", expectedVal: filepath.Join(s.Mock.KubernetesConfigDir, "kubelet.conf")},
{key: "--node-labels", expectedVal: expectedWorkerLabels},
{key: "--read-only-port", expectedVal: "0"},
{key: "--register-with-taints", expectedVal: ""},
{key: "--register-with-taints", expectedVal: strings.Join(taints, ",")},
{key: "--root-dir", expectedVal: s.Mock.KubeletRootDir},
{key: "--serialize-image-pulls", expectedVal: "false"},
{key: "--tls-cipher-suites", expectedVal: kubeletTLSCipherSuites},
@@ -271,8 +273,10 @@
"--cloud-provider": nil,
}

taints := []string{"taint1=", "taint2=value"}

// Call the kubelet worker setup function
g.Expect(setup.KubeletWorker(s, "dev", net.ParseIP("192.168.0.1"), "10.152.1.1", "test-cluster.local", "provider", extraArgs)).To(Succeed())
g.Expect(setup.KubeletWorker(s, "dev", net.ParseIP("192.168.0.1"), "10.152.1.1", "test-cluster.local", "provider", extraArgs, taints)).To(Succeed())

// Ensure the kubelet arguments file has the expected arguments and values
tests := []struct {
@@ -292,7 +296,7 @@
{key: "--kubeconfig", expectedVal: filepath.Join(s.Mock.KubernetesConfigDir, "kubelet.conf")},
{key: "--node-labels", expectedVal: expectedWorkerLabels},
{key: "--read-only-port", expectedVal: "0"},
{key: "--register-with-taints", expectedVal: ""},
{key: "--register-with-taints", expectedVal: strings.Join(taints, ",")},
{key: "--root-dir", expectedVal: s.Mock.KubeletRootDir},
{key: "--serialize-image-pulls", expectedVal: "false"},
{key: "--tls-cipher-suites", expectedVal: kubeletTLSCipherSuites},
@@ -327,9 +331,10 @@ func TestKubelet(t *testing.T) {

// Create a mock snap
s := mustSetupSnapAndDirectories(t, setKubeletMock)
taints := []string{"taint1=", "taint2=value"}

// Call the kubelet worker setup function
g.Expect(setup.KubeletWorker(s, "dev", nil, "", "", "", nil)).To(Succeed())
g.Expect(setup.KubeletWorker(s, "dev", nil, "", "", "", nil, taints)).To(Succeed())

// Ensure the kubelet arguments file has the expected arguments and values
tests := []struct {
@@ -349,7 +354,7 @@
{key: "--kubeconfig", expectedVal: filepath.Join(s.Mock.KubernetesConfigDir, "kubelet.conf")},
{key: "--node-labels", expectedVal: expectedWorkerLabels},
{key: "--read-only-port", expectedVal: "0"},
{key: "--register-with-taints", expectedVal: ""},
{key: "--register-with-taints", expectedVal: strings.Join(taints, ",")},
{key: "--root-dir", expectedVal: s.Mock.KubeletRootDir},
{key: "--serialize-image-pulls", expectedVal: "false"},
{key: "--tls-cipher-suites", expectedVal: kubeletTLSCipherSuites},
@@ -386,7 +391,7 @@

s.Mock.ServiceArgumentsDir = "nonexistent"

g.Expect(setup.KubeletWorker(s, "dev", net.ParseIP("192.168.0.1"), "10.152.1.1", "test-cluster.local", "provider", nil)).ToNot(Succeed())
g.Expect(setup.KubeletWorker(s, "dev", net.ParseIP("192.168.0.1"), "10.152.1.1", "test-cluster.local", "provider", nil, nil)).ToNot(Succeed())
})

t.Run("HostnameOverride", func(t *testing.T) {
4 changes: 3 additions & 1 deletion src/k8s/pkg/k8sd/types/cluster_config_convert.go
@@ -130,7 +130,9 @@ func ClusterConfigFromUserFacing(u apiv1.UserFacingClusterConfig) (ClusterConfig
func (c ClusterConfig) ToUserFacing() apiv1.UserFacingClusterConfig {
return apiv1.UserFacingClusterConfig{
Network: apiv1.NetworkConfig{
Enabled:     c.Network.Enabled,
PodCIDR: c.Network.PodCIDR,
ServiceCIDR: c.Network.ServiceCIDR,
},
DNS: apiv1.DNSConfig{
Enabled: c.DNS.Enabled,
2 changes: 2 additions & 0 deletions src/k8s/pkg/k8sd/types/cluster_config_kubelet.go
@@ -15,12 +15,14 @@ type Kubelet struct {
ClusterDNS *string `json:"cluster-dns,omitempty"`
ClusterDomain *string `json:"cluster-domain,omitempty"`
ControlPlaneTaints *[]string `json:"control-plane-taints,omitempty"`
WorkerTaints *[]string `json:"worker-taints,omitempty"`
}

func (c Kubelet) GetCloudProvider() string { return getField(c.CloudProvider) }
func (c Kubelet) GetClusterDNS() string { return getField(c.ClusterDNS) }
func (c Kubelet) GetClusterDomain() string { return getField(c.ClusterDomain) }
func (c Kubelet) GetControlPlaneTaints() []string { return getField(c.ControlPlaneTaints) }
func (c Kubelet) GetWorkerTaints() []string { return getField(c.WorkerTaints) }
func (c Kubelet) Empty() bool { return c == Kubelet{} }

// hash returns a sha256 sum from the Kubelet configuration.
1 change: 1 addition & 0 deletions src/k8s/pkg/k8sd/types/cluster_config_merge.go
@@ -78,6 +78,7 @@ func MergeClusterConfig(existing ClusterConfig, new ClusterConfig) (ClusterConfi
{name: "load balancer CIDRs", val: &config.LoadBalancer.CIDRs, old: existing.LoadBalancer.CIDRs, new: new.LoadBalancer.CIDRs, allowChange: true},
{name: "load balancer L2 interfaces", val: &config.LoadBalancer.L2Interfaces, old: existing.LoadBalancer.L2Interfaces, new: new.LoadBalancer.L2Interfaces, allowChange: true},
{name: "control-plane register with taints", val: &config.Kubelet.ControlPlaneTaints, old: existing.Kubelet.ControlPlaneTaints, new: new.Kubelet.ControlPlaneTaints, allowChange: false},
{name: "worker register with taints", val: &config.Kubelet.WorkerTaints, old: existing.Kubelet.WorkerTaints, new: new.Kubelet.WorkerTaints, allowChange: false},
} {
if *i.val, err = mergeSliceField(i.old, i.new, i.allowChange); err != nil {
return ClusterConfig{}, fmt.Errorf("prevented update of %s: %w", i.name, err)
35 changes: 35 additions & 0 deletions tests/integration/templates/bootstrap-cluster-config.yaml
@@ -0,0 +1,35 @@
# Contains the bootstrap configuration for the cluster config test.
control-plane-taints:
- "taint1=:PreferNoSchedule"
- "taint2=value:PreferNoSchedule"
cluster-config:
network:
enabled: true
dns:
enabled: true
ingress:
enabled: true
load-balancer:
enabled: true
local-storage:
enabled: true
gateway:
enabled: true
metrics-server:
enabled: true
extra-node-config-files:
bootstrap-extra-file.yaml: extra-args-test-file-content
extra-node-kube-apiserver-args:
--request-timeout: 2m
extra-node-kube-controller-manager-args:
--leader-elect-retry-period: 3s
extra-node-kube-scheduler-args:
--authorization-webhook-cache-authorized-ttl: 11s
extra-node-kube-proxy-args:
--config-sync-period: 14m
extra-node-kubelet-args:
--authentication-token-webhook-cache-ttl: 3m
extra-node-containerd-args:
--log-level: debug
extra-node-k8s-dqlite-args:
--watch-storage-available-size-interval: 6s
3 changes: 3 additions & 0 deletions tests/integration/templates/worker-join-cluster-config.yaml
@@ -0,0 +1,3 @@
# Contains the join configuration for the worker nodes in the cluster config test.
extra-node-kubelet-args:
"--register-with-taints": "workerTaint1=:PreferNoSchedule,workerTaint2=workerValue:PreferNoSchedule"