feat(k8sd): Add datastore and nodeTaints to the GetClusterConfig response #1065
base: main
Conversation
Nice work, please review the suggestions before merging.
Apparently it needs further adjustments. We need to store more data.
Force-pushed from ed1d7f7 to ba06cac
src/k8s/go.mod (outdated)
@@ -7,7 +7,7 @@ toolchain go1.23.4
 require (
 	dario.cat/mergo v1.0.0
 	github.com/canonical/go-dqlite/v2 v2.0.0
-	github.com/canonical/k8s-snap-api v1.0.17
+	github.com/canonical/k8s-snap-api v1.0.18-0.20250214142145-772178e23c39
Needs to change whenever the API is bumped.
@@ -41,8 +41,8 @@ func KubeletControlPlane(snap snap.Snap, hostname string, nodeIP net.IP, cluster
 }

 // KubeletWorker configures kubelet on a worker node.
-func KubeletWorker(snap snap.Snap, hostname string, nodeIP net.IP, clusterDNS string, clusterDomain string, cloudProvider string, extraArgs map[string]*string) error {
-	return kubelet(snap, hostname, nodeIP, clusterDNS, clusterDomain, cloudProvider, nil, kubeletWorkerLabels, extraArgs)
+func KubeletWorker(snap snap.Snap, hostname string, nodeIP net.IP, clusterDNS string, clusterDomain string, cloudProvider string, extraArgs map[string]*string, taints []string) error {
Turns out that we were not passing the taints to Kubelet for worker nodes at all.
I guess we didn't have a test for this?
Well we did test the kubelet package, but the "taints" were not being tested specifically.
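The thread above notes that worker taints were silently dropped before reaching kubelet. As a rough sketch of the fix's shape (the helper name and arg map are hypothetical, not the actual kubelet package code), the new `taints []string` parameter could be folded into the kubelet flags like this:

```go
package main

import (
	"fmt"
	"strings"
)

// buildKubeletTaintArgs is a hypothetical helper illustrating how a slice
// of taints could become the kubelet flag that was previously dropped for
// worker nodes. An empty slice produces no flag at all.
func buildKubeletTaintArgs(taints []string) map[string]string {
	args := map[string]string{}
	if len(taints) > 0 {
		// kubelet expects a comma-separated list for --register-with-taints.
		args["--register-with-taints"] = strings.Join(taints, ",")
	}
	return args
}

func main() {
	args := buildKubeletTaintArgs([]string{"node-role.kubernetes.io/worker:NoSchedule"})
	fmt.Println(args["--register-with-taints"])
}
```

A test covering exactly this path would have caught the regression the reviewers describe.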
How do we want to handle the per-node bootstrap config? For worker nodes this is simpler since the db is already per-node; what about joining control plane nodes? It seems we store a single key value for all?
Interesting comment @berkayoz. Thanks for the attention to detail. Sparked a conversation in MM.
Needs further investigation. Converting to draft for now.
Force-pushed from 44d55b9 to b375077
assert cp_resp["datastore"] == exp_cp_datastore, f"Mismatch in {cp_resp['datastore']} and {exp_cp_datastore=}"
assert cp_resp["nodeTaints"] == exp_cp_taints, f"Mismatch in {cp_resp['nodeTaints']} and {exp_cp_taints=}"
assert worker_resp["nodeTaints"] == exp_worker_taints, f"Mismatch in {worker_resp['nodeTaints']} and {exp_worker_taints=}"
assert cp_resp["status"]["network"] == exp_cp_config["network"], f"Mismatch in {cp_resp['status']['network']} and {exp_cp_config['network']=}"
The reason we can't simply check cp_resp["status"] == exp_cp_config is that cp_resp["status"]["dns"]["service-ip"] changes each time.
Force-pushed from f31c8de to a14fe1e
@@ -49,7 +50,11 @@ func getStructTypeFromDoc(packageDoc *doc.Package, structName string) (*ast.Stru
 }

 func parsePackageDir(packageDir string) (*ast.Package, error) {
 	fset := token.NewFileSet()
-	packages, err := parser.ParseDir(fset, packageDir, nil, parser.ParseComments)
+	// NOTE(Hue): We only want to parse non-test files.
On my local machine, `make go.doc` basically deleted all the generated docs due to a "multiple packages" error. I don't know how this seems to be working on the CI. Had to make this change to be able to run locally. I think this issue was due to a very recent change in k8s-snap-api where we added a test file: https://github.com/canonical/k8s-snap-api/pull/26/files
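The multiple-packages error arises because a `*_test.go` file in an external test package (`demo_test`) makes `parser.ParseDir` report two packages for one directory. A self-contained sketch of the filter approach mentioned in the NOTE (the temp-dir setup is illustrative, not the doc generator's code):

```go
package main

import (
	"fmt"
	"go/parser"
	"go/token"
	"os"
	"path/filepath"
	"strings"
)

// isTestFile reports whether a file name is a Go test file.
func isTestFile(name string) bool {
	return strings.HasSuffix(name, "_test.go")
}

func main() {
	// Build a throwaway package dir with one regular and one test file.
	dir, err := os.MkdirTemp("", "demo")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)
	os.WriteFile(filepath.Join(dir, "a.go"), []byte("package demo\n"), 0o644)
	os.WriteFile(filepath.Join(dir, "a_test.go"), []byte("package demo_test\n"), 0o644)

	// Passing a filter instead of nil skips *_test.go, so ParseDir sees a
	// single package instead of both "demo" and "demo_test".
	fset := token.NewFileSet()
	pkgs, err := parser.ParseDir(fset, dir, func(fi os.FileInfo) bool {
		return !isTestFile(fi.Name())
	}, parser.ParseComments)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(pkgs)) // 1
}
```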
Force-pushed from a14fe1e to 47586fb
Force-pushed from 47586fb to 55bd546
LGTM overall; some small nits, and let's rename the PR title (a conventional commit structure would also be nice).
@@ -112,7 +112,7 @@ func (e *Endpoints) Endpoints() []rest.Endpoint {
 	Name: "ClusterConfig",
 	Path: apiv1.GetClusterConfigRPC, // == apiv1.SetClusterConfigRPC
 	Put:  rest.EndpointAction{Handler: e.putClusterConfig, AccessHandler: e.restrictWorkers},
-	Get:  rest.EndpointAction{Handler: e.getClusterConfig, AccessHandler: e.restrictWorkers},
+	Get:  rest.EndpointAction{Handler: e.getClusterConfig},
How does this affect `k8s get`? Do we still see the "this command is not available on workers" message?
That's a really good catch. I'll add an `isWorker` check.
Added this check in k8s_get.go and k8s_x_snapd_config.go.
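Since the endpoint no longer rejects workers server-side, the client commands refuse early instead. A minimal sketch of such a guard (names are illustrative, not the actual k8s-snap CLI code):

```go
package main

import (
	"errors"
	"fmt"
)

// errWorkerNotSupported mirrors the "not available on workers" message the
// CLI prints; the exact wording here is an assumption.
var errWorkerNotSupported = errors.New("this command is not available on worker nodes")

// guardWorker is a hypothetical version of the isWorker check added to
// k8s_get.go and k8s_x_snapd_config.go: with the server-side AccessHandler
// removed from the GET endpoint, the command itself bails out on workers.
func guardWorker(isWorker bool) error {
	if isWorker {
		return errWorkerNotSupported
	}
	return nil
}

func main() {
	fmt.Println(guardWorker(true))  // error on a worker node
	fmt.Println(guardWorker(false)) // <nil> on a control plane node
}
```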
if taintsStr, ok := joinConfig.ExtraNodeKubeletArgs["--register-with-taints"]; ok {
	if taintsStr != nil {
Suggested change:
-if taintsStr, ok := joinConfig.ExtraNodeKubeletArgs["--register-with-taints"]; ok {
-	if taintsStr != nil {
+if taintsStr, ok := joinConfig.ExtraNodeKubeletArgs["--register-with-taints"]; ok && taintsStr != nil {
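A self-contained sketch of consuming that flag with the combined `ok && != nil` check (the helper name and map shape are hypothetical, modeled on `ExtraNodeKubeletArgs`'s `map[string]*string` type):

```go
package main

import (
	"fmt"
	"strings"
)

// taintsFromArg extracts the taint list from extra kubelet args. The value
// is a *string because an entry may exist but be nil, hence the combined
// ok && v != nil check the review suggestion collapses into one line.
func taintsFromArg(extraArgs map[string]*string) []string {
	if v, ok := extraArgs["--register-with-taints"]; ok && v != nil {
		return strings.Split(*v, ",")
	}
	return nil
}

func main() {
	val := "key1=v1:NoSchedule,key2=v2:NoExecute"
	taints := taintsFromArg(map[string]*string{"--register-with-taints": &val})
	fmt.Println(len(taints)) // 2
}
```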
@@ -78,6 +78,7 @@ func MergeClusterConfig(existing ClusterConfig, new ClusterConfig) (ClusterConfi
 	{name: "load balancer CIDRs", val: &config.LoadBalancer.CIDRs, old: existing.LoadBalancer.CIDRs, new: new.LoadBalancer.CIDRs, allowChange: true},
 	{name: "load balancer L2 interfaces", val: &config.LoadBalancer.L2Interfaces, old: existing.LoadBalancer.L2Interfaces, new: new.LoadBalancer.L2Interfaces, allowChange: true},
 	{name: "control-plane register with taints", val: &config.Kubelet.ControlPlaneTaints, old: existing.Kubelet.ControlPlaneTaints, new: new.Kubelet.ControlPlaneTaints, allowChange: false},
+	{name: "worker kubelet register with taints", val: &config.Kubelet.WorkerTaints, old: existing.Kubelet.WorkerTaints, new: new.Kubelet.WorkerTaints, allowChange: false},
Suggested change:
-{name: "worker kubelet register with taints", val: &config.Kubelet.WorkerTaints, old: existing.Kubelet.WorkerTaints, new: new.Kubelet.WorkerTaints, allowChange: false},
+{name: "worker register with taints", val: &config.Kubelet.WorkerTaints, old: existing.Kubelet.WorkerTaints, new: new.Kubelet.WorkerTaints, allowChange: false},
🚫 Depends on canonical/k8s-snap-api#24 🚫
Overview
This PR adds datastore and nodeTaints to the GetClusterConfig response.
Rationale
k8s-operator needs to know the cluster configuration, because we need to prevent the user from changing the bootstrap- charm configs that are not supposed to be changed after the cluster is bootstrapped.