Portworx: operator helm chart #481
This PR covers a lot of changes; many tests have to be completed before this can be merged in:
- DaemonSet (DS) to Operator migration
- installing / uninstalling in a custom namespace
- passing in different drive specs and checking if the install works
- passing a custom registry for all images and for individual images being used
- spec generated with parameters that are either null / none

If you can cover the above and provide a report with the PR, that would help.
Also, run `helm lint` to catch issues with formatting and residual git merge markers.
```diff
  command: ['/bin/sh',
            '-c',
-           'kubectl -n kube-system delete storagecluster {{ .Values.clusterName }} --ignore-not-found']
+           'kubectl -n {{ template "px.getDeploymentNamespace" . }} delete storagecluster {{ $clusterName }} --ignore-not-found']
```
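For context, a minimal sketch of the kind of pre-delete hook Job this command runs in; the Job name, image, and service account here are assumptions for illustration, not the chart's actual manifest:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: px-delete-storagecluster        # hypothetical name
  annotations:
    "helm.sh/hook": pre-delete
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      serviceAccountName: portworx-hook # assumed; needs RBAC to delete StorageClusters
      restartPolicy: Never
      containers:
        - name: delete-storagecluster
          image: bitnami/kubectl        # assumed image that provides kubectl
          command: ['/bin/sh', '-c',
            'kubectl -n {{ template "px.getDeploymentNamespace" . }} delete storagecluster {{ .Values.clusterName }} --ignore-not-found']
```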
`kube-system` should be `{{ template "px.getDeploymentNamespace" . }}`
Shouldn't it be `{{ .Release.Namespace }}` instead? Let's take a look at this function:
```yaml
{{- define "px.getDeploymentNamespace" -}}
## This value will never be empty, no need to check `if` it exists
{{- if (.Release.Namespace) -}}
# If the .Release.Namespace is "default", evaluate to "kube-system".
# These are two different namespaces, so I don't understand why, if the chart is installed
# to the "default" namespace, you'd be operating on resources in the "kube-system" namespace.
{{- if (eq "default" .Release.Namespace) -}}
{{- printf "kube-system" -}}
{{- else -}}
{{- printf "%s" .Release.Namespace -}}
{{- end -}}
{{- end -}}
{{- end -}}
```
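For illustration, a sketch of how this helper resolves in the two cases being debated (release name and chart path are assumptions; `helm template` defaults `.Release.Namespace` to "default" when no `--namespace` is given):

```yaml
# helm template px ./charts/portworx                  -> .Release.Namespace = "default" -> "kube-system"
# helm template px ./charts/portworx --namespace test -> .Release.Namespace = "test"    -> "test"
```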
Historically we install in the kube-system namespace as our default, and we want to keep the same behavior. So, if the user is not passing in a namespace, that means they want to install in Portworx's default namespace, i.e. `kube-system`.
In order to install a helm chart, you have to either specify a namespace or have it selected in your kubeconfig context. Also, what happens if I want to move namespaces from `test` to `production`? We should be thinking of cases where Portworx was never installed to `kube-system`...
This is probably a good example of why the operator should be handling object discovery and deletion/preservation instead of a Job that runs kubectl commands.
This chart also helps in migrating from DS-based installs to Operator-managed ones; the operator handles the object discovery and uses the same namespace in which the DS was installed. What if the user does not have a namespace context set? Things would always end up in the default namespace.
```diff
@@ -1,8 +0,0 @@
-{{- $secretType := .Values.secretType | default "k8s" }}
```
Why did we delete this?
> Why did we delete this?
We don't need to turn every helm value into a variable. This same thing can be expressed like so:

```yaml
---
# values.yaml
secretType: k8s
---
# storageClass.yaml
{{ .Values.secretType }}
```

I don't understand why `$secretType` needs to be defined as a variable when it can have a default value set in the values.yaml file and be referenced using basic helm syntax in any template file.
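For comparison, the deleted line's fallback behavior can also be kept without a template variable by using helm's `default` function inline; a minimal sketch (file name assumed):

```yaml
# storageClass.yaml
{{ .Values.secretType | default "k8s" }}
```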
But if you check, here we are creating the Portworx namespace that should host the secret the user wants to use. If we don't create it here, then we end up documenting that the user must:
- create a namespace called portworx
- then create the secret they want us to use in that namespace.

Maybe a better name for this variable would be `createSecretNamespace`, and if that is set to true, or we detect the secret type as `k8s`, we create the namespace.
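A hedged sketch of that proposal; the `createSecretNamespace` value, file name, and condition are hypothetical, not part of this chart:

```yaml
# templates/secret-namespace.yaml (hypothetical)
{{- if or .Values.createSecretNamespace (eq (.Values.secretType | default "k8s") "k8s") }}
apiVersion: v1
kind: Namespace
metadata:
  name: portworx
{{- end }}
```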
Wouldn't we want the secret created in whatever namespace the chart is being installed to? i.e. `{{ .Release.Namespace }}`. What happens if I install the chart to the `test` namespace or the `kube-system` namespace?
Also, I do see the chart has a Namespace template, but I think helm charts should not be creating namespaces. Users should be doing that, and the current Namespace template doesn't allow annotations, labels, etc.
Perhaps we should consider an `extraObjects` template:
```yaml
# extra-objects.yaml
{{ range .Values.extraObjects }}
---
{{ tpl (toYaml .) $ }}
{{ end }}
```

```yaml
# values.yaml
---
# -- Array of extra K8s manifests to deploy
extraObjects: []
# - apiVersion: v1
#   kind: Namespace
#   metadata:
#     annotations:
#       custom/annotation: true
#     labels:
#       custom/label: false
#     name: portworx
```
This seems to be a fair addition, let's do this.
```yaml
{{- if and (ne $journalDevice "none") (or (hasPrefix "/" $journalDevice) (eq "auto" $journalDevice)) }}
journalDevice: {{ $journalDevice }}
{{- end }}
{{- toYaml . | nindent 4 }}
```
Did we test this? The following cases should be covered (see the sketch after this list):
- specifying disks as `/dev/sda`
- a cloud drive spec like `type=gp2,size=150,tag=key`
- not specifying any drive spec
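A hedged sketch of values files for those three cases; the `drives` key name is an assumption for illustration, so substitute the chart's actual drive-spec value:

```yaml
# case-1.yaml — an existing block device (key name assumed)
drives: "/dev/sda"
---
# case-2.yaml — a cloud drive spec (key name assumed)
drives: "type=gp2,size=150,tag=key"
---
# case-3.yaml — no drive spec: omit the key entirely
```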
I only have a limited set of cluster setups to test this on. Is the Portworx team able to test this against the various types of supported clusters? AKS, OpenShift, etc.
Can you also add the following to the PR:
Co-authored-by: Andrii Kezikov <[email protected]>
Hi everyone,
What this PR does / why we need it: Update the helm chart to reflect industry standards, allow more customization, and reduce complexity.
Which issue(s) this PR fixes (optional)
Closes #
Special notes for your reviewer: