
Portworx: operator helm chart #481

Open · wants to merge 7 commits into master
Conversation

@cabrinha commented May 4, 2023

What this PR does / why we need it: Update the helm chart to reflect industry standards, allow more customization, and reduce complexity.

Which issue(s) this PR fixes (optional)
Closes #

Special notes for your reviewer:

@sharma-tapas (Contributor) left a comment:

This PR covers a lot of changes; many tests have to be completed before it can be merged:

  • DS to Operator migration
  • installing / uninstalling in a custom namespace
  • passing in different drive specs and checking whether the install works
  • passing a custom registry for all images and for individual images being used
  • spec generated with parameters that are either null / none

If you can cover the above and provide the report with the PR, that would help. Also, run helm lint to catch formatting issues and residual git merge markers.

```yaml
command: ['/bin/sh',
'-c',
'kubectl -n {{ template "px.getDeploymentNamespace" . }} delete storagecluster {{ $clusterName }} --ignore-not-found']
'kubectl -n kube-system delete storagecluster {{ .Values.clusterName }} --ignore-not-found']
```
Contributor:
`kube-system` should be `{{ template "px.getDeploymentNamespace" . }}`
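
A sketch of the line as the reviewer suggests it, i.e. the hard-coded variant above rewritten to use the namespace helper (illustrative only, not a line from the PR):

```yaml
command: ['/bin/sh',
          '-c',
          'kubectl -n {{ template "px.getDeploymentNamespace" . }} delete storagecluster {{ .Values.clusterName }} --ignore-not-found']
```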

@cabrinha (Author) commented May 10, 2023:

Shouldn't it be {{ .Release.Namespace }} instead? Let's take a look at this function:

```yaml
{{- define "px.getDeploymentNamespace" -}}
## This value will never be empty, no need to check `if` it exists
{{- if (.Release.Namespace) -}}
    # If .Release.Namespace is "default", this evaluates to "kube-system".
    # These are two different namespaces, so I don't understand why, if the chart
    # is installed to the "default" namespace, you'd be operating on resources
    # in the "kube-system" namespace.
    {{- if (eq "default" .Release.Namespace) -}}
        {{- printf "kube-system" -}}
    {{- else -}}
        {{- printf "%s" .Release.Namespace -}}
    {{- end -}}
{{- end -}}
{{- end -}}
```
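
For contrast, the simplification this comment seems to argue for would collapse the helper down to the release namespace. A minimal sketch, not code from the PR:

```yaml
{{- define "px.getDeploymentNamespace" -}}
{{- /* Always honor the namespace the release is installed into; no kube-system fallback. */ -}}
{{- printf "%s" .Release.Namespace -}}
{{- end -}}
```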

Contributor:
Historically we install in the kube-system namespace as our default, and we want to keep the same behavior. So, if the user is not passing in a namespace, that means they want to install in Portworx's default namespace, i.e. kube-system.

@cabrinha (Author) commented May 11, 2023:

In order to install a helm chart, you have to either specify a namespace or have one selected in your kube-config context. Also, what happens if I want to move namespaces from test to production? We should be thinking of cases where Portworx was never installed to kube-system.

This is probably a good example of why the operator should be handling object discovery and deletion/preservation instead of a Job that runs kubectl commands.

Contributor:
This chart also helps in migrating from DS-based installs to Operator-managed ones; the operator handles the object discovery and uses the same namespace in which the DS was installed. What if the user does not have a namespace context set? Things would always end up in the default namespace.

```diff
@@ -1,8 +0,0 @@
-{{- $secretType := .Values.secretType | default "k8s" }}
```
Contributor:
Why did we delete this?

@cabrinha (Author) commented May 10, 2023:
> Why did we delete this?

We don't need to turn every helm value into a variable. This same thing can be expressed like so:

```yaml
---
# values.yaml
secretType: k8s

---
# storageClass.yaml
{{ .Values.secretType }}
```

I don't understand why $secretType needs to be defined as a variable when it can have a default value set in the values.yaml file and be called using basic helm syntax in any template file.
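
For reference, if a fallback is still wanted without relying on a values.yaml entry, Helm's default function can also be applied inline at the point of use; a sketch (the surrounding key is illustrative):

```yaml
# storageClass.yaml -- inline default, no intermediate variable needed
secretType: {{ .Values.secretType | default "k8s" }}
```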

Contributor:
But if you check, here we are creating the Portworx namespace that should host the secret the user wants to use. If we don't do it here, then we end up having to document that the user must:

  1. create a namespace called portworx
  2. then create your secret that you want us to use in that namespace.

Maybe a better name for this variable would be createSecretNamespace; if that is set to true, or we detect the secret type as k8s, we create the namespace (see the sketch below).
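
A minimal sketch of that gate, assuming the hypothetical createSecretNamespace value proposed above (neither the value name nor this exact template exists in the chart):

```yaml
{{- if or .Values.createSecretNamespace (eq (.Values.secretType | default "k8s") "k8s") }}
apiVersion: v1
kind: Namespace
metadata:
  name: portworx   # the namespace that hosts the user's secret, per the comment above
{{- end }}
```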

@cabrinha (Author) commented May 11, 2023:

Wouldn't we want the secret created in whatever namespace the chart is being installed to?

i.e.: `{{ .Release.Namespace }}`

What happens if I install the chart to test namespace or kube-system namespace?

Also, I do see the chart has a Namespace template, but I think helm charts should not be creating namespaces. Users should be doing that and the current Namespace template doesn't allow annotations, labels, etc.

Perhaps we should consider an extraObjects template:

```yaml
# extra-objects.yaml
{{ range .Values.extraObjects }}
---
{{ tpl (toYaml .) $ }}
{{ end }}
```

```yaml
# values.yaml
---
# -- Array of extra K8s manifests to deploy
extraObjects: []
# - apiVersion: v1
#   kind: Namespace
#   metadata:
#     annotations:
#       custom/annotation: true
#     labels:
#       custom/label: false
#     name: portworx
```
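
Because each manifest is run through tpl, entries in values can even reference Helm built-ins. An illustrative user values file (not part of this PR):

```yaml
extraObjects:
  - apiVersion: v1
    kind: Namespace
    metadata:
      name: portworx
      labels:
        app.kubernetes.io/managed-by: "{{ $.Release.Service }}"  # rendered by tpl
```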

Contributor:
This seems to be a fair addition, let's do this.

```yaml
{{- if and (ne $journalDevice "none") (or (hasPrefix "/" $journalDevice) (eq "auto" $journalDevice)) }}
journalDevice: {{ $journalDevice }}
{{- end }}
{{- toYaml . | nindent 4 }}
```
Contributor:
Did we test this? The following cases should be covered (illustrative values are sketched after this list):

  • specifying disks as /dev/sda
  • with type=gp2,size=150,tag=key
  • not specifying any drive spec
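
Illustrative values for those three cases; the drives key and its format are assumptions about this chart's values, not something verified against the PR:

```yaml
# Case 1: specifying a raw block device
drives:
  - "/dev/sda"

# Case 2: a cloud drive spec (uncomment to test instead of case 1)
# drives:
#   - "type=gp2,size=150,tag=key"

# Case 3: no drive spec at all (uncomment to test instead)
# drives: []
```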

@cabrinha (Author):

I only have a limited set of cluster setups to test this on. Is the Portworx team able to test this against the various types of supported clusters (AKS, OpenShift, etc.)?

@sharma-tapas (Contributor) commented:

Can you also add the following to the PR: the helm package output and the helm repo index, created in the repo/beta directory. Steps:

```sh
cd helm/charts/portworx
helm package .   # note the . at the end
mkdir -p helm/repo/beta
mv helm/charts/portworx/portworx-2.13.3.tgz helm/repo/beta
cd helm/repo/beta
helm repo index . --url https://raw.githubusercontent.com/portworx/helm/<your_branch_name>/repo/stable
```

Add the resultant files to the PR; that way we can consume this as a helm chart for testing.

Scott Cabrinha and others added 2 commits May 17, 2023 08:01
@quercus-carsten commented:
Hi everyone, sorry to burst into this: I just wanted to encourage you to go ahead with this. I found this PR while in the process of using kustomize to achieve some of the core changes you already have built in here. These changes would definitely be much appreciated. For instance, I need to get rid of namespace creation (which is easy), but the namespace is also hard-coded, e.g. in a batch job, which needs an ugly replacement procedure.
Anyway, this definitely makes it harder and more time-consuming to implement a POC in my/our case. Please make this PR happen if you can :)
