chore(bors): merge pull request #457
457: Helm Variables - Cherry-pick PR456 r=tiagolobocastro a=tiagolobocastro


Co-authored-by: Tiago Castro <[email protected]>
mayastor-bors and tiagolobocastro committed Mar 26, 2024
2 parents f3be001 + a361371 commit 0da7271
Showing 5 changed files with 24 additions and 10 deletions.
2 changes: 1 addition & 1 deletion chart/README.md

```diff
@@ -157,7 +157,7 @@ This removes all the Kubernetes components associated with the chart and deletes
 | io_engine.&ZeroWidthSpace;envcontext | Pass additional arguments to the Environment Abstraction Layer. Example: --set {product}.envcontext=iova-mode=pa | `""` |
 | io_engine.&ZeroWidthSpace;logLevel | Log level for the io-engine service | `"info"` |
 | io_engine.&ZeroWidthSpace;nodeSelector | Node selectors to designate storage nodes for diskpool creation Note that if multi-arch images support 'kubernetes.io/arch: amd64' should be removed. | <pre>{<br>"kubernetes.io/arch":"amd64",<br>"openebs.io/engine":"mayastor"<br>}</pre> |
-| io_engine.&ZeroWidthSpace;nvme.&ZeroWidthSpace;ioTimeout | Timeout for IOs The default here is exaggerated for local disks but we've observed that in shared virtual environments having a higher timeout value is beneficial. In certain cases, you may have to set this to an even higher value. For example, in Hetzner we've had better results setting it to 300s. Please adjust this according to your hardware and needs. | `"110s"` |
+| io_engine.&ZeroWidthSpace;nvme.&ZeroWidthSpace;ioTimeout | Timeout for IOs The default here is exaggerated for local disks, but we've observed that in shared virtual environments having a higher timeout value is beneficial. Please adjust this according to your hardware and needs. | `"110s"` |
 | io_engine.&ZeroWidthSpace;nvme.&ZeroWidthSpace;tcp.&ZeroWidthSpace;maxQueueDepth | You may need to increase this for a higher outstanding IOs per volume | `"32"` |
 | io_engine.&ZeroWidthSpace;priorityClassName | Set PriorityClass, overrides global | `""` |
 | io_engine.&ZeroWidthSpace;resources.&ZeroWidthSpace;limits.&ZeroWidthSpace;cpu | Cpu limits for the io-engine | `""` |
```
```diff
@@ -61,6 +61,8 @@ spec:
 - "--disable-ha"{{ end }}
 - "--fmt-style={{ include "logFormat" . }}"
 - "--ansi-colors={{ .Values.base.logging.color }}"
+- "--create-volume-limit={{ .Values.agents.core.maxCreateVolume }}"{{ if .Values.agents.core.maxRebuilds }}
+- "--max-rebuilds={{ .Values.agents.core.maxRebuilds }}"{{ end }}
 ports:
 - containerPort: 50051
 env:
```
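The two arguments added above are fed by new chart values under `agents.core`. A minimal override sketch (the file name and the numbers below are illustrative assumptions, not recommendations from the PR):

```yaml
# values-override.yaml -- illustrative only
agents:
  core:
    # Cap on concurrent create-volume requests handled by the core agent.
    maxCreateVolume: 10
    # System-wide cap on concurrent rebuilds; per the template above, when this
    # is left as "" the "--max-rebuilds" argument is not rendered at all.
    maxRebuilds: "4"
```

Applied with something like `helm upgrade <release> <chart> -f values-override.yaml`.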
3 changes: 2 additions & 1 deletion chart/templates/mayastor/csi/csi-controller-deployment.yaml

```diff
@@ -48,7 +48,7 @@ spec:
 - "--default-fstype=ext4"
 - "--extra-create-metadata" # This is needed for volume group feature to work
 - "--timeout=36s"
-- "--worker-threads=10" # 10 for create and 10 for delete
+- "--worker-threads={{ .Values.csi.controller.maxCreateVolume }}" # 10 for create and 10 for delete
 {{- if default .Values.csi.controller.preventVolumeModeConversion }}
 - "--prevent-volume-mode-conversion"
 {{- end }}
@@ -123,6 +123,7 @@ spec:
 {{- end }}
 - "--ansi-colors={{ .Values.base.logging.color }}"
 - "--fmt-style={{ include "logFormat" . }}"
+- "--create-volume-limit={{ .Values.csi.controller.maxCreateVolume }}"
 env:
 - name: RUST_LOG
   value: {{ .Values.csi.controller.logLevel }}
```
2 changes: 2 additions & 0 deletions chart/templates/mayastor/io/io-engine-daemonset.yaml

```diff
@@ -64,6 +64,8 @@ spec:
 env:
 - name: RUST_LOG
   value: {{ .Values.io_engine.logLevel }}
+- name: NVMF_TCP_MAX_QPAIRS_PER_CTRL
+  value: "{{ .Values.io_engine.nvme.tcp.maxQpairsPerCtrl }}"
 - name: NVMF_TCP_MAX_QUEUE_DEPTH
   value: "{{ .Values.io_engine.nvme.tcp.maxQueueDepth }}"
 - name: NVME_TIMEOUT
```
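The new `NVMF_TCP_MAX_QPAIRS_PER_CTRL` environment variable is rendered from `io_engine.nvme.tcp.maxQpairsPerCtrl`. A sketch of raising both TCP knobs together in a values override (the numbers are illustrative assumptions, not tuning advice):

```yaml
io_engine:
  nvme:
    tcp:
      # Upper bound on NVMe-oF TCP queue pairs per controller (new in this PR).
      maxQpairsPerCtrl: "64"
      # Max queue depth per NVMe queue; often raised alongside qpairs.
      maxQueueDepth: "64"
```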
25 changes: 17 additions & 8 deletions chart/values.yaml

```diff
@@ -178,6 +178,17 @@ agents:
 # Example: if this value is 40, the pool has 40GiB free, then the max volume size allowed
 # to be snapped on the pool is 100GiB.
 snapshotCommitment: "40%"
+# -- If a faulted replica comes back online within this time period then it will be
+# rebuilt using the partial rebuild capability (using a log of missed IO), hence a bit
+# faster depending on the log size. Otherwise, the replica will be fully rebuilt.
+# A blank value "" means internally derived value will be used.
+partialRebuildWaitPeriod: ""
+# The maximum number of system-wide rebuilds permitted at any given time.
+# If set to an empty string, there are no limits.
+maxRebuilds: ""
+# The maximum number of concurrent create volume requests.
+maxCreateVolume: 10
+
 resources:
   limits:
     # -- Cpu limits for core agents
@@ -189,11 +200,6 @@ agents:
     cpu: "500m"
     # -- Memory requests for core agents
     memory: "32Mi"
-# -- If a faulted replica comes back online within this time period then it will be
-# rebuilt using the partial rebuild capability (using a log of missed IO), hence a bit
-# faster depending on the log size. Otherwise, the replica will be fully rebuilt.
-# A blank value "" means internally derived value will be used.
-partialRebuildWaitPeriod: ""
 # -- Set tolerations, overrides global
 tolerations: []
 # -- Set PriorityClass, overrides global.
@@ -293,6 +299,8 @@ csi:
 controller:
   # -- Log level for the csi controller
   logLevel: info
+  # The maximum number of concurrent create volume requests.
+  maxCreateVolume: 10
   resources:
     limits:
       # -- Cpu limits for csi controller
@@ -368,10 +376,8 @@ io_engine:
 crdt1: 30
 nvme:
 # -- Timeout for IOs
-# The default here is exaggerated for local disks but we've observed that in
+# The default here is exaggerated for local disks, but we've observed that in
 # shared virtual environments having a higher timeout value is beneficial.
-# In certain cases, you may have to set this to an even higher value. For example,
-# in Hetzner we've had better results setting it to 300s.
 # Please adjust this according to your hardware and needs.
 ioTimeout: "110s"
 # Timeout for admin commands
@@ -382,6 +388,9 @@ io_engine:
 # -- Max size setting (both initiator and target) for an NVMe queue
 # -- You may need to increase this for a higher outstanding IOs per volume
 maxQueueDepth: "32"
+# Max qpairs per controller.
+maxQpairsPerCtrl: "32"
+
 
 # -- Pass additional arguments to the Environment Abstraction Layer.
 # Example: --set {product}.envcontext=iova-mode=pa
```
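Taken together, the PR exposes four new values plus the relocated `partialRebuildWaitPeriod`. A combined override sketch mirroring the defaults shown in the diff (a reference summary, not tuning advice):

```yaml
agents:
  core:
    partialRebuildWaitPeriod: ""  # "" -> internally derived wait period
    maxRebuilds: ""               # "" -> no system-wide rebuild limit
    maxCreateVolume: 10           # concurrent create-volume requests (core agent)
csi:
  controller:
    maxCreateVolume: 10           # also sets the csi-provisioner "--worker-threads"
io_engine:
  nvme:
    tcp:
      maxQueueDepth: "32"         # max size of an NVMe queue
      maxQpairsPerCtrl: "32"      # max qpairs per controller (new)
```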
