What happened:
Apparently the flags specified in the YAML do not match the flags supported by the binary in the container. I am using the manifest from the main branch, which references the image with the latest tag.
The infra controller is not starting; it shows the error below.
$ kubectl logs kubevirt-csi-controller-85d9ff486d-bwnqj
Defaulted container "csi-driver" out of: csi-driver, csi-provisioner, csi-attacher, csi-liveness-probe, csi-snapshotter
flag provided but not defined: -tenant-cluster-kubeconfig
Usage of ./kubevirt-csi-driver:
-alsologtostderr
log to standard error as well as files
-endpoint string
CSI endpoint (default "unix:/csi/csi.sock")
-infra-cluster-kubeconfig string
the infra-cluster kubeconfig file
-infra-cluster-labels string
The infra-cluster labels to use when creating resources in infra cluster. 'name=value' fields separated by a comma
-infra-cluster-namespace string
The infra-cluster namespace
-log_backtrace_at value
when logging hits line file:N, emit a stack trace
-log_dir string
If non-empty, write log files in this directory
-logtostderr
log to standard error instead of files
-namespace string
Namespace to run the controllers on
-node-name string
The node name - the node this pods runs on
-stderrthreshold value
logs at or above this threshold go to stderr
-v value
log level for V logs
-vmodule value
comma-separated list of pattern=N settings for file-filtered logging
In the workload cluster, we have the same issue with a flag in the DaemonSet.
$ kubectl logs -n kubevirt-csi-driver kubevirt-csi-node-74xjj
Defaulted container "csi-driver" out of: csi-driver, csi-node-driver-registrar, csi-liveness-probe
flag provided but not defined: -run-node-service
Usage of ./kubevirt-csi-driver:
-alsologtostderr
log to standard error as well as files
-endpoint string
CSI endpoint (default "unix:/csi/csi.sock")
-infra-cluster-kubeconfig string
the infra-cluster kubeconfig file
-infra-cluster-labels string
The infra-cluster labels to use when creating resources in infra cluster. 'name=value' fields separated by a comma
-infra-cluster-namespace string
The infra-cluster namespace
-log_backtrace_at value
when logging hits line file:N, emit a stack trace
-log_dir string
If non-empty, write log files in this directory
-logtostderr
log to standard error instead of files
-namespace string
Namespace to run the controllers on
-node-name string
The node name - the node this pods runs on
-stderrthreshold value
logs at or above this threshold go to stderr
-v value
log level for V logs
-vmodule value
comma-separated list of pattern=N settings for file-filtered logging
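To confirm the mismatch, the args that the manifests pass can be compared against the usage output above. A minimal check, assuming the Deployment and DaemonSet are named after the pods shown in the logs:
# args passed to the csi-driver container by the controller Deployment (name assumed from the pod name above)
$ kubectl get deployment kubevirt-csi-controller -o jsonpath='{.spec.template.spec.containers[?(@.name=="csi-driver")].args}'
# args passed to the csi-driver container by the node DaemonSet (name/namespace assumed from the log command above)
$ kubectl get daemonset -n kubevirt-csi-driver kubevirt-csi-node -o jsonpath='{.spec.template.spec.containers[?(@.name=="csi-driver")].args}'
If the output lists --tenant-cluster-kubeconfig or --run-node-service while the usage output above does not, the manifest and the image are out of sync.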
How to reproduce it (as minimally and precisely as possible):
I'm using the main branch to deploy the infra-controller.
Additional context:
Environment:
KubeVirt version (use virtctl version): 1.3.0
Kubernetes version (use kubectl version): v1.28.10+rke2r1
Cloud provider or hardware configuration: KubeVirt
OS (e.g. from /etc/os-release): Ubuntu 22.04 - quay.io/capk/ubuntu-2204-container-disk:v1.28.10
We don't have a proper release process yet, so we can't point you at the right containers to use.
The containers in the manifest are 3 years old. They should point at quay.io/kubevirt/kubevirt-csi-driver:latest instead of quay.io/kubevirt/csi-driver:latest. The images in kubevirt-csi-driver are only 2 weeks old and should include the new parameters.
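As a temporary workaround, the image reference can be swapped in place. This is only a sketch, assuming the Deployment and DaemonSet names match the pods in the logs above:
# controller Deployment in the infra cluster (name assumed from the pod name in the logs)
$ kubectl set image deployment/kubevirt-csi-controller csi-driver=quay.io/kubevirt/kubevirt-csi-driver:latest
# node DaemonSet in the workload cluster (name/namespace assumed from the log command)
$ kubectl set image -n kubevirt-csi-driver daemonset/kubevirt-csi-node csi-driver=quay.io/kubevirt/kubevirt-csi-driver:latest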
I updated the latest container image to match main a few days ago. I am currently swamped with other tasks, so I have no ETA for a proper release process.