flag provided but not defined: -tenant-cluster-kubeconfig #117

Open
gcezaralmeida opened this issue Aug 2, 2024 · 4 comments
@gcezaralmeida

What happened:
Apparently the flags specified in the YAML don't match what the container binary accepts. I'm using the manifest from the main branch, which uses the image with the latest tag.

The infra controller is not starting; it shows the error below.
$ kubectl logs kubevirt-csi-controller-85d9ff486d-bwnqj

Defaulted container "csi-driver" out of: csi-driver, csi-provisioner, csi-attacher, csi-liveness-probe, csi-snapshotter
flag provided but not defined: -tenant-cluster-kubeconfig
Usage of ./kubevirt-csi-driver:
  -alsologtostderr
        log to standard error as well as files
  -endpoint string
        CSI endpoint (default "unix:/csi/csi.sock")
  -infra-cluster-kubeconfig string
        the infra-cluster kubeconfig file
  -infra-cluster-labels string
        The infra-cluster labels to use when creating resources in infra cluster. 'name=value' fields separated by a comma
  -infra-cluster-namespace string
        The infra-cluster namespace
  -log_backtrace_at value
        when logging hits line file:N, emit a stack trace
  -log_dir string
        If non-empty, write log files in this directory
  -logtostderr
        log to standard error instead of files
  -namespace string
        Namespace to run the controllers on
  -node-name string
        The node name - the node this pods runs on
  -stderrthreshold value
        logs at or above this threshold go to stderr
  -v value
        log level for V logs
  -vmodule value
        comma-separated list of pattern=N settings for file-filtered logging

In the workload (tenant) cluster, we have the same issue with a flag on the DaemonSet (a quick way to inspect the args the manifests pass is sketched after the log below).

$ kubectl logs -n kubevirt-csi-driver kubevirt-csi-node-74xjj
Defaulted container "csi-driver" out of: csi-driver, csi-node-driver-registrar, csi-liveness-probe
flag provided but not defined: -run-node-service
Usage of ./kubevirt-csi-driver:
  -alsologtostderr
        log to standard error as well as files
  -endpoint string
        CSI endpoint (default "unix:/csi/csi.sock")
  -infra-cluster-kubeconfig string
        the infra-cluster kubeconfig file
  -infra-cluster-labels string
        The infra-cluster labels to use when creating resources in infra cluster. 'name=value' fields separated by a comma
  -infra-cluster-namespace string
        The infra-cluster namespace
  -log_backtrace_at value
        when logging hits line file:N, emit a stack trace
  -log_dir string
        If non-empty, write log files in this directory
  -logtostderr
        log to standard error instead of files
  -namespace string
        Namespace to run the controllers on
  -node-name string
        The node name - the node this pods runs on
  -stderrthreshold value
        logs at or above this threshold go to stderr
  -v value
        log level for V logs
  -vmodule value
        comma-separated list of pattern=N settings for file-filtered logging
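For reference, one way to check which arguments the deployed manifests actually pass to the csi-driver container (a rough sketch; the Deployment/DaemonSet names and the kubevirt-csi-driver namespace are inferred from the pod names above and may differ in other setups):

$ kubectl get deployment kubevirt-csi-controller -o jsonpath='{.spec.template.spec.containers[?(@.name=="csi-driver")].args}'
$ kubectl -n kubevirt-csi-driver get daemonset kubevirt-csi-node -o jsonpath='{.spec.template.spec.containers[?(@.name=="csi-driver")].args}'

If those args include -tenant-cluster-kubeconfig or -run-node-service while the usage output above does not list them, the manifest and the image are out of sync.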

How to reproduce it (as minimally and precisely as possible):
I'm using the main branch to deploy the infra-controller.


Environment:

  • KubeVirt version (use virtctl version): 1.3.0
  • Kubernetes version (use kubectl version): v1.28.10+rke2r1
  • Cloud provider or hardware configuration: Kubevirt
  • OS (e.g. from /etc/os-release): Ubuntu 22.04 - quay.io/capk/ubuntu-2204-container-disk:v1.28.10
@awels
Member

awels commented Aug 2, 2024

So there are two problems:

  1. We don't have a proper release process yet, so we can't point you at the proper containers to use.
  2. The containers in the manifest are 3 years old. They should point at quay.io/kubevirt/kubevirt-csi-driver:latest instead of quay.io/kubevirt/csi-driver:latest. The images in kubevirt-csi-driver are only 2 weeks old and should have the parameters.
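One way to try the newer image in place (a sketch, not an official procedure; the workload names and the kubevirt-csi-driver namespace are taken from the logs earlier in this issue):

$ kubectl set image deployment/kubevirt-csi-controller csi-driver=quay.io/kubevirt/kubevirt-csi-driver:latest
$ kubectl -n kubevirt-csi-driver set image daemonset/kubevirt-csi-node csi-driver=quay.io/kubevirt/kubevirt-csi-driver:latest

Alternatively, update the image field in the deploy manifests and re-apply them.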

@gcezaralmeida
Author

Thanks Alexander. I will test it.

@lukasmrtvy

@awels and when can we expect the release process? Thanks

@awels
Member

awels commented Sep 24, 2024

I updated the latest container to match main a few days ago. I am currently swamped with other tasks, so I have no ETA for a proper release process.
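Since :latest is a moving tag, pods that were already running it may need a restart to pull the rebuilt image (assuming imagePullPolicy: Always; names as in the logs above):

$ kubectl rollout restart deployment/kubevirt-csi-controller
$ kubectl -n kubevirt-csi-driver rollout restart daemonset/kubevirt-csi-node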
