
tests: kubectl misconfiguration after run-local.sh #328

Open
tylerfanelli opened this issue Jan 17, 2024 · 5 comments
@tylerfanelli (Contributor) commented Jan 17, 2024

Describe the bug
Once tests/e2e/run-local.sh is run, the next step in the quickstart guide is to deploy the operator. This fails immediately with
The connection to the server localhost:8080 was refused - did you specify the right host or port?

To Reproduce

$ git clone https://github.com/confidential-containers/operator.git
$ cd operator/tests/e2e
$ ./run-local.sh -r kata-qemu-snp

$ docker images

REPOSITORY                   TAG                    IMAGE ID       CREATED         SIZE
localhost:5000/cc-operator   latest                 48f322b96469   39 hours ago    54.2MB

$ kubectl apply -k github.com/confidential-containers/operator/config/release?ref=v0.8.0
The connection to the server localhost:8080 was refused - did you specify the right host or port?
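For context: when kubectl cannot find a kubeconfig, it falls back to its default server address of localhost:8080, which is exactly what this error indicates. A quick way to inspect what configuration kubectl currently sees:

$ kubectl config view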

Is there extra kubectl configuration that needs to be done?

@bpradipt @wainersm

@bpradipt (Member) commented:

Hmm, I have used run-local.sh only for tests. However, looking at the code, I suspect an extra config step is needed if the cluster deployed by run-local.sh is to be used afterwards: the script exports KUBECONFIG only within its own shell, so that setting is not inherited by your interactive shell.
See the following line - https://github.com/confidential-containers/operator/blob/main/tests/e2e/run-local.sh#L95

export KUBECONFIG=/etc/kubernetes/admin.conf

You can try setting this explicitly and running a basic kubectl command to verify:

kubectl get nodes

If you hit any permissions issues, you can copy the kubeconfig to $HOME/.kube/config and change its owner before running kubectl, as shown below.
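For reference, those copy-and-chown steps are the standard kubeadm post-install ones (they also appear later in this thread):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config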

@wainersm I think you are an active user of run-local.sh :-). Any insights?

@bpradipt (Member) commented Jan 18, 2024

I spun up an env with run-local.sh; you can use either of the following approaches to work with the cluster.
Note that when using run-local.sh, you don't need to install the operator again: run-local.sh already sets up everything based on the latest code.
I'll create a PR to make this explicit in the README.

sudo kubectl --kubeconfig=/etc/kubernetes/admin.conf <cmds>

or

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Then you can run kubectl as a regular user

kubectl <cmds>

Complete examples:

$ sudo kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes
NAME       STATUS   ROLES           AGE   VERSION
fedora39   Ready    control-plane   19m   v1.24.0

$ sudo kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -A
NAMESPACE                        NAME                                             READY   STATUS    RESTARTS        AGE
confidential-containers-system   cc-operator-controller-manager-ccbbcfdf7-h9j4n   2/2     Running   0               8m17s
confidential-containers-system   cc-operator-daemon-install-psqpd                 1/1     Running   2 (7m54s ago)   7m59s
confidential-containers-system   cc-operator-pre-install-daemon-c6rvl             1/1     Running   0               8m4s
kube-flannel                     kube-flannel-ds-fz495                            1/1     Running   0               18m
kube-system                      coredns-6d4b75cb6d-chsqz                         1/1     Running   0               18m
kube-system                      coredns-6d4b75cb6d-hnzqb                         1/1     Running   0               18m
kube-system                      etcd-fedora39                                    1/1     Running   0               19m
kube-system                      kube-apiserver-fedora39                          1/1     Running   0               19m
kube-system                      kube-controller-manager-fedora39                 1/1     Running   0               19m
kube-system                      kube-proxy-l627b                                 1/1     Running   0               18m
kube-system                      kube-scheduler-fedora39                          1/1     Running   0               19m

or after copying the kubeconfig file to $HOME/.kube/config

$ kubectl get nodes
NAME       STATUS   ROLES           AGE   VERSION
fedora39   Ready    control-plane   19m   v1.24.0
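Since run-local.sh already installs the operator, a quick sanity check (using the namespace visible in the pod listing above) is to query it directly:

$ kubectl get pods -n confidential-containers-system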

@bpradipt (Member) commented:

@tylerfanelli for now you can use the following to deploy just the cluster using the helper scripts in the operator repo.

Assuming you are in "$HOME/operator/tests/e2e", running the following will set up the cluster (the PATH export ensures tools installed under /usr/local/bin remain visible when running under sudo):

ansible-playbook -i localhost, -c local --tags untagged ansible/main.yml
export "PATH=$PATH:/usr/local/bin"
sudo -E PATH="$PATH" bash -c './cluster/up.sh'

On successful cluster setup, you'll see the instructions to set up kubeconfig, i.e.:

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
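After copying the kubeconfig, the same basic check from earlier in this thread should confirm the cluster is reachable:

$ kubectl get nodes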

There is a good explanation of run-local.sh and kubeconfig setup in the operator development guide.

@fitzthum @wainersm, for subsequent releases, I think it would be good to clarify the usage of run-local.sh or remove it altogether from the quickstart to avoid confusion.

@fitzthum (Member) commented:

Yeah maybe we should add a note about

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

to the quickstart guide. I think we mention it somewhere else.

@ldoktor (Contributor) commented Jan 19, 2024

@fitzthum Patches are welcome; I have kcli-related recommendations pending as well.
