
[PERFSCALE-3282] Use slurp rather than lookup to fetch ssh pub key #564

Closed

Conversation

@rsevilla87 (Member) commented Oct 24, 2024

Thanks to this simple fix, we can now have a fully working BM cluster in a remote allocation (when jetlag is not executed from the bastion node). The lookup plugin looks for the file on the Ansible controller rather than on the bastion.

closes: #516
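
For context, a minimal sketch of the kind of change involved (illustrative only; task and variable names here are assumptions, not the actual jetlag source):

```yaml
# Before: lookup('file', ...) always reads the file on the Ansible
# controller, even when the play targets the bastion.
- name: Read SSH public key via lookup (controller-side)
  ansible.builtin.set_fact:
    ssh_pub_key: "{{ lookup('file', ssh_public_key_file) }}"

# After: slurp reads the file on the managed host (the bastion) and
# returns its contents base64-encoded.
- name: Slurp SSH public key from the bastion
  ansible.builtin.slurp:
    src: "{{ ssh_public_key_file }}"
  register: slurped_pub_key

- name: Decode the slurped key
  ansible.builtin.set_fact:
    ssh_pub_key: "{{ slurped_pub_key.content | b64decode }}"
```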

@rsevilla87 added the enhancement label Oct 24, 2024
Signed-off-by: Raul Sevilla <[email protected]>
@rsevilla87 changed the title from "Use slurp rather than lookup to fetch ssh pub key" to "[PERFSCALE-3282] Use slurp rather than lookup to fetch ssh pub key" Oct 24, 2024
@akrzos (Member) commented Oct 25, 2024

> Thanks to this simple fix, we can now have a fully working BM cluster in a remote allocation (when jetlag is not executed from the bastion node). The lookup plugin looks for the file on the Ansible controller rather than on the bastion.
>
> closes: #516

I do not think this actually closes #516 because AIUI the RFE was to deploy from one bastion into an entirely separate cloud - "it should be possible to deploy cloud y from a bastion in cloud x leveraging jetlag." Am I correct here @josecastillolema ?

I'm also curious whether there is a reason we cannot just scp the correct ssh public key down to the host running the jetlag playbooks instead of adjusting the behavior here.

I am concerned that, given the fairly widespread usage of the var ssh_public_key_file, this slight adjustment to one spot where it is consumed means different behavior depending on how the playbooks are run. Should we perhaps adjust all the spots where it is consumed to match this behavior? I would like to stress that the preferred method is always to run jetlag from the bastion machine, but I understand there could be different setups (such as a CI or CPT environment). In the example of running it from your local laptop, where the playbooks ssh to your bastion, the preferred solution is to maintain a copy of the correct ssh keys on your laptop (this is what I do in that case).

@akrzos requested a review from radez October 25, 2024 19:35
@rsevilla87 (Member, Author) commented

Hey @akrzos, answering some of your questions here.

So, without this PR, I've been able to deploy an MNO cluster in an allocation different from the Ansible controller (the host executing Ansible); i.e., I managed to deploy cloud19 from cloud31, using the first node of cloud19 as the bastion. I've discussed this behaviour personally with @josecastillolema, and this approach would be perfect for us, basically because we need to be able to deploy clusters in arbitrary allocations from the same IP address.

The only problem was that the public SSH key exported to the AI was the one from the Ansible controller and not the one from the bastion. Your suggestion of copying the Ansible controller's key to the remote bastion is a perfectly valid solution (even better than mine, because the user will only have to take care of the public SSH key from the host executing jetlag). I'm going to submit those changes.
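
Something along these lines (a minimal sketch of that suggestion, assuming a play targeting the bastion; names are illustrative, not the actual change):

```yaml
# Sketch: push the public key from the Ansible controller to the remote
# bastion before anything on the bastion consumes it. With
# ansible.builtin.copy, src is read on the controller and dest is
# written on the managed host.
- name: Ensure controller SSH public key exists on the bastion
  hosts: bastion
  tasks:
    - name: Copy public key to the bastion
      ansible.builtin.copy:
        src: "{{ ssh_public_key_file }}"
        dest: "{{ ssh_public_key_file }}"
        mode: "0644"
```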

@josecastillolema (Collaborator) commented

Exactly @akrzos, this is for enabling CI/CPT scenarios (including the jetlag CI itself).
Thanks @rsevilla87 for making the requested adjustment.

@akrzos (Member) commented Oct 28, 2024

Hey @rsevilla87 and @josecastillolema, there is still some confusion here. There do not need to be any changes to jetlag. When I mentioned scp'ing the key, I meant that instead of the playbook "force" copying your local key to a managed or remote node (and thus overwriting it), the user/CI operator should copy the desired key down to the controller node beforehand and adjust the var ssh_public_key_file to the location of the local copy of the ssh key. This can be done today without any modification or PR to jetlag, and this is how I run jetlag locally from my laptop when testing PRs, etc. You can do this with your CI/CPT machines as well.

In another example, when I run jetlag from the bastion of a cloud, let's say cloud99, I do not adjust ssh_public_key_file. However, if I run this where my controller node is my laptop, then I first copy the public ssh key file and then set ssh_public_key_file:

[akrzos@fedora cool-project]$ scp [email protected]:/root/.ssh/id_rsa.pub cloud99_id_rsa.pub
id_rsa.pub                                                                     100%  600     6.2KB/s   00:00
[akrzos@fedora cool-project]$

Then, in the vars file:

ssh_public_key_file: /home/akrzos/cool-project/cloud99_id_rsa.pub

Now I can run with my controller node as my laptop instead of as the bastion, and the created cluster will be reachable via the bastion's keys, the same way as if I had run the playbook directly from the bastion.

@rsevilla87
Copy link
Member Author

> This can be done today without any modification or PR to jetlag and this is how I run jetlag locally from my laptop when testing PRs etc. You can do this with your CI/CPT machines as well.
>
> In another example, when I run jetlag from the bastion of a cloud, let's say cloud99, I do not adjust ssh_public_key_file.

OK, sounds reasonable. We can move forward w/o this PR too. Closing.

@rsevilla87 closed this Oct 28, 2024
Linked issue: [PERFSCALE-3282] [RFE] Jetlag L3 deployments