[PERFSCALE-3282] Use slurp rather than lookup to fetch ssh pub key #564
Conversation
Signed-off-by: Raul Sevilla <[email protected]>
Force-pushed from 274f71e to f567ba4
I do not think this actually closes #516 because, AIUI, the RFE was to deploy from one bastion into an entirely separate cloud: "it should be possible to deploy cloud y from a bastion in cloud x leveraging jetlag." Am I correct here, @josecastillolema? I'm also curious whether there is a reason we can not just scp the correct ssh public key down to the host running the jetlag playbooks instead of adjusting the behavior here. I am concerned that with the fairly widespread usage of the var
Hey @akrzos, answering some of your doubts here. With this PR I've been able to deploy an MNO cluster in an allocation different from the Ansible controller (the host executing Ansible), i.e. I managed to deploy cloud19 from cloud31, using the first node of cloud19 as the bastion. I've discussed this behaviour personally with @josecastillolema, and this approach would be perfect for us, basically because we need to be able to deploy clusters in arbitrary allocations from the same IP address. The only problem with this PR was that the public SSH key exported to the AI was the one from the Ansible controller rather than the one from the bastion. Your suggestion of copying the Ansible controller's key to the remote bastion is a perfectly valid solution (even better than mine, because the user will only have to take care of the public SSH key on the host executing jetlag). I'm going to submit those changes.
Exactly @akrzos, this is for enabling CI/CPT scenarios (including the jetlag CI itself).
Signed-off-by: Raul Sevilla <[email protected]>
Hey @rsevilla87 and @josecastillolema, there is still some confusion here. There does not need to be any change to jetlag. When I mentioned to scp the key, I meant that instead of the playbook "force" copying your local key to a managed or remote node (and thus overwriting it), the user/CI operator should copy the desired key down to the controller node beforehand and adjust the var accordingly.

In another example, when I run jetlag from the bastion of a cloud, let's say cloud99, I do not adjust the var. To run from my laptop instead, I first copy the bastion's public key down:

```console
[akrzos@fedora cool-project]$ scp [email protected]:/root/.ssh/id_rsa.pub cloud99_id_rsa.pub
id_rsa.pub                                    100%  600     6.2KB/s   00:00
```

and point the var at it:

```yaml
ssh_public_key_file: /home/akrzos/cool-project/cloud99_id_rsa.pub
```

Now I can run with my laptop as the controller node instead of the bastion, and the created cluster will be reachable via the bastion keys the same way as if I had run the playbook directly from the bastion.
OK, sounds reasonable. We can move forward without this PR too. Closing.
Thanks to this simple fix, we can now have a fully working BM cluster in a remote allocation (when jetlag is not executed from the bastion node). The lookup plugin looks for the file on the Ansible controller rather than on the bastion.
closes: #516
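
The difference the PR title refers to can be sketched as follows; the task and variable names here are illustrative, not copied from the jetlag playbooks. `lookup('file', ...)` always executes on the Ansible controller, while the `slurp` module executes on the managed host (the bastion), so it returns the bastion's key:

```yaml
# lookup() runs on the Ansible controller, so this reads the
# controller's public key even when the play targets the bastion.
- name: Read pubkey via lookup (controller-side)
  ansible.builtin.set_fact:
    ssh_pub_key: "{{ lookup('file', ssh_public_key_file) }}"

# slurp runs on the managed host, so this reads the bastion's
# public key; the module returns the content base64-encoded.
- name: Read pubkey via slurp (bastion-side)
  ansible.builtin.slurp:
    src: "{{ ssh_public_key_file }}"
  register: pubkey_raw

- name: Decode the slurped key
  ansible.builtin.set_fact:
    ssh_pub_key: "{{ pubkey_raw.content | b64decode }}"
```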