[ENH] add code to make all hosts known to each other to avoid issues at deployment time #55
Comments
Is this still an issue?
I still think so.
I wonder what this should look like. Perhaps a role that can optionally be included, which sets up /etc/hosts entries for each of the nodes? A sketch of that idea is below.
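As a rough illustration of that idea (nothing like this exists in the collection yet), the task file of such an opt-in role might look like the following; the role name `hosts_entries` and the assumption that `ansible_host` is set for every inventory host are mine:

```yaml
# roles/hosts_entries/tasks/main.yml  (hypothetical role name and layout)
# Add an /etc/hosts entry for every inventory host so that all nodes
# can resolve each other by inventory name before setup.sh runs.
- name: Ensure every node has an /etc/hosts entry for all other nodes
  ansible.builtin.lineinfile:
    path: /etc/hosts
    regexp: '^{{ hostvars[item].ansible_host }}\s'
    line: "{{ hostvars[item].ansible_host }} {{ item }}"
    state: present
  loop: "{{ groups['all'] }}"
  when: hostvars[item].ansible_host is defined
```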
I wonder if we should just add the option to ignore host key checking to ansible.cfg (example below).
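For reference, that would just be the standard setting shown here; whether we actually want to recommend disabling host key checking is a separate question:

```ini
[defaults]
# Skip SSH host key verification for all connections made by Ansible.
host_key_checking = False
```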
As a workaround, I used the below as part of my preflight tasks.

```yaml
- name: Create 'aap_install_user' for installer to use
  ansible.builtin.user:
    name: "{{ aap_install_user }}"
    comment: "{{ aap_install_user }} orchestrator user"
    home: "/home/{{ aap_install_user }}"
    groups: "wheel"
    password: "{{ aap_install_user_password }}"

- name: Get the aap_install_user's password expiry
  ansible.builtin.shell: >-
    set -o pipefail &&
    chage -l {{ aap_install_user }} | sed -n "2p" | sed "s/.*: //g"
  when: not ansible_check_mode
  register: aap_install_user_expiry
  changed_when: no

- name: Set the aap_install_user password to never expire
  ansible.builtin.command: "chage -M -1 {{ aap_install_user }}"
  when: aap_install_user_expiry.stdout != "never"

- name: Allow passwordless sudo for {{ aap_install_user }}
  ansible.builtin.template:
    src: install_user_sudoers_file.j2
    dest: "/etc/sudoers.d/{{ aap_install_user }}"
    mode: "600"
    owner: root
    group: root

- name: Grab ssh host_key from all nodes
  ansible.builtin.slurp:
    src: /etc/ssh/ssh_host_ecdsa_key.pub
  register: ssh_host_key

- name: Do stuff on the orchestrator_node
  when: orchestrator_node is defined
  block:
    - name: Verify orchestrator_node .ssh directory exists
      ansible.builtin.file:
        path: "/root/.ssh"
        state: directory
        owner: root
        group: root
        mode: "0700"

    - name: Generate a new ssh public private key pair on the orchestrator_node
      community.crypto.openssh_keypair:
        path: /root/.ssh/id_rsa
        type: rsa
        size: 4096
        state: present
        comment: "ansible automation platform installer node"

    - name: Grab ssh public key from control node
      ansible.builtin.slurp:
        src: /root/.ssh/id_rsa.pub
      register: ssh_public_key

    - name: Install sshd public keys for all hosts to install node known_hosts
      ansible.builtin.known_hosts:
        path: /root/.ssh/known_hosts
        name: "{{ item }}"
        key: "{{ item }},{{ hostvars[item].ansible_host }} {{ hostvars[item].ssh_host_key.content | b64decode }}"
        state: present
      loop: "{{ groups.all }}"

    - name: Install authorized ssh key for control node on all hosts
      ansible.posix.authorized_key:
        user: "{{ aap_install_user }}"
        state: present
        key: "{{ hostvars[orchestrator_node_host_vars.inventory_hostname].ssh_public_key.content | b64decode }}"
```
We have decided that this is a prerequisite for this collection to work: since you already need to provide SSH keys at that point, you should also have handled host key checking in some way by then.
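If someone needs a starting point for that prerequisite, a minimal sketch (not part of the collection; the inventory group `all`, ecdsa host keys, and the controller's default known_hosts path are assumptions) is to pre-seed the controller's known_hosts with ssh-keyscan before running the installer:

```yaml
# Run as a one-off play against localhost on the Ansible controller
# before invoking the collection's playbooks.
- name: Pre-seed known_hosts with every managed node's host key
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Scan each node's SSH host key and record it in known_hosts
      ansible.builtin.known_hosts:
        path: "{{ lookup('env', 'HOME') }}/.ssh/known_hosts"
        name: "{{ item }}"
        key: "{{ lookup('ansible.builtin.pipe', 'ssh-keyscan -t ecdsa ' ~ item) }}"
        state: present
      loop: "{{ groups['all'] }}"
```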
If we create the hosts fully automatically, they aren't known to each other, and setup.sh can't work properly.