Managed to run Vagrant on an M1 Mac:
- follow the steps from this article: https://habr.com/ru/companies/bar/articles/708950/
- run VMware, get a free or trial license, then quit the application
- write a new Vagrantfile, example: https://pastebin.com/HKQTYBYx
- run vagrant up from the directory with the Vagrantfile
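A minimal Vagrantfile in the spirit of the pastebin example might look like the sketch below; the box name, VM name, and memory setting are assumptions, not the contents of the actual file.

```ruby
# Hypothetical Vagrantfile sketch for the vmware_desktop provider on M1;
# box name, VM name and memsize are assumptions - see the pastebin link for the real file.
Vagrant.configure("2") do |config|
  config.vm.box = "starboard/ubuntu-arm64-20.04.5"  # an ARM64 box; assumption

  config.vm.define "appserver" do |app|
    app.vm.hostname = "appserver"
    app.vm.provider "vmware_desktop" do |v|
      v.vmx["memsize"] = "1024"
    end
  end
end
```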
to run vagrant infra:
vagrant up
to see current status of the boxes:
vagrant status
to connect to a server over SSH (where appserver is the VM name described in the Vagrantfile):
vagrant ssh appserver
to run checks against the ansible scenario using "db" host only:
ansible-playbook reddit_app.yml --check --limit db
to run every task marked with the app-tag tag:
ansible-playbook reddit_app2.yml --tags app-tag
to run batch playbooks:
ansible-playbook site.yml
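A batch playbook like site.yml is typically just a composition of the individual playbooks. A sketch, assuming the repo is split into per-role playbooks (the file names below are assumptions):

```yaml
# Hypothetical site.yml that batches the individual playbooks;
# the imported file names are assumptions about this repo's layout.
---
- import_playbook: db.yml
- import_playbook: app.yml
- import_playbook: deploy.yml
```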
Whereas Terraform is responsible for creating VMs and the accompanying infrastructure, Ansible lets us configure the previously created VMs.
The command below removes the cloned reddit repository from the app server:
ansible app -m command -a 'rm -rf ~/reddit'
ansible appserver -i ./inventory -m ping
ansible dbserver -i ./inventory -m ping
ansible appserver -m command -a uptime
ansible dbserver -m command -a uptime
ansible app -m command -a 'ruby -v'
ansible app -m shell -a 'ruby -v; bundler -v'
ansible-playbook clone.yml
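The -i ./inventory flag above points at a static inventory file. A minimal INI-style sketch, where the host names and the app/db groups match the commands above but the addresses and user are placeholders:

```ini
# Hypothetical static inventory; appserver/dbserver and the app/db groups
# match the ansible commands above, the IPs and user are placeholders.
[app]
appserver ansible_host=<app-external-ip> ansible_user=ubuntu

[db]
dbserver ansible_host=<db-external-ip> ansible_user=ubuntu
```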
creating two images, one for the app and one for the DB:
packer validate -var-file=variables.json.example db.json
packer validate -var-file=variables.json.example app.json
packer build -var-file=variables.json.example db.json
packer build -var-file=variables.json.example app.json
write app.tf and db.tf, then split them into modules, each with its own main.tf, variables.tf and outputs.tf
then run the following command to load the modules:
terraform get
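After the split, app.tf in the prod and stage directories typically shrinks to a module call. A sketch, assuming a ../modules layout (the source path and variable names are assumptions about this repo):

```hcl
# Hypothetical module call from stage or prod; the source path and
# the variable names passed through are assumptions about this repo's layout.
module "app" {
  source          = "../modules/app"
  public_key_path = var.public_key_path
  app_disk_image  = var.app_disk_image
}
```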
run from the prod and stage directories to create resources on YC:
terraform apply
terraform init
to initialize terraform repo
terraform plan
to preview what will happen when we apply main.tf
terraform apply
to apply main.tf and create infrastructure.
terraform show | grep nat_ip_address
to show nat_ip_address from the terraform.tfstate file
terraform destroy
to destroy infra
terraform refresh
to refresh state and update outputs
terraform output external_ip_address_app
shows the output value for the given key
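For terraform output to find a key, it has to be declared in outputs.tf. A sketch, assuming a yandex_compute_instance resource named app (the exact attribute path depends on the provider schema used in this repo):

```hcl
# Hypothetical outputs.tf entry; the resource name and attribute path
# are assumptions about this repo's Terraform code.
output "external_ip_address_app" {
  value = yandex_compute_instance.app.network_interface.0.nat_ip_address
}
```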
how to validate a packer template:
packer validate -var-file=variables.json.example ubuntu16.json
how to run a packer template with user vars:
packer build -var-file=variables.json ./ubuntu16.json
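The var-file supplies values for the {{user `...`}} variables referenced in the template. A sketch of what variables.json might hold; the keys here are hypothetical, the real set depends on what ubuntu16.json actually references:

```json
{
  "folder_id": "<your-yc-folder-id>",
  "source_image_family": "ubuntu-1604-lts"
}
```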
testapp_IP = 158.160.54.245
testapp_port = 9292
bastion_IP = 51.250.75.7
someinternalhost_IP = 10.128.0.34
VPN server admin panel: 51.250.75.7.sslip.io
Command to log into bastion host:
ssh [email protected]
Command to log into bastion host and enable forwarding of the authentication agent connection:
ssh -A [email protected]
Once we are connected to the bastion, we can connect to the internal host:
ssh [email protected]
Since using several commands is rather tedious, we can use one of the following tricks to connect to the internal host through the bastion with a single command.
The command below connects via SSH to the bastion host and then executes another ssh command on the remote machine:
ssh -t -A [email protected] 'ssh [email protected]'
this reaches the internal host in a single step.
There's also a more sophisticated trick that we probably don't need, but it does the job as well. The command below creates a tunnel and then connects to the internal host through the local tunnel on port 2222.
ssh -A -J [email protected] -L 2222:10.128.0.34:22 [email protected] -N -f -q && ssh -p 2222 appuser@localhost
we can assign the command to an alias by adding it to the bash profile:
vi ~/.bash_profile
then add the line below and save the file:
alias ssh_someinternalhost='ssh -t -A [email protected] "ssh [email protected]"'
then reload the profile to activate the alias:
source ~/.bash_profile
and here we go: we can use the ssh_someinternalhost command to connect to the internal host.
To make it look like a normal host and a normal SSH command, we can add the following lines to ~/.bash_profile:
alias someinternalhost='-t -A [email protected] "ssh [email protected]"'
alias ssh='ssh '
this little trick lets us use an alias as a parameter for ssh: the trailing space in alias ssh='ssh ' makes bash check the word after ssh for alias expansion too, so ssh someinternalhost expands to the full command and connects us to someinternalhost, even without establishing a tunnel.
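The trailing-space expansion can be checked without ssh at all; the sketch below substitutes echo for ssh so the expansion is visible (the alias names here are stand-ins, not the real ones):

```shell
# Demonstrates the trailing-space alias trick with echo standing in for ssh.
shopt -s expand_aliases            # alias expansion is off by default in scripts
alias echo='echo '                 # trailing space: bash also alias-expands the next word
alias target='hello from alias'    # stand-in for the someinternalhost alias
echo target                        # expands to: echo hello from alias
```

Running this prints "hello from alias", confirming both the first word and the word after it were alias-expanded.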
another approach is to create a tunnel localhost:2222 <-> someinternalhost, and then modify .ssh/config like this:
Host someinternalhost
  HostName 127.0.0.1
  Port 2222
  User appuser
this also works.
The last possible solution is to modify .ssh/config and use ProxyCommand combined with netcat to achieve the desired behavior:
Host bastion
  HostName 51.250.75.7
  User appuser

Host someinternalhost
  HostName 10.128.0.34
  User appuser
  ProxyCommand ssh bastion nc -q0 %h 22
ProxyCommand specifies a command to use as a proxy to reach someinternalhost. This configuration tells SSH to first connect to the bastion host and then use nc to forward the connection to someinternalhost on port 22. %h is a placeholder that gets replaced with the actual hostname of someinternalhost.