rsync phase failing #83
Comments
Because reload / suspend / halt and other commands are not supported by this plug-in, I cannot "fix" the issue by running ssh commands on the guest and then having vagrant try again.
This is a pain! But look at issue #70 - there's a clue for a workaround there. I added the corresponding snippet to my Vagrantfile (a sketch of it follows below). What I've found so far: the mkdir sometimes still fails, but running vagrant provision afterwards does work. Yes, I'd really like to have this fixed, permanently and properly, but this got me going today, at least (and thanks to stanlemon for the hint).
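A purely hypothetical sketch of that snippet - the commonly cited shape of this workaround is a boot-time user_data script that relaxes requiretty; every detail below is assumed rather than taken from the commenter's actual code, and the sed pattern assumes a stock Amazon Linux/CentOS sudoers:

Vagrant.configure("2") do |config|
  config.vm.provider :aws do |aws|
    # Hypothetical reconstruction, not the snippet from this comment.
    # cloud-init runs user_data as root at first boot, before Vagrant's
    # rsync phase, so this can comment out the Defaults requiretty line.
    aws.user_data = <<-SCRIPT
#!/bin/bash
sed -i 's/^Defaults\\s*requiretty/#&/' /etc/sudoers
    SCRIPT
  end
end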
I think this is the same issue as #72
I agree. I think we're all dancing around the same thing. The bottom line is that the AWS provider doesn't work with "stock" Amazon Linux AMIs (nor with CentOS ones); so whether something can be done in the provider itself would point the way to a solution.
@kmpeterson that clue you mentioned did help me out while struggling to make CentOS 6.4 work on Amazon. I went a different way instead: adding
Defaults:root !requiretty
to /etc/sudoers seems to work on my end
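A note on that line: Defaults:root !requiretty exempts only the named invoking user (root here) from the requiretty policy, so it helps only when Vagrant's ssh user is root; swap in whatever account Vagrant connects as. A minimal sketch of installing it at boot as a sudoers drop-in, assuming user_data as the delivery mechanism and a sudoers that carries the stock #includedir /etc/sudoers.d line (as Amazon Linux and CentOS 6 do):

config.vm.provider :aws do |aws|
  # Sketch: install the override as a drop-in at first boot. The file
  # name must not contain a dot or sudo will ignore it.
  aws.user_data = <<-SCRIPT
#!/bin/bash
echo 'Defaults:root !requiretty' > /etc/sudoers.d/99-vagrant
chmod 0440 /etc/sudoers.d/99-vagrant
  SCRIPT
end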
That also makes good sense... thanks! I didn't go that route because I always wanted to start with a "fresh" AMI (I was following an example in the docs), but that's clearly a good overview of using a custom AMI, which I'm going to have to get around to anyway at some point. I appreciate the mention!
See my comment here: #72 (comment) I am using a
I ran into this issue as well. It is caused by the requiretty setting enforced in /etc/sudoers on Amazon AMIs. Here's what I ended up doing for provisioning CentOS (spelled out concretely right after this comment):
Do the initial vagrant up
Disable the 'requiretty' setting
Recommence the provisioning
I believe the real solution is for Vagrant to have a config.ssh option to force a pseudo-tty (e.g. ssh -t), negating the need to modify /etc/sudoers
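Spelled out, those three steps might look like this - a sketch only: the ssh -t trick is the same one shown in the issue text further down, and the sed pattern assumes a stock Amazon Linux/CentOS sudoers:

# 1. Initial bring-up; the rsync mkdir fails while requiretty is enforced
vagrant up --provider=aws
# 2. Force a pseudo-tty (so sudo is allowed) and comment out requiretty
vagrant ssh-config > ssh-config.config
ssh -t -F ssh-config.config default "sudo sed -i 's/^Defaults\s*requiretty/#&/' /etc/sudoers"
# 3. Recommence the provisioning
vagrant provision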
+1 to the approach described by rixrix above - it worked for me using the latest CentOS 6.4 release. As an operational pattern, each time a new OS release comes out, do the same: the initial vagrant up, disable requiretty, then recommence the provisioning.
This workflow seems to fit well with what Packer is advertised to do, so I am wondering whether it can automate the above steps, or whether it will hit the same ssh issue as Vagrant... has anyone tried? Edit: This approach assumes we need to use the current Vagrant today; the workaround will no longer be needed once one of the more long-term approaches is implemented, for example those described by miguno or michael-harrison.
As a follow-up: I am currently also using customized (read: patched) AMIs based on the stock Amazon Linux AMI. That's the only reliable workaround I have found so far. Still, I hope that a proper long-term fix can be made to Vagrant / vagrant-aws that addresses the root cause of this problem, so that we can go back to using stock Amazon Linux AMIs.
+1
This issue has been fixed here hashicorp/vagrant#1482 |
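If that change has landed in the Vagrant release you are running, the workarounds above should reduce to a single Vagrantfile line. A minimal sketch, assuming a Vagrant version that exposes the pty option (check your installed version before relying on it):

Vagrant.configure("2") do |config|
  # Request a pseudo-tty (like ssh -t) for the remote commands Vagrant
  # runs, so a stock sudoers with Defaults requiretty no longer breaks
  # the rsync phase.
  config.ssh.pty = true
end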
I use aws.ami = "ami-c53fb7ac" # CentOS 6.3 EC2 AMI image, 64-bit EBS, ec2-user
Output:
vagrant up --provider=aws
Bringing machine 'default' up with 'aws' provider...
[default] Warning! The AWS provider doesn't support any of the Vagrant
high-level network configurations (config.vm.network). They will be
silently ignored.
[default] Launching an instance with the following settings...
[default] -- Type: m1.small
[default] -- AMI: ami-c53fb7ac
[default] -- Region: us-east-1
[default] -- Keypair: mygroup-key
[default] -- Security Groups: ["mygroup-dev"]
[default] Waiting for instance to become "ready"...
[default] Waiting for SSH to become available...
[default] Machine is booted and ready for use!
[default] Rsyncing folder: /home/stardif/vagrant/ => /vagrant
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mkdir -p '/vagrant'
But running the following command by hand does create the folder:
vagrant ssh-config > ssh-config.config ; ssh -t -F ssh-config.config default sudo mkdir /vagrant
So the plug-in should do the same (force a pseudo-tty for the commands it runs).