This repository has been archived by the owner on Feb 11, 2022. It is now read-only.

rsync phase failing #83

Open
SebTardif opened this issue Jun 7, 2013 · 12 comments

Comments

@SebTardif

I use aws.ami = "ami-c53fb7ac" # CentOS 6.3 EC2 AMI image, 64-bit, EBS-backed, ec2-user
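A minimal provider block along these lines reproduces the settings shown in the output below (the dummy box and the private key path are placeholders):

Vagrant.configure("2") do |config|
  config.vm.box = "dummy"

  config.vm.provider :aws do |aws, override|
    aws.ami             = "ami-c53fb7ac" # CentOS 6.3 EC2 AMI image, 64-bit, EBS-backed, ec2-user
    aws.instance_type   = "m1.small"
    aws.region          = "us-east-1"
    aws.keypair_name    = "mygroup-key"
    aws.security_groups = ["mygroup-dev"]

    override.ssh.username         = "ec2-user"
    override.ssh.private_key_path = "~/.ssh/mygroup-key.pem" # placeholder path
  end
end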

Output:
vagrant up --provider=aws
Bringing machine 'default' up with 'aws' provider...
[default] Warning! The AWS provider doesn't support any of the Vagrant
high-level network configurations (config.vm.network). They
will be silently ignored.
[default] Launching an instance with the following settings...
[default] -- Type: m1.small
[default] -- AMI: ami-c53fb7ac
[default] -- Region: us-east-1
[default] -- Keypair: mygroup-key
[default] -- Security Groups: ["mygroup-dev"]
[default] Waiting for instance to become "ready"...
[default] Waiting for SSH to become available...
[default] Machine is booted and ready for use!
[default] Rsyncing folder: /home/stardif/vagrant/ => /vagrant
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

mkdir -p '/vagrant'

But the following command is able to create the folder:
vagrant ssh-config > ssh-config.config ; ssh -t -F ssh-config.config default sudo mkdir /vagrant

So the plug-in should do the same.

@SebTardif
Author

Because reload / suspend / halt and other commands are not supported by this plug-in, I cannot "fix" the issue by running ssh commands on the guest and then having vagrant try again.

@kmpeterson

This is a pain!

But look at issue #70 - there's a clue for a workaround there.

I added the following to my Vagrantfile:
aws.user_data = "#!/bin/bash\necho 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty && chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty\n"
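For readability, the same user_data can be written as a heredoc (equivalent to the one-liner above, assuming ec2-user is the SSH user):

aws.user_data = <<-USERDATA
#!/bin/bash
echo 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty
chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty
USERDATA

Note that the script body has to start at column 0 so the #!/bin/bash shebang is the very first thing cloud-init sees.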

What I've found so far: the mkdir still fails sometimes, but running vagrant provision afterwards does work.

Yes, I'd really like to have this fixed, permanently and properly, but this got me going today, at least (and thanks to stanlemon for the hint).

@gschueler

I think this is the same issue as #72

@kmpeterson

I agree. I think we're all dancing around the same thing. The bottom line is that the AWS provider doesn't work with "stock" Amazon Linux AMIs (or CentOS), so whether or not something can be done within the provider itself would point the way to a solution.

@rixrix

rixrix commented Jun 27, 2013

@kmpeterson that clue you mentioned did help me out while I was struggling to make CentOS 6.4 work on Amazon, but I went a different way instead:

  1. Launched the official CentOS AMI
  2. Installed the required tools (just puppet)
  3. Added a sudoers entry for the root user with !requiretty. My entry in /etc/sudoers.d/vagrant-init-requiretty:

Defaults:root !requiretty

  4. Created a custom AMI from the modified VM
  5. Fired up Vagrant from the local machine BUT using the new custom AMI ID

seems to work on my end
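If it helps anyone following the same route: once the custom AMI exists, the only Vagrantfile change is the AMI ID (the ID below is a placeholder, and connecting as root is an assumption made to match the sudoers entry above):

config.vm.provider :aws do |aws, override|
  aws.ami = "ami-xxxxxxxx"         # your custom AMI built from the patched CentOS instance
  override.ssh.username = "root"   # assumed, to match the Defaults:root !requiretty entry
end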

@kmpeterson

That also makes good sense... thanks!

I didn't go that route because I always wanted to start from a "fresh" AMI (I was following an example in the docs), but that's clearly a good overview of using a custom AMI, which I'm going to have to get around to at some point anyway. I appreciate the mention!

@miguno

miguno commented Jul 2, 2013

See my comment here: #72 (comment)

I am using a boothook instead of cloud-config to fix this issue (the former will be triggered before Vagrant's rsync phase). So far all my test runs worked reliably.
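A user_data boothook along these lines illustrates the idea (same sudoers snippet as in the workaround above; a sketch only, adjust the username to your AMI's SSH user):

aws.user_data = <<-USERDATA
#cloud-boothook
#!/bin/bash
echo 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty
chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty
USERDATA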

@michael-harrison

I ran into this issue as well. The issue is caused by the requiretty setting enforced in /etc/sudoers on Amazon AMIs. Here's what I ended up doing for provisioning CentOS:

Do the initial vagrant up

vagrant up --provider=aws
Bringing machine 'default' up with 'aws' provider...

....

The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!   

mkdir -p '/vagrant'

Disable the 'requiretty'

vagrant ssh
# sudo sed -i -e 's/^Defaults.*requiretty/# Defaults requiretty/g' /etc/sudoers
# exit

Recommence the provisioning

vagrant provision

I believe the real solution is for Vagrant to have a config.ssh option to force a pseudo-tty (e.g. ssh -t), negating the need to modify /etc/sudoers.

@tylerwalts

+1 to the approach described by rixrix above - it worked for me using the latest CentOS 6.4 release.

As an operational pattern, each time a new OS is released, do:

  • Take stock AMI (From official CentOS or Amazon release)
  • Add the ssh fix + [puppet | chef | your_provisioning_tool]
  • Create new AMI based on instance
  • Destroy instance and use new AMI going forward.

This workflow seems to fit well with what Packer is advertised to do, so I am wondering if it can automate the above steps, or if it will also have the same ssh issue as Vagrant... anyone tried?

Edit: This approach assumes we need to use the current Vagrant today; the workaround will no longer be needed once one of the longer-term approaches is implemented, for example those described by miguno or michael-harrison.

@miguno

miguno commented Aug 14, 2013

As a follow-up: I am currently also using customized (read: patched) AMIs based on the stock Amazon Linux AMI. That's the only reliable workaround I have found so far. Still, I hope that a proper long-term fix can be made to Vagrant / vagrant-aws that addresses the root cause of this problem so that we can go back to using stock Amazon Linux AMIs.

@mahmoudimus

+1

@pidah
Contributor

pidah commented Feb 9, 2014

This issue has been fixed here: hashicorp/vagrant#1482
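With a Vagrant version that includes that change, the workaround should reduce to a single Vagrantfile line (assuming the setting is exposed as config.ssh.pty, as in current Vagrant documentation):

config.ssh.pty = true  # force a pseudo-tty for remote commands so sudo's requiretty is satisfied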
