bucc vm does not survive a restart #214
Comments
I only know of this problem in combination with the VirtualBox CPI, when the disk referenced in the state file still exists.
Yes, it is reproducible. A plain bucc up doesn't help because it detects no change and will not act. bucc up --recreate will recreate the VM, and everything works fine again.
This issue also occurs on vSphere. On reboot, /var/vcap/store and /var/vcap/data are not mounted. Workaround: execute bucc up with the --recreate flag.
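For reference, a sketch of the workaround as reported in the comments above (the commands are the ones named in this thread; the comments describe the expected behavior):

```sh
# A plain `bucc up` is a no-op here: it detects no change in the
# deployment and will not act
bucc up

# Forcing a recreate brings the VM back with /var/vcap/store and
# /var/vcap/data mounted again
bucc up --recreate
```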
I think we have the same problem with bucc up --lite --cpi=docker-desktop. When I restart the BOSH instance in Docker, all HTTPS requests stop working.
This is a CPI issue, unfortunately. There is not much we can do about it from a BUCC perspective.
@ramonskie can you explain this in more detail? If I understood you correctly, this issue occurs with at least the OpenStack, Docker, vSphere, and VirtualBox CPIs.
I have not seen this issue occurring on vSphere. See this long-standing open issue: cloudfoundry/bosh-virtualbox-cpi-release#7
Well, we are facing this issue with the vSphere CPI, and @damzog, who opened this issue, uses the OpenStack CPI. That's why I'm asking. To me it sounds like it's not only a bug in the Docker/VirtualBox CPIs but in some other component. :(
Is it reproducible?
Yes, I can reproduce this behaviour; just did it. We noticed this issue while performing some failover tests (e.g., vSphere HA moving and restarting the VM), but it's also reproducible by simply rebooting the BUCC VM via the vSphere GUI or by using
Hi,
we still use 0.92 on OpenStack. I observed that after a restart of the BUCC VM (e.g., via bucc ssh followed by shutdown -r now) the VM does not come up again: it reboots, but no monit process is running and the persistent disk does not seem to be mounted properly, see below. Any ideas? Is it a stemcell problem?
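For anyone trying to confirm the same symptoms, a minimal check after the reboot might look like the following sketch. It assumes the standard BOSH stemcell layout; bucc ssh, monit, and the /var/vcap mount points are the ones mentioned in this thread:

```sh
# SSH into the BUCC VM after the reboot
bucc ssh

# On BOSH stemcells, monit usually lives under /var/vcap/bosh/bin;
# in the failure described above it reports no running processes
sudo /var/vcap/bosh/bin/monit summary

# The persistent disk should be mounted at /var/vcap/store and the
# ephemeral disk at /var/vcap/data; in the failure case they are missing
mount | grep /var/vcap
df -h /var/vcap/store /var/vcap/data
```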