packer files for load balancer #36
Conversation
},
"post-processors": [{
  "type": "atlas",
  "token": "",
Should this be removed? Or reference an ENV var?
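As a rough sketch of the ENV option (assuming the token lives in an ATLAS_TOKEN environment variable, and trimming the template down to the relevant bits), Packer can pull it in through a user variable instead of hard-coding it:

```json
{
  "variables": {
    "atlas_token": "{{env `ATLAS_TOKEN`}}"
  },
  "post-processors": [{
    "type": "atlas",
    "token": "{{user `atlas_token`}}"
  }]
}
```

That keeps the secret out of the repo, and the variable can still be overridden with `-var 'atlas_token=...'` at build time.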
@@ -14,4 +14,5 @@ main() {
   register_service "$node" docker
   register_service "$node" consul
   register_service "$node" dnsmasq
+  register_service "$node" haproxy
Not sure we want haproxy running on every slave?
Could this be hooked up to the wercker build (non-Amazon)?
This needs to be rebased on master after #45.
user        = "ubuntu"
key_file    = "${var.key_file}"
host        = "${aws_instance.loadbalancer.public_ip}"
script_path = "/tmp/${element(aws_instance.loadbalancer.*.id, count.index)}.sh"
Now that 0.4.0 is out, these can use ${self.id}.
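Something along these lines, I think (untested sketch against the 0.4.x syntax; only the connection block shown):

```hcl
# Sketch: reference the instance's own attributes from its connection block
# instead of indexing back into aws_instance.loadbalancer with count.index.
connection {
    user        = "ubuntu"
    key_file    = "${var.key_file}"
    host        = "${self.public_ip}"
    script_path = "/tmp/${self.id}.sh"
}
```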
Needs a big refactor / rebase against master.
Outstanding tasks here -
I would be in favour of ditching this PR and replacing it with a Docker container for haproxy/consul-template. This would be a bit more lightweight and has the benefit that we could run it on a stock Ubuntu (or other) AMI / droplet / whatever. This looks promising - or we could roll our own opinionated container, borrowing some concepts from here too: https://github.com/factorish/proxy
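Roughly what I have in mind, as a sketch only (the image name, ports and env var below are placeholders, not a tested setup):

```sh
# One container running consul-template + haproxy: consul-template watches the
# Consul catalog, rewrites haproxy.cfg, and triggers an haproxy reload.
docker run -d \
  --name haproxy-lb \
  -p 80:80 -p 443:443 \
  -e CONSUL_ADDR=consul.service.consul:8500 \
  some-org/haproxy-consul-template
```

Whatever image we settle on, the point is the host only needs Docker; no baked AMI required.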
I find it a bit unusual, as we would end up with a VM dedicated to a single container, but I am in favour too for the sake of portability and reusability across clouds/operating systems. It would also let us test/switch between load-balancer approaches like the two links you suggest fairly easily.
It would not necessarily have to sit on its own in a VM (a mono-container, as you say); in fact it could sit anywhere. I think it might make sense to provision it on a separate instance at first, at least in the AWS setup, for simplicity, but it could easily go on the NAT server (for example). Maybe it would be easier to just plonk it on the NAT server; we'd still need to think about what we do for Vagrant/DigitalOcean, though. Plus, once we have #55 in, it won't just be on its own, and we may add other services there for monitoring/logging and other stuff if we desire.
Going to close this in favour of #112. Re-open if you have a burning desire to resurrect it.
Yeah, I'm in favour of it going in a container. We could just set up the NAT to forward external requests to the load-balanced haproxy container; it was in another one of the terraform examples.
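For example, something along these lines on the NAT box (sketch only; the interface name and private IP are assumptions):

```sh
# Forward inbound HTTP arriving on the NAT instance's public interface (eth0
# assumed) to the host running the haproxy container (10.0.1.10 assumed).
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
  -j DNAT --to-destination 10.0.1.10:80
# Allow the forwarded traffic through the FORWARD chain.
iptables -A FORWARD -p tcp -d 10.0.1.10 --dport 80 -j ACCEPT
```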
If you've got a link, drop it in #112.