
Support load balancing with nginx dynamic upstreams #157

Closed
subnetmarco opened this issue Apr 24, 2015 · 97 comments
Labels: idea/new plugin [legacy] (those issues belong to Kong Nation, since GitHub issues are reserved for bug reports)
@subnetmarco
Member

Support for dynamic upstreams that will enable dynamic load balancing per API.

    # sample upstream block:
    upstream backend {
        server 127.0.0.1:12354;
        server 127.0.0.1:12355;
        server 127.0.0.1:12356 backup;
    }

So we can proxy_pass like:

proxy_pass http://backend;
@subnetmarco subnetmarco self-assigned this Apr 24, 2015
@thibaultcha thibaultcha changed the title Dynamic upstreams Support load balancing with nginx dynamic upstreams Apr 24, 2015
@bobrik

bobrik commented May 5, 2015

I'd love to see that too. My use case is routing requests to dynamic Mesos tasks with zoidberg; Kong would be a good candidate for the routing part. I was going to use nginx with 2 ports anyway. Let me know if it makes sense to use Kong for this.

Here are the options I see:

  1. openresty/lua-upstream-nginx-module#11 (adds ngx_http_lua_upstream_set_peer_addr()): can be used to preallocate a list of upstreams and then use only a subset of that list
  2. openresty/lua-upstream-nginx-module#12 (adds add_server to upstream, with test code): can be used for dynamic allocation of upstreams, but the list can only grow, so the previous PR is needed as well
  3. https://twitter.com/agentzh/status/580170442150846464: not public yet, needs manual handling of retry logic, but can replace both of the previous options

Reloading nginx is not an option, since it triggers a graceful restart of all the worker processes. We use that mechanism with haproxy in marathoner, but long-lived sessions force previous instances of haproxy to stay alive for extended periods of time. Nginx uses more processes than haproxy, so it would be even worse. Deploying frequently could leave thousands of proxy processes spinning for no good reason.

@Tenzer

Tenzer commented May 5, 2015

Nginx can already do this in the Plus version: http://nginx.com/products/on-the-fly-reconfiguration/

@bobrik

bobrik commented May 5, 2015

Yep, that's another option starting at $7000 annually for 5 servers.

@Tenzer

Tenzer commented May 5, 2015

You can also buy a license for one server at $1500 per year: http://nginx.com/products/pricing/. I'm not saying it's cheap, but it's an alternative in case people weren't aware of it.

@bobrik

bobrik commented May 15, 2015

Another option: https://github.com/yzprofile/ngx_http_dyups_module

@bobrik

bobrik commented May 18, 2015

Looks like dyups can do the trick: https://github.com/bobrik/zoidberg-nginx. The nginx config and Lua scripts there should give an idea of how it works.

I'm not sure about stability of this thing, though.
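For context on how dyups is driven: the module exposes an HTTP admin interface inside nginx itself. A minimal sketch of enabling it, based on the module's README (the port is arbitrary):

```nginx
# expose the dyups admin interface on a separate port,
# so upstreams can be created/replaced at runtime over HTTP
server {
    listen 8081;

    location / {
        # dyups_interface is provided by ngx_http_dyups_module
        dyups_interface;
    }
}
```

Upstreams can then be added or replaced on the fly by POSTing `server host:port;` lines to `/upstream/<name>` on that port, with no nginx reload.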

@subnetmarco
Member Author

Just a quick update on this: this feature is important and we feel an initial implementation should be done. It's not going to make it into 0.3.0 (we're currently implementing other major features, including SSL support and path-based routing), but it should definitely show up within the next two releases. We are trying to keep the release cycles very short, so it shouldn't take too much time.

Of course pull-requests are also welcome.

@bobrik

bobrik commented May 20, 2015

Can you tell me how this is going to be implemented?


@subnetmarco
Member Author

@bobrik As you pointed out in one of your links, apparently the creator of OpenResty is building a balancer_by_lua functionality which should do the job - so I am investigating this option.

The alternative is taking one of the existing pull requests against the lua-upstream module and contributing to them to make them acceptable, implementing any missing features we might need.

@bobrik

bobrik commented May 21, 2015

@thefosk the only public info about balancer_by_lua is the tweet by @agentzh. Contributing to existing PRs to lua-upstream module also involves his approval :)

Take a look at ngx_http_dyups_module; it reuses logic from nginx, as opposed to balancer_by_lua. It worked for me in my tests without crashes: 1-100 upstreams, updating every second with a full gradual upstream-list replacement, 8k rps on one core with literally no Lua code executing when serving user requests. Not sure about keepalive to upstreams, https, and tcp upstreams, though.

@subnetmarco
Member Author

@bobrik yes, we will start working on this feature in the next releases, and we will monitor any announcement about balancer_by_lua in the meanwhile. If balancer_by_lua isn't released publicly in that time, we will need to find another solution.

The requirement for Kong would be to dynamically create an upstream configuration from Lua, then dynamically populate the upstream object with servers and use it in the proxy_pass directive. Do you think we can invoke ngx_http_dyups_module functions directly from Lua bypassing its RESTful API?

The use case wouldn't be to update an existing upstream configuration, but to create a brand new one from scratch, in pseudo-code:

set $upstream nil;
access_by_lua '
  local upstream = upstream:new()
  upstream.add_server("backend1.example.com", { weight = 5 })
  upstream.add_server("backend2.example.com:8080", { fail_timeout = 5, slow_start = 30 })
  ngx.var.upstream = upstream
';
proxy_pass http://$upstream;

@bobrik

bobrik commented May 22, 2015

@thefosk I create upstreams on the fly and update them on the fly with ngx_http_dyups_module. Moreover, I do it from Lua code behind a RESTful API.

Take a look:

https://github.com/bobrik/zoidberg-nginx/blob/master/nginx.conf
https://github.com/bobrik/zoidberg-nginx/blob/master/zoidberg-state-handler.lua
https://github.com/bobrik/zoidberg-nginx/blob/master/zoidberg-proxy-rewrite.lua

Your pseudo-code implies that you create an upstream on every request; I only do that on every upstream update. In zoidberg-nginx there is also some code for checking whether an upstream exists, to prevent endless loops, but I found out that it is avoidable with this trick:

        location / {
            set $where where-am-i.zoidberg;
            proxy_pass http://$where;
        }

The upstream where-am-i.zoidberg is created on the fly, and since it's not a real domain name, no recursive proxying to itself occurs, which would otherwise exhaust worker_connections.

@subnetmarco
Member Author

@bobrik thank you, I will look into this

@Gingonic
Contributor

I'm also looking for an API manager that makes sense for Mesos/Marathon. After spending much time with my friend Google, I came to the conclusion that right now there is only one option available: choose a service discovery tool (consul, haproxy bridge, zoidberg, ...) and add an API proxy on top of it (Kong, Repose, Tyk, ApiAxle, WSO2 AM, etc.).
Frankly, I don't see why I should put a proxy in front of a proxy. It would make a lot of sense to have a lightweight API manager plus service discovery in one piece of middleware. So +1 for this feature. What is the planning status?

@krishnaarava

+1 for this feature

@ngbinh

ngbinh commented Jul 24, 2015

same here 👍

@neilalbrock

Another +1 for me good sir

@agentzh

agentzh commented Aug 4, 2015

The balancer_by_lua* directives from ngx_lua will get opensourced soon, in the next 3 months or so.

@subnetmarco
Member Author

@agentzh very good news, looking forward to trying it
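For readers landing here later: the mechanism that eventually shipped in ngx_lua pairs a placeholder upstream with a balancer_by_lua_block that selects the peer via the ngx.balancer module from lua-resty-core. A minimal sketch (the peer address and port are illustrative; a real setup would look them up from a shared dict, Redis, etc.):

```nginx
upstream dynamic_backend {
    # placeholder address; the actual peer is chosen in Lua below
    server 0.0.0.1;

    balancer_by_lua_block {
        local balancer = require "ngx.balancer"

        -- illustrative hard-coded peer; normally resolved dynamically
        local ok, err = balancer.set_current_peer("127.0.0.1", 8080)
        if not ok then
            ngx.log(ngx.ERR, "failed to set current peer: ", err)
            return ngx.exit(500)
        end
    }
}
```

This is what makes per-request peer selection possible without reloading nginx, which is exactly the requirement discussed at the top of this thread.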

@jdrake

jdrake commented Aug 7, 2015

@thefosk +1

@sonicaghi
Member

+1

@bobrik

bobrik commented Aug 11, 2015

@agentzh any chance to get TCP support in balancer_by_lua as well?

@agentzh

agentzh commented Aug 12, 2015

@bobrik I'm not sure I understand that question. Are you talking about doing cosockets in the Lua code run by balancer_by_lua or you mean using balancer_by_lua in stream {} configuration blocks instead of http {} blocks?

@Tieske Tieske self-assigned this Aug 8, 2016
@andy-zhangtao

@Tieske Hi Tieske, what is the Kong management API? And how do I use POST /upstreams? I didn't find a reference in the Kong documentation (v0.8). I also need this feature, but I don't know how to proceed. :-(

@subnetmarco
Member Author

@andy-zhangtao this feature is currently being built. We are aiming to release it in the 0.10 version.

@andy-zhangtao

@Tieske Got it! Thanks Tieske

@merlindorin

@thefosk awesome!

We dream to build microservice that can be autoregistered against kong <3

@subnetmarco
Member Author

We dream to build microservice that can be autoregistered against kong <3

@iam-merlin this is exactly our vision. The next couple of releases will be very exciting as far as microservice orchestration is concerned.

@merlindorin

btw, @Tieske, if you want a beta tester or feedback, don't hesitate to ping me. I have some code ready for that :D (I started to write an autoregister library... before I found out that Kong doesn't have this feature yet :'( )

@alza-bitz

We dream to build microservice that can be autoregistered against kong <3

@iam-merlin @thefosk how about a Kong adapter for registrator? This could register Docker-based services with Kong, if the service has the environment SERVICE_TAGS=kong for example.

n.b. this wouldn't replace other registrator adapters, e.g. the Consul adapter; it would complement them (i.e. they could be used in combination, or the Kong adapter could be used on its own).

I am planning to deploy Kong for a work project when it supports SRV records fully, in the meantime I have a nginx Docker container which dynamically adds upstreams based on SERVICE_TAGS=nginx.

@merlindorin

@alzadude, I don't know registrator, but from my point of view (and what I read), registrator needs to be run on each host... with a feature like this issue, you don't need Docker (or consul or another service registry), just a Plain Old Request (^^), and maybe we will have health checks later (in another plugin).

From my point of view, Consul is very heavy for microservice... I really like the simplicity with Kong (and no dependencies) but right now... doing a microservice architecture with Kong and without a service registry is a pain.

Anyway, I don't think my point of view is relevant... I'm just a user of kong, not the best one and not a lua developer :P xD.

@subnetmarco
Member Author

subnetmarco commented Aug 19, 2016

Just so you know, once @Tieske finishes this implementation, I will provide some sample Docker-Compose templates that show how to use Kong with a microservice orchestration pattern.

I do personally like ContainerPilot more than registrator though.

We will also be able to provide a pattern that doesn't involve having any third-party service discovery in place because, effectively, Kong's new /upstreams will become the service discovery layer.
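To make the /upstreams idea concrete, here is a hedged sketch of how the Admin API shaped up in the 0.10 line. The upstream name, target addresses, weights, and hostnames are all illustrative, and the commands assume a running Kong with its Admin API on localhost:8001:

```shell
# create a named upstream (a virtual hostname acting as a load balancer)
curl -X POST http://localhost:8001/upstreams \
  --data "name=service.v1"

# register two targets on it, with different weights
curl -X POST http://localhost:8001/upstreams/service.v1/targets \
  --data "target=10.0.0.1:8080" --data "weight=100"
curl -X POST http://localhost:8001/upstreams/service.v1/targets \
  --data "target=10.0.0.2:8080" --data "weight=50"

# point an API at the upstream by using its name as the upstream_url host
curl -X POST http://localhost:8001/apis \
  --data "name=my-api" \
  --data "upstream_url=http://service.v1" \
  --data "hosts=my-api.example.com"
```

A service can then "autoregister" simply by POSTing itself as a target on startup, which is the pattern several commenters above are asking for.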

@Tieske
Member

Tieske commented Aug 26, 2016

A PR with intermediate status is available now (see #1541)

Any input and testing is highly appreciated, see #1541 (comment)

so if anyone else wants to test like @iam-merlin, please check it out.

@Tieske Tieske mentioned this issue Aug 26, 2016
@merlindorin

@Tieske it works xD

I've made some tests and it seems to work as expected (just the dns part; I didn't test balancer_by_lua yet).

It seems the dns dependency is missing from your requirements (I'm not a Lua dev) and I had to install it manually (I got an error at first start and installed https://github.com/Mashape/dns.lua).

Do you have any documentation about your code, or will this be ready later?

@sonicaghi sonicaghi added this to the 0.10 milestone Aug 30, 2016
@Tieske
Member

Tieske commented Sep 1, 2016

@iam-merlin thx for testing 👍. Docs will be later, as it might still change.

@Tieske Tieske mentioned this issue Oct 11, 2016
@Tieske
Member

Tieske commented Oct 11, 2016

besides #1541 (internal dns) there is now #1735 which implements the upstreams feature discussed above (1735 builds on top of 1541).

testing is once again highly appreciated!

@tomdavidson

Can't deploy load balancing with round robin. We need least open connections or fastest response time: anything but round robin.

Tieske added a commit that referenced this issue Dec 28, 2016
* adds loadbalancing on specified targets
* adds service registry
* implements #157 
* adds entities: upstreams and targets
* modifies timestamps to millisecond precision (except for the non-related tables when using postgres)
* adds collecting health-data on a per-request basis (unused for now)
@Tieske
Member

Tieske commented Dec 29, 2016

'least open connections' does not make sense in a Kong cluster. 'response time' is being considered, but not prioritized yet.

@Tieske
Member

Tieske commented Dec 29, 2016

closing this as #1735 has been merged into the next branch for the upcoming release.

@Tieske Tieske closed this as completed Dec 29, 2016
thibaultcha pushed a commit that referenced this issue Jan 12, 2017
* adds loadbalancing on specified targets
* adds service registry
* implements #157
* adds entities: upstreams and targets
* modifies timestamps to millisecond precision (except for the non-related tables when using postgres)
* adds collecting health-data on a per-request basis (unused for now)
@avanathan

How do we implement keepalive without an upstream block? An upstream doesn't support dynamic IP resolution, and we cannot use keepalive outside of an upstream.
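One hedged answer, assuming the balancer_by_lua mechanism discussed earlier in this thread: the stock nginx keepalive directive can sit in the same upstream block as a balancer_by_lua_block, so peers are still chosen dynamically while idle connections are pooled. A sketch (the peer address is illustrative):

```nginx
upstream dynamic_backend {
    # placeholder; never contacted directly
    server 0.0.0.1;

    balancer_by_lua_block {
        local balancer = require "ngx.balancer"
        -- illustrative peer; normally resolved per request
        local ok, err = balancer.set_current_peer("127.0.0.1", 8080)
        if not ok then
            ngx.log(ngx.ERR, "failed to set peer: ", err)
            return ngx.exit(500)
        end
    }

    # pool up to 32 idle connections per worker to the chosen peers
    keepalive 32;
}
```

This avoids the need for a statically configured upstream just to get connection reuse.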

hutchic pushed a commit that referenced this issue Jun 10, 2022