Routing with Hipache has had some very weird things done to it, so I want to give a 'short' explanation of how it works.

Redis (for Hipache)

Hipache uses Redis for its routing. In a normal Hipache setup, when a request comes in for foo.bar.com, it looks up frontend:foo.bar.com in Redis and pipes the request to and from the value it finds at that key (e.g. http://internal.foo.bar.com:5000). Hipache can also listen on port 443, receive a request for foo.bar.com, and apply the same logic, piping the request to http://internal.foo.bar.com:5000, which doesn't itself have to support HTTPS.
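
(As an aside, upstream Hipache actually stores each frontend as a Redis list, where the first element is an identifier and the rest are backend URLs; the key -> value description above is just the mental model.) Here's a minimal sketch of registering the example above, using the ioredis client; the client choice and the helper name are illustrative, not necessarily what we run:

```typescript
import Redis from "ioredis";

const redis = new Redis(); // assumes a Redis instance on localhost:6379

// Register a frontend the way upstream Hipache expects it: a Redis list whose
// first element is an identifier and whose remaining elements are backends.
async function registerFrontend(hostname: string, backend: string): Promise<void> {
  const key = `frontend:${hostname}`;
  await redis.del(key); // start clean for the example
  await redis.rpush(key, hostname, backend);
}

// The example from above: requests for foo.bar.com get piped to the internal backend.
registerFrontend("foo.bar.com", "http://internal.foo.bar.com:5000")
  .finally(() => redis.disconnect());
```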

API -> Redis (for Hipache)

When the API server starts a container, it inspects it to get the port mappings. To map multiple ports through Hipache, it creates an entry for each one. In our case, we map each exposed container port (used in the hostname) to the host port Docker randomly assigned it (used in the backend address). For example, you may have a box called foo in the org bar serving HTTP on port 5000, which, when run in a Docker container, gets mapped to port 49015. API creates an entry in Redis that says frontend:5000.foo.bar.runnable.io -> http://10.0.0.1:49015.

When you serve HTTPS on port 443 (which Docker maps to 49016), API notices that 443 is the exposed port, and creates an entry in Redis frontend:443.foo.bar.runnable.io -> https://10.0.0.1:49016 (note the s).

The pattern here is that API creates (in Redis) frontend:[exposedPort].foo.bar.runnable.io -> [exposedPort==='443'?'https':'http']://10.0.0.1:[dockerMappedPort]
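
As a concrete sketch of that pattern (a hypothetical helper, not the API's actual code):

```typescript
// Hypothetical sketch of the pattern above; names and signature are illustrative.
interface HipacheEntry {
  key: string;     // e.g. "frontend:5000.foo.bar.runnable.io"
  backend: string; // e.g. "http://10.0.0.1:49015"
}

function hipacheEntryFor(
  boxName: string,          // "foo"
  orgName: string,          // "bar"
  exposedPort: string,      // the port the container exposes, e.g. "5000" or "443"
  dockerMappedPort: string, // the randomly mapped host port, e.g. "49015"
  hostIp: string            // the Docker host, e.g. "10.0.0.1"
): HipacheEntry {
  // Only the standard HTTPS port gets an https backend; everything else is plain HTTP.
  const protocol = exposedPort === "443" ? "https" : "http";
  return {
    key: `frontend:${exposedPort}.${boxName}.${orgName}.runnable.io`,
    backend: `${protocol}://${hostIp}:${dockerMappedPort}`,
  };
}

// The two examples from above:
hipacheEntryFor("foo", "bar", "5000", "49015", "10.0.0.1");
// -> { key: "frontend:5000.foo.bar.runnable.io", backend: "http://10.0.0.1:49015" }
hipacheEntryFor("foo", "bar", "443", "49016", "10.0.0.1");
// -> { key: "frontend:443.foo.bar.runnable.io", backend: "https://10.0.0.1:49016" }
```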

Hipache Routing

We have Hipache set up on ports 80 and 443. All other ports (>= 81 and != 443) get forwarded to port 80, but the request still carries the port it originally came in on in its Host header. For example, Hipache may receive a request with the Host header set to foo.bar.runnable.io:5000.

Hipache receives this request, and attempts to split off the port from the host. If it finds the port, it places it at the front of the hostname, and looks for an address to forward to in Redis. In this example, it looks for frontend:5000.foo.bar.runnable.io, sees it mapped to http://10.0.0.1:49015, and forwards the request (over plain old HTTP) to that address.

However, if Hipache receives the request on port 443, it assumes that the backend it forwards to will be serving HTTPS traffic. Therefore, when it receives such a request, it puts 443 at the beginning of the domain and looks up that key. Since the API placed an https: address in Redis, Hipache continues the request over HTTPS and returns the data.
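
A rough sketch of that key derivation, as described above (illustrative only; the real logic lives inside our Hipache setup, and the fallback for port-less plain-HTTP requests is an assumption on my part):

```typescript
// Illustrative sketch of the lookup-key derivation; not Hipache's actual code.
function redisKeyFor(hostHeader: string, receivedOnTls: boolean): string {
  if (receivedOnTls) {
    // Request arrived on the 443 listener: assume the backend speaks HTTPS
    // and look up the 443 entry the API wrote.
    return `frontend:443.${hostHeader}`;
  }
  // Otherwise split the original port off the Host header and prepend it.
  // (Falling back to "80" when no port is present is an assumption here.)
  const [hostname, port = "80"] = hostHeader.split(":");
  return `frontend:${port}.${hostname}`;
}

redisKeyFor("foo.bar.runnable.io:5000", false); // "frontend:5000.foo.bar.runnable.io"
redisKeyFor("foo.bar.runnable.io", true);       // "frontend:443.foo.bar.runnable.io"
```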

Why only HTTPS on 443?

If we wanted to support a "free SSL" model, where you could put https: in front of your URLs and have your traffic returned over HTTPS, then we would have to take a guess at what you are serving on your port in the container. For example, if I wanted to set up an HTTPS server on port 5000, the API has no idea that the traffic will require SSL, so Hipache will ultimately make a request over HTTP and fail. Making these guesses would be difficult, and frankly, it's a strange practice to support HTTPS over non-standard ports.

By limiting HTTPS support on foo.bar.runnable.io domains to only port 443, we can safely assume (per port-numbering standards) that the application is accepting HTTPS traffic, and make the request over HTTPS. While this eliminates the "free SSL" option (for now, on containers), it allows us to make a much better assumption about the HTTPS requests we receive.

What about x.runnable.io

The logic that allows foo.bar.runnable.io to be port-mapped to kingdom come is separate from the logic for a single subdomain. We assume our own services each live on only one port, which allows us to set up Hipache (through Redis) to forward to whatever port and protocol we deem worthy.
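
For example, a single-subdomain entry for one of our own services might look something like this (the backend address and identifier here are made up purely for illustration):

```typescript
import Redis from "ioredis";

const redis = new Redis();

// Hypothetical single-subdomain entry for an internal service: no port prefix
// in the key, and the backend port/protocol is whatever we choose to run.
async function registerInternalService(): Promise<void> {
  const key = "frontend:x.runnable.io";                  // the single subdomain
  await redis.del(key);
  await redis.rpush(key, "x", "http://10.0.0.2:3000");   // hypothetical backend
}

registerInternalService().finally(() => redis.disconnect());
```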