- DD -> Dynamic Docker
- RPS -> Reverse Proxy Server
docker run -it -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro mkodockx/docker-nginx-proxy:stable
docker run -it -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro mkodockx/docker-nginx-proxy
An nginx-based reverse proxy with dynamic server configuration at runtime. Configuration data is generated automatically, triggered by Docker events.
Easily customizable for specific use cases. Multilevel stacking of instances is possible to provide scalability support.
You can create and scale the backend via (beta) Docker mechanisms: use the docker-compose scale option or operate with Docker Swarm (see the sketch below).
It provides nginx's limiting capabilities with different detection methods by default (connections per IP, request bursts per second).
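To illustrate the scaling workflow mentioned above, here is a minimal sketch. It assumes a docker-compose.yml that defines a backend service; the service name "whoami" and all values are placeholders, not part of this repo:

```sh
# Start the backend stack, then scale the backend service.
# "whoami" is a hypothetical service name defined in your docker-compose.yml
# with a VIRTUAL_HOST environment variable set.
$ docker-compose up -d
$ docker-compose scale whoami=3           # classic Compose syntax
# or, with newer Compose versions:
$ docker-compose up -d --scale whoami=3
```

The proxy picks up the additional backend containers through the Docker events it watches and regenerates its configuration accordingly.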
Jason Wilder created nginx-proxy for Docker, based on his docker-gen and on nginx, which was created by Igor Sysoev.
This fork is a refactoring and extension of Jason's original work. I modified a lot, but I didn't change the core functionality itself. The modifications split up and extend his work so that it meets my demands.
Many thanks to Jason for his great work!
See also the readme in Jason's GitHub repo for more information about docker-gen, and look at his site for information about templates.
Topics covered there include, among others:
- Multi Port
- Multi Hosts
- Wildcard Hosts
- SSL Config
- Basic Auth
- Custom global or per vhost nginx configuration
You will find a copy of Jason's readme at the end of this one. It simply provides the version of his readme that this fork corresponds to.
To make Jason's nginx-proxy more configurable for me without having to provide custom configs/includes, I added some environment variables. (I'm sorry Jason, I know you try to minimize the number of environment variables.)
All of them change the global behavior of nginx and start with the prefix "GLOB_".
Using this image you can easily modify relevant values, configs or features via environment variables.
For example:
- Caching/Proxy-Caching
- SSL: Bundled Certs / CA chains
- SSL: OCSP
- automatic redirects
- CORS support
- nginx worker config
Optional: nginx's general optimisation features can be used easily; they are provided by default but must be activated via environment variables.
Furthermore, I added simple connection/IP-based handling of request peaks.
See details [below](#global-environment-variables).
Changes the global HTTPS enforcement policy for a single container.
Default: the value of [GLOB_SSL_FORCE](#glob_ssl_force), which defaults to "1"
VIRTUAL_SSL_FORCE="<# vals "0" or "1" #>"
docker run -v /var/local/static/images:/var/www/images \
-e VIRTUAL_HOST=example.org \
-e VIRTUAL_SSL_FORCE="0" \
... target/image
Allow additional origins to support CORS.
VIRTUAL_ORIGINS="<proto>://<domain>.<tld>"
docker run -v /var/local/static/images:/var/www/images \
-e VIRTUAL_HOST=example.org \
-e VIRTUAL_ORIGINS=cdn.org \
... target/image
docker run -v /var/local/static/images:/var/www/images \
-e VIRTUAL_HOST=example.org \
-e VIRTUAL_ORIGINS="*" \
... target/image
Last but not least, I tried to improve readability and documentation for easier understanding of the image's working principles (from my point of view). Therefore this repo contains an additional nginx-dev template with a lot of documentation. It is mainly for my own reference, but you may have a look and try it yourself.
The image offers a bunch of environment variables allowing you easy customization and optimization even for more complex features.
The user that will run the proxy server.
Set the nginx maximum body size.
See nginx docs
Enables multi-stage proxy caching to increase performance and reduce latency. Active by default.
Sets the infix string inserted between a cert file's name and its extension. Ignored if GLOB_SSL_CERT_BUNDLE_ENABLED is false.
Change ssl session timeout.
See nginx ssl docs
Modify size of shared ssl session cache.
See nginx ssl docs
Enables the Online Certificate Status Protocol (OCSP) when set to any value != 0. If set to 0, GLOB_SSL_OCSP_DNS_ADDRESSES and GLOB_SSL_OCSP_DNS_TIMEOUT are ignored.
See nginx stapling docs for details
The two DNS servers used to resolve the certificate verification. Unused if GLOB_SSL_OCSP_VALID_TIME = 0. By default it uses a DNS server of the OpenNIC Project and one provided by Google. You can also use only Google's servers by setting this to e.g. "8.8.4.4 8.8.8.8".
See nginx resolver docs for details
Timeout for name resolution. Unused if GLOB_SSL_OCSP_VALID_TIME = 0.
See nginx resolver docs for details
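Putting these together, enabling OCSP could look like the following sketch. It assumes GLOB_SSL_OCSP_VALID_TIME is the enabling variable described above; the concrete values are examples only:

```sh
# Sketch: enable OCSP with Google's resolvers and a custom resolver timeout.
# GLOB_SSL_OCSP_VALID_TIME as the toggle and the values shown are examples/assumptions.
$ docker run -d -p 80:80 -p 443:443 \
    -e GLOB_SSL_OCSP_VALID_TIME="300s" \
    -e GLOB_SSL_OCSP_DNS_ADDRESSES="8.8.4.4 8.8.8.8" \
    -e GLOB_SSL_OCSP_DNS_TIMEOUT="10s" \
    -v /etc/certs:/etc/nginx/certs \
    -v /var/run/docker.sock:/tmp/docker.sock:ro mkodockx/docker-nginx-proxy
```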
Redirects HTTP calls to HTTPS.
Defines a custom SSL redirect port.
To set the default host for nginx, use the env var GLOB_DEFAULT_HOST=foo.bar.com.
Enables the high-speed SPDY protocol over SSL for clients that support it.
See nginx spdy docs
You may want to return a different status code for requests that do not match any configured server.
Tells the proxy to redirect requests from a prefixed domain to the non-prefixed one, and vice versa.
Provides a custom prefix to use with the auto redirect. Ignored if GLOB_AUTO_REDIRECT_WITH_PREFIX_ENABLED is false. Use cases:
- www.domain.org -> domain.org
- domain.org -> www.domain.org
- api.domain.org -> domain.org
- domain.org -> cdn.domain.org
Controls the source and destination of the auto redirect. Ignored if GLOB_AUTO_REDIRECT_WITH_PREFIX_ENABLED is not enabled.
- 0: redirect from prefix to non-prefix
- 1: redirect from non-prefix to prefix
### Info
As HTTPS/SSL is enabled by default, this flag allows access via HTTP as well.
Sets the maximum number of concurrent worker processes for the server.
Specifies the maximum number of simultaneous connections for each worker.
Allows the workers to handle multiple connections at once. Set to 'off' if you want a worker to always handle only one connection at a time.
Defines the highest number of file handles a worker can hold open concurrently.
Sets the log level for the error log output. Choose from: crit, error, warn, info, debug.
The time in seconds a worker will keep a connection to a client open without a request.
The maximum number of idle connections to the upstream backend services that nginx will keep open. It is turned off by default, but with a value like 20 there are always some idle connections available. This reduces the number of HTTP/TCP connections that need to be created from scratch and avoids the so-called HTTP heavy lifting.
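For context, the nginx mechanism behind this setting is the keepalive directive inside an upstream block. A generic illustration in plain nginx syntax (placed in the http context, e.g. a conf.d include), not the exact template this image generates:

```nginx
# Generic illustration of upstream keepalive connections (values are examples)
upstream backend {
    server 172.17.0.2:8080;
    keepalive 20;                        # keep up to 20 idle connections open
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # keepalive to upstreams requires HTTP/1.1
        proxy_set_header Connection "";  # clear the Connection header so connections can be reused
    }
}
```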
By default the nginx configuration file is generated from the run template. For development purposes you can set this variable to 'dev' to get a more readable template to work on. Logically, both the run and the dev template do the same thing.
The version of docker-gen to use.
The Docker host this image works with. You can run a Docker container dedicated to the proxy that manages other Docker containers running the applications.
Defines the maximum number of connections allowed per IP. If the limit is exceeded, the server will not respond to that IP for the next 5 minutes. This protects the proxy from DoS via connection overflow. You may need to raise the limit for bigger applications like JIRA.
Defines the peak number of requests per connection allowed. If a client exceeds it, no more requests will be forwarded. You may need to raise the limit for bigger applications like JIRA.
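For context, nginx expresses these protections with its limit_conn and limit_req modules. The following is a generic sketch of the mechanism in plain nginx syntax (placed in the http context), not this image's exact defaults or generated config:

```nginx
# Generic illustration of per-IP connection and request limiting (example values)
limit_conn_zone $binary_remote_addr zone=perip:10m;
limit_req_zone  $binary_remote_addr zone=perreq:10m rate=10r/s;

server {
    limit_conn perip 10;              # at most 10 concurrent connections per client IP
    limit_req  zone=perreq burst=20;  # queue short bursts of up to 20 extra requests
}
```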
Defines a timeout for reading a response from the proxied server. The timeout is set only between two successive read operations, not for the transmission of the whole response. If the proxied server does not transmit anything within this time, the connection is closed.
docker run -d --name "mgt-op" \
-p 80:80 -p 443:443 \
-e GLOB_MAX_BODY_SIZE="1g" \
-e GLOB_SSL_SESSION_TIMEOUT="10m" \
-e GLOB_SSL_SESSION_CACHE="100m" \
-e GLOB_SSL_CERT_BUNDLE_INFIX=".chained" \
-e GLOB_ALLOW_HTTP_FALLBACK="1" \
-e GLOB_HTTPS_FORCE="0" \
-e GLOB_SPDY_ENABLED="1" \
-e GLOB_HTTP_NO_SERVICE="404" \
-e GLOB_AUTO_REDIRECT_WITH_PREFIX_ENABLED="1" \
-e GLOB_AUTO_REDIRECT_PREFIX="subservice" \
-e AUTO_REDIRECT_DIRECTION="1" \
-v /etc/certs:/etc/nginx/certs \
-v /var/run/docker.sock:/tmp/docker.sock:ro mkodockx/docker-nginx-proxy
Further information about several settings, the reasoning behind them and their consequences. Just short explanations.
The effect depends somewhat on the use case and the kernel in use. But in general, the sendfile implementation simply 'pipes' (not exactly) information from one file descriptor (FD) to another within the kernel, giving fairly direct disk I/O.
By default sendfile is off. That causes the kernel to first read data from one FD and then write it to the target FD.
time of (read() + write()) > time of (sendfile())
The TCP stack implements a mechanism to avoid sending packets that are too small: it waits for a certain time to guarantee a packet is filled. In the UNIX implementation this wait is about 200 ms. The mechanism is called Nagle's algorithm and was defined in 1984.
Today most of the data sent within a request/response exceeds the size of one frame, but it is impossible to fill a frame exactly to the limit. So for an estimated 90% of traffic we get a useless 200 ms wait for each packet.
Activating this option will add the TCP_NODELAY option on the current connection's TCP stack.
While tcp_nodelay reduces waiting time, tcp_nopush tries to reduce the amount of data transmitted. As only FreeBSD implements TCP_NOPUSH in its TCP stack, nginx activates the TCP_CORK option on Linux instead.
Just like a real cork it blocks the outgoing packet until it reaches the critical mass to be worth transferring.
The mechanism is pretty well documented in the Linux kernel source code.
I leave it to you to combine the three effects and realize their benefits.
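For reference, the three directives discussed above appear like this in an nginx http block. This is a generic snippet, not necessarily this image's exact defaults:

```nginx
http {
    sendfile    on;   # copy data between file descriptors inside the kernel
    tcp_nopush  on;   # with sendfile: fill packets before sending (TCP_CORK on Linux)
    tcp_nodelay on;   # disable Nagle's algorithm so small packets are not delayed
}
```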
If you don't get it, Frederic wrote a pretty good post on this topic (original (French)/English).
nginx-proxy sets up a container running nginx and docker-gen. docker-gen generates reverse proxy configs for nginx and reloads nginx when containers are started and stopped.
See Automated Nginx Reverse Proxy for Docker for why you might want to use this.
The port redirected to is the host port bound to 443. You can also set a custom redirect port with the environment variable HTTPS_REDIRECT_PORT.
$ docker run -d -p 8080:80 -p 8443:443 mazelab/nginx-proxy
# -> queries to 8080 will be redirected to 8443
$ docker run -d -p 8080:80 -p 8443:443 -e HTTPS_REDIRECT_PORT=443 mazelab/nginx-proxy
# -> queries to 8080 will be redirected to 443
To run it:
$ docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
Then start any containers you want proxied with an env var VIRTUAL_HOST=subdomain.yourdomain.com
$ docker run -e VIRTUAL_HOST=foo.bar.com ...
Provided your DNS is set up to forward foo.bar.com to the host running nginx-proxy, the request will be routed to a container with the VIRTUAL_HOST env var set.
If your container exposes multiple ports, nginx-proxy will default to the service running on port 80. If you need to specify a different port, you can set a VIRTUAL_PORT env var to select a different one. If your container only exposes one port and it has a VIRTUAL_HOST env var set, that port will be selected.
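For example, proxying a backend that listens on a port other than 80 might look like this (the port number and the remaining run arguments are placeholders):

```sh
# Route foo.bar.com to this container's service on port 8080 instead of 80
$ docker run -e VIRTUAL_HOST=foo.bar.com -e VIRTUAL_PORT=8080 ...
```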
If you need to support multiple virtual hosts for a container, you can separate each entry with commas. For example, foo.bar.com,baz.bar.com,bar.com, and each host will be set up the same.
You can also use wildcards at the beginning and the end of the host name, like `*.bar.com` or `foo.bar.*`. Or even a regular expression, which can be very useful in conjunction with a wildcard DNS service like xip.io: using `~^foo\.bar\..*\.xip\.io` will match `foo.bar.127.0.0.1.xip.io`, `foo.bar.10.0.2.2.xip.io` and all other given IPs. More information about this topic can be found in the nginx documentation about server_names.
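For instance (the remaining run arguments are placeholders):

```sh
# Wildcard at the start of the host name
$ docker run -e VIRTUAL_HOST='*.bar.com' ...
# Regular expression matching foo.bar.<any-ip>.xip.io
$ docker run -e VIRTUAL_HOST='~^foo\.bar\..*\.xip\.io' ...
```

Quoting the value keeps the shell from expanding the wildcard or mangling the escapes.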
If you would like to connect to your backend using HTTPS instead of HTTP, set `VIRTUAL_PROTO=https` on the backend container.
To set the default host for nginx use the env var `DEFAULT_HOST=foo.bar.com`, for example:
$ docker run -d -p 80:80 -e DEFAULT_HOST=foo.bar.com -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
nginx-proxy can also be run as two separate containers using the jwilder/docker-gen image and the official nginx image.
You may want to do this to prevent having the docker socket bound to a publicly exposed container service.
To run nginx proxy as a separate container you'll need to have nginx.tmpl on your host system.
First start nginx with a volume:
$ docker run -d -p 80:80 --name nginx -v /tmp/nginx:/etc/nginx/conf.d -t nginx
Then start the docker-gen container with the shared volume and template:
$ docker run --volumes-from nginx \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
-v $(pwd):/etc/docker-gen/templates \
-t jwilder/docker-gen -notify-sighup nginx -watch -only-exposed /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
Finally, start your containers with `VIRTUAL_HOST` environment variables.
$ docker run -e VIRTUAL_HOST=foo.bar.com ...
SSL is supported using single host, wildcard and SNI certificates using naming conventions for certificates or optionally specifying a cert name (for SNI) as an environment variable.
To enable SSL:
$ docker run -d -p 80:80 -p 443:443 -v /path/to/certs:/etc/nginx/certs -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
The contents of `/path/to/certs` should contain the certificates and private keys for any virtual hosts in use. The certificate and keys should be named after the virtual host with a `.crt` and `.key` extension. For example, a container with `VIRTUAL_HOST=foo.bar.com` should have a `foo.bar.com.crt` and `foo.bar.com.key` file in the certs directory.
If you have Diffie-Hellman groups enabled, the files should be named after the virtual host with a `dhparam` suffix and `.pem` extension. For example, a container with `VIRTUAL_HOST=foo.bar.com` should have a `foo.bar.com.dhparam.pem` file in the certs directory.
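Such a parameter file can be generated with openssl, for instance (path and key size are just examples):

```sh
# Generate Diffie-Hellman parameters for foo.bar.com (this can take a while)
$ openssl dhparam -out /path/to/certs/foo.bar.com.dhparam.pem 2048
```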
Wildcard certificates and keys should be named after the domain name with a `.crt` and `.key` extension. For example, `VIRTUAL_HOST=foo.bar.com` would use cert name `bar.com.crt` and `bar.com.key`.
If your certificate(s) support multiple domain names, you can start a container with `CERT_NAME=<name>` to identify the certificate to be used. For example, a certificate for `*.foo.com` and `*.bar.com` could be named `shared.crt` and `shared.key`. A container running with `VIRTUAL_HOST=foo.bar.com` and `CERT_NAME=shared` will then use this shared cert.
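For example (the remaining run arguments are placeholders):

```sh
# Serve foo.bar.com with the shared.crt/shared.key pair from the certs directory
$ docker run -e VIRTUAL_HOST=foo.bar.com -e CERT_NAME=shared ...
```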
The SSL cipher configuration is based on the Mozilla nginx intermediate profile, which should provide compatibility with clients back to Firefox 1, Chrome 1, IE 7, Opera 5, Safari 1, Windows XP IE8, Android 2.3 and Java 7. The configuration also enables HSTS and SSL session caches.
The behavior for the proxy when port 80 and 443 are exposed is as follows:
- If a container has a usable cert, port 80 will redirect to 443 for that container so that HTTPS is always preferred when available.
- If the container does not have a usable cert, a 503 will be returned.
Note that in the latter case, a browser may get a connection error as no certificate is available to establish a connection. A self-signed or generic cert named `default.crt` and `default.key` will allow a client browser to make an SSL connection (likely with a warning) and subsequently receive a 503.
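One way to create such a fallback cert is a self-signed certificate generated with openssl; an example invocation (subject and validity are arbitrary):

```sh
# Create a self-signed fallback certificate named default.crt/default.key
$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj '/CN=default' \
    -keyout /path/to/certs/default.key \
    -out /path/to/certs/default.crt
```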
In order to secure your virtual host, you have to create a file named after its VIRTUAL_HOST variable in the directory `/etc/nginx/htpasswd`, i.e. `/etc/nginx/htpasswd/$VIRTUAL_HOST`:
$ docker run -d -p 80:80 -p 443:443 \
-v /path/to/htpasswd:/etc/nginx/htpasswd \
-v /path/to/certs:/etc/nginx/certs \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
jwilder/nginx-proxy
You'll need apache2-utils on the machine where you plan to create the htpasswd file. Follow these instructions.
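For example, with apache2-utils installed, a password file for foo.bar.com could be created like this (the user name is a placeholder):

```sh
# Create (-c) an htpasswd file named after the virtual host and add user "someuser"
$ htpasswd -c /path/to/htpasswd/foo.bar.com someuser
```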
If you need to configure Nginx beyond what is possible using environment variables, you can provide custom configuration files on either a proxy-wide or per-`VIRTUAL_HOST` basis.

To add settings on a proxy-wide basis, add your configuration file under `/etc/nginx/conf.d` using a name ending in `.conf`. This can be done in a derived image by creating the file in a `RUN` command or by `COPY`ing the file into `conf.d`:
FROM jwilder/nginx-proxy
RUN { \
echo 'server_tokens off;'; \
echo 'client_max_body_size 100m;'; \
} > /etc/nginx/conf.d/my_proxy.conf
Or it can be done by mounting your custom configuration in your `docker run` command:
$ docker run -d -p 80:80 -p 443:443 -v /path/to/my_proxy.conf:/etc/nginx/conf.d/my_proxy.conf:ro -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
To add settings on a per-`VIRTUAL_HOST` basis, add your configuration file under `/etc/nginx/vhost.d`. Unlike in the proxy-wide case, which allows multiple config files with any name ending in `.conf`, the per-`VIRTUAL_HOST` file must be named exactly after the `VIRTUAL_HOST`.

In order to allow virtual hosts to be dynamically configured as backends are added and removed, it makes the most sense to mount an external directory as `/etc/nginx/vhost.d`, as opposed to using derived images or mounting individual configuration files.
For example, if you have a virtual host named `app.example.com`, you could provide a custom configuration for that host as follows:
$ docker run -d -p 80:80 -p 443:443 -v /path/to/vhost.d:/etc/nginx/vhost.d:ro -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
$ { echo 'server_tokens off;'; echo 'client_max_body_size 100m;'; } > /path/to/vhost.d/app.example.com
If you are using multiple hostnames for a single container (e.g. `VIRTUAL_HOST=example.com,www.example.com`), the virtual host configuration file must exist for each hostname. If you would like to use the same configuration for multiple virtual host names, you can use a symlink:
$ { echo 'server_tokens off;'; echo 'client_max_body_size 100m;'; } > /path/to/vhost.d/www.example.com
$ ln -s www.example.com /path/to/vhost.d/example.com