Considerations On Setting Up Pade Behind a Reverse Proxy
In this guide we will discuss how to set up Pade behind a reverse proxy. We will look at a single-node and at a clustered setup and discuss the differences between the two. To configure your collaboration server behind a reverse proxy you will first need to understand the traffic flows involved and adjust your environment accordingly. This wiki discusses those flows and presents one possible approach to the configuration.
This guide assumes that there are one or more Openfire server nodes in your environment that you wish to use to offer a collaboration service to users within or outside your network (i.e. the internet). Each Openfire node has the Hazelcast plugin installed, and all nodes can talk to each other and operate as a cluster. Furthermore, Pade 1.6.3 or later is installed on all nodes and you have opened TCP 7443 and 5222 as well as UDP 10000 according to the Pade installation guide. Details of the topology can be seen in the diagrams below.
On a single-node setup there is very little that needs to be done on the Openfire server. If you want to present the service on the well-known TCP port 443 you will need to adjust the "WebSockets Data Channel" configuration, found in the Networking section of the Pade plugin in Openfire: set "Public port used for secure websockets data channel externally (client-to-server):" to 443. If you are presenting the service on port 7443 then you can leave this as is. If you do not want to expose any UDP ports in your environment you will also need to adjust the "Media traffic over TCP" section, also under Networking in the Pade plugin: set "Media traffic over TCP" to enabled and adjust the "Mapped port number:" according to your needs.
If, for example, you want to present TCP 443 to the clients, change that port to 443. Please note that this setting may have an adverse impact on the quality of video calls. This may be more noticeable during screen shares, which may default to a very low resolution.
To set up the reverse proxy you will need to understand the URLs that the service advertises to clients. In your proxy configuration you will also need to account for the connection upgrade that is required for the COLIBRI websocket. Your proxy solution should also support the forwarding of UDP packets. An open-source proxy server that seems to work well with Pade is Nginx, which supports proxying of UDP streams. To enable UDP streams on the server you may, however, need to install the relevant module if it does not come with your installation by default, as shown below.
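On many distributions the stream module is built as a dynamic module (packaged, for example, as libnginx-mod-stream or nginx-mod-stream; the exact package name and module path vary by distribution) and has to be loaded explicitly before it can be used:

# at the very top of nginx.conf, outside of any block;
# the module path may differ on your distribution
load_module modules/ngx_stream_module.so;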
It is important to understand the flows that are involved in this service to be able to set up your equipment accordingly.
The expected behavior is as follows: users navigate to the URL that you have advertised as offering access to the Pade service. For the sake of this example let's assume that this URL is pade.domain.com. Once the appropriate room is defined and the user is authenticated (assuming that you do not allow anonymous users), the browser will be instructed to create a websocket channel with the service, exposed at a URL of pade.domain.com/colibri-ws/. The browser will also attempt to verify availability of the service by attempting connections to a URL like pade.domain.com/pade/keepalive. This exercise does not affect connectivity and is mainly designed to cope with firewalls attempting to close idle client connections.
The service will also present the public IP address of the Jitsi Videobridge instance to the browser. This needs to be a routable address, like 1.1.1.1, that is reachable by your clients; the port the browser is instructed to use is UDP 10000. In your Openfire Pade settings you might need to define this public IP in the "IP Address Mapping" section within the network settings. In this section you might need to define a Local Address, which will be the local address of your Openfire instance, i.e. 192.168.1.1, and a Public Address, which will be the routable IP address of your proxy that listens on UDP port 10000, in our case 1.1.1.1. This may not be required for a single-node setup, since Pade will pick up these settings automatically.
Assuming a scenario where you want to present this service to your users on the URL pade.domain.com, the configuration below should work well for TCP traffic:
upstream Pade_7443 {
    server 192.168.1.1:7443;
}
server {
    listen 443 ssl;
    server_name pade.domain.com;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_ssl_verify off;

    location /colibri-ws/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass https://Pade_7443/colibri-ws/;
    }
    location /ws/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass https://Pade_7443/ws/;
    }
    location /pade/ {
        proxy_pass https://Pade_7443/pade/;
    }
    location /http-bind/ {
        proxy_pass https://Pade_7443/http-bind/;
    }
    location / {
        proxy_pass https://Pade_7443/ofmeet/;
    }
}
The above settings assume that your Openfire server has a local IP of 192.168.1.1 and is accessible to the reverse proxy on that IP.
As can be seen from the configuration, we are instructing the proxy to perform the following translations:
- requests going to "pade.domain.com/colibri-ws/" will be upgraded to websockets and forwarded to "https://192.168.1.1:7443/colibri-ws/"
- requests going to "pade.domain.com/ws/" will also be upgraded and forwarded to "https://192.168.1.1:7443/ws/"
- requests going to "pade.domain.com/http-bind/" will be forwarded to "https://192.168.1.1:7443/http-bind/"; this is required for BOSH
- requests going to "pade.domain.com/pade/" will be forwarded to "https://192.168.1.1:7443/pade/"
- requests at "pade.domain.com" will be forwarded to "https://192.168.1.1:7443/ofmeet/"
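If you find that long, quiet websocket sessions are dropped by the proxy itself rather than by a firewall, you can optionally raise Nginx's proxy timeouts in the websocket locations. Below is a minimal sketch for the /colibri-ws/ location; the 3600s value is only an example and should be tuned to your environment:

location /colibri-ws/ {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    # Nginx closes proxied connections that stay idle for 60s by default;
    # raising the timeouts keeps quiet websocket sessions alive
    proxy_read_timeout 3600s;
    proxy_send_timeout 3600s;
    proxy_pass https://Pade_7443/colibri-ws/;
}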
If you also want to proxy UDP traffic you can use something like this in the stream configuration of the service:
upstream PadeConferencing {
    server 192.168.1.1:10000;
}
server {
    listen 10000 udp;
    proxy_pass PadeConferencing;
}
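Note that upstream blocks and servers with a udp listener must live in Nginx's stream context, which sits alongside the http block rather than inside it. Putting the snippet above in place, assuming the same addresses, the relevant part of nginx.conf would look like this:

# in nginx.conf, at the same level as the http {} block
stream {
    upstream PadeConferencing {
        server 192.168.1.1:10000;
    }
    server {
        listen 10000 udp;
        proxy_pass PadeConferencing;
    }
}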
If you do not want to use UDP traffic you may skip the stream configuration.
For a clustered configuration a few settings will need to be defined on the Openfire server to identify core parameters of your environment to the Openfire service. More specifically:
- For each node you will need to define the private and public address as well as the Octo ID, which is used to configure your video bridges in a cluster.
- For each node you will need to define details that bind the video bridges to specific interfaces. These settings are defined by altering the openfire.xml file of your instance and will be visible in the "Settings" section of the Pade plugin in the admin UI. You will need to open UDP port 4096 on all nodes and allow inbound traffic from the public IP of each node. In terms of your networking infrastructure it is important to note that clustering is handled by the Octo service, which uses your public IP; your private IP does not play any role in the JVB clustering.
If you have restricted internet access for your Openfire nodes, you need to make sure that each node can reach the public IP of the other nodes and that traffic on UDP 4096 is allowed.
Since we are configuring this with reverse proxies, each Openfire node will need to be able to reach the public IPs of the reverse proxies. Furthermore, each video bridge will need its own unique public IP to present to the clients. This means that if you are clustering, for example, 10 Openfire nodes, you will need 10 routable public IPs and an equal number of reverse proxy instances. This is required since each video bridge listens on its own unique public IP and advertises itself to clients with this public IP on UDP port 10000. In terms of the Octo service, it should be noted that the traffic exchanged amongst the nodes on UDP port 4096 is unencrypted. This traffic carries information about the sessions that are currently active on each node and allows failover to take place. It is advisable to restrict access to this data to the Openfire nodes only, or to encapsulate this traffic within a VPN tunnel.
In terms of how you advertise the service to your clients, a possible topology is as follows: you advertise a URL of, let's say, pade.domain.com, which resolves to your primary proxy, which also works as a load balancer for all nodes. You then set up a separate proxy for each node, assuming that you do not want to expose your servers on UDP port 10000 directly, as discussed in the single-node section above. For the primary proxy and load balancer you could use a configuration like the one below if you are using Nginx.
upstream Pade_7443 {
    ip_hash;
    server 192.168.1.1:7443;
    server 192.168.1.2:7443;
    server 192.168.1.3:7443;
    server 192.168.1.4:7443;
    server 192.168.1.5:7443;
}
server {
    listen 443 ssl;
    server_name pade.domain.com;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_ssl_verify off;

    location /colibri-ws/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass https://Pade_7443/colibri-ws/;
    }
    location /ws/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass https://Pade_7443/ws/;
    }
    location /pade/ {
        proxy_pass https://Pade_7443/pade/;
    }
    location /http-bind/ {
        proxy_pass https://Pade_7443/http-bind/;
    }
    location / {
        proxy_pass https://Pade_7443/ofmeet/;
    }
}
You may modify your load-balancing algorithm according to your needs, though you will generally want to keep it session-sticky (as ip_hash is above), so that BOSH and websocket connections from a given client keep landing on the same node. For each node you will also need to set up its respective proxy for the UDP port, as demonstrated above; an illustrative per-node example follows below. Each of your Openfire nodes will need to advertise a resolvable FQDN like pade_01.domain.com, pade_02.domain.com etc. You could either create CNAME records for these domains pointing to your load balancer, or map these hosts to the advertised public IP of each node. Keep in mind that for UDP traffic on port 10000 the video bridge advertises just an IP address, not a hostname.
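As an illustration, the proxy instance that owns the public IP advertised by the second node (all addresses below are examples) would carry a stream configuration such as:

# stream configuration on the dedicated proxy for the second node;
# it forwards the advertised UDP media port to that node's video bridge
stream {
    upstream PadeConferencing_02 {
        server 192.168.1.2:10000;
    }
    server {
        listen 10000 udp;
        proxy_pass PadeConferencing_02;
    }
}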