---
breadcrumb: VMware RabbitMQ for Tanzu Application Service Documentation
title: Deploying the VMware RabbitMQ for Tanzu Application Service Pre-Provisioned Service
owner: London Services
---
## <a id="default"></a>Default Deployment
Deploying <%= vars.product_full %> through <%= vars.ops_manager %> deploys a
RabbitMQ cluster of **3 nodes** by default.
The deployment includes a single `haproxy` load balancer, which spreads connections on all of the default ports, for all of the shipped plugins, across all of the machines within the cluster.
The deployment occurs in a single availability zone (AZ).
The default configuration is intended for testing purposes only. VMware recommends that customers run a minimum of **3 RabbitMQ nodes** and **2 HAProxy nodes**.
The diagram below shows the default <%= vars.product_short %> pre-provisioned deployment.
<%= image_tag("images/deployment_default.png",
:alt => "The default pre-provisioned deployment shows
Tanzu Application Service and an app within it. The app communicates with the HAProxy, which communicates with three RabbitMQ nodes. The HAProxy and RabbitMQ nodes are within
VMware RabbitMQ for Tanzu Application Service, which is within availability zone 1.") %>
### <a id="default-consider"></a>Considerations for the Default Deployment
* Provides high availability for the RabbitMQ cluster
* Queues must be configured to be highly available because they are placed on one node by default (see the sketch after this list)
* Customers should decide which partition behavior is best suited to their use case. For two nodes, 'automatic' is preferred.
* HAProxy is a single point of failure (SPOF)
* The entire deployment is in a single AZ, which does not protect against failures in hardware, networking, and so on
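
Mirroring for classic queues is configured with policies rather than through the tile. The following is a minimal sketch, assuming a policy named `ha-all` applied in the default virtual host; the policy name and queue pattern are illustrative placeholders, not values prescribed by <%= vars.product_short %>:

```
# Hypothetical example: mirror every queue in the default vhost across all
# cluster nodes. The policy name "ha-all" and pattern "^" are placeholders.
rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}' --apply-to queues
```
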
## <a id="recommended"></a>Recommended Deployment
VMware recommends deploying <%= vars.product_short %> across at least two AZs.
Scale RabbitMQ server nodes to an odd number that is greater than or equal to three.
Only use queue replication where required, because it can have a significant impact on system performance.
The HAProxy job instance count should also be increased to match the number of AZs, to ensure that there is an HAProxy located in each AZ.
This removes the HAProxy SPOF and provides further redundancy.
The diagram below shows the recommended <%= vars.product_short %> pre-provisioned deployment.
It shows that with this configuration, even if a single HAProxy and a single RabbitMQ node fail,
your cluster can remain online and apps remain connected.
<%= image_tag("images/deployment_recommended.png",
:alt => "An app is inside 'Tanzu Application Service'. It tries to communicate
to the failed HAProxy and to a functioning HAProxy. The HAProxy
tries to communicate with the failed RabbitMQ node before communicating
with a functioning RabbitMQ node. These components are inside the VMware RabbitMQ for Tanzu Application Service
service, within Availability Zone 1 and Availability Zone 2.") %>
### <a id="upgrade-from-single-az"></a>Upgrading to the Recommended Deployment from a Single AZ Deployment
It is **not** possible to upgrade to this setup from the default deployment across a single AZ.
This is because the AZ setup cannot be changed after the tile has been deployed for the first time, which protects against data loss when moving jobs between AZs.
### <a id="upgrade-from-multi-az"></a>Upgrading to the Recommended Deployment from a Multi AZ Deployment
If you have deployed the tile across two AZs, but with a single HAProxy instance,
you can migrate to this setup by deploying an additional HAProxy instance through <%= vars.ops_manager %>.
Apps that are newly bound or re-bound to the <%= vars.product_short %> service instance see the IPs of both HAProxies immediately.
Existing bound apps continue to work, but use only the previously deployed HAProxy IP address. You can re-bind them at your discretion, as shown in the sketch below.
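
For example, re-binding a single app can look like the following sketch, where `my-app` and `my-rabbitmq` are placeholder names for the app and the service instance:

```
# Placeholder names: replace my-app and my-rabbitmq with your own app and
# service instance.
cf unbind-service my-app my-rabbitmq
cf bind-service my-app my-rabbitmq
# Restage so the app picks up the updated VCAP_SERVICES.
cf restage my-app
```
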
### <a id="recommended-consider"></a>Considerations for the Recommended Deployment
* Requires IaaS configuration for AZs ahead of deploying the <%= vars.product_short %> tile
* Application developers are handed the IPs of each deployed HAProxy in their environment variables (see the sketch after this list)
* Queues must be configured to be highly available because they are placed on one node by default
* Customers should decide which partition behavior is best suited to their use case. For three or more nodes, 'pause_minority' is preferred.
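
App developers can confirm which IPs their app has been handed by inspecting its environment; `my-app` is a placeholder app name:

```
# Prints the app's environment, including the VCAP_SERVICES entry that
# contains the RabbitMQ connection details. "my-app" is a placeholder.
cf env my-app
```
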
## <a id="advanced"></a>Advanced Deployment
This deployment builds on the recommended deployment above and follows the same upgrade paths.
It allows you to replace HAProxy with your own external load balancer.
You might choose to do this so that app developers do not need to know the topology of the <%= vars.product_short %> setup.
The diagram below shows an advanced <%= vars.product_short %> pre-provisioned deployment.
<%= image_tag("images/deployment_advanced.png",
:alt => "Tanzu Application Service has an existing bound app and a re-bound app.
The existing bound app goes through the same sequence as the diagram in the recommended
deployment. It communicates with the working RabbitMQ node through the HAProxy.
The re-bound app communicates with the customer's load balancer, which then communicates
directly to the working RabbitMQ Node.") %>
**Advantages:**
* App developers do not need to handle multiple IPs for the HAProxy jobs in their apps
**Disadvantages:**
* The load balancer must be configured with the IPs of the RabbitMQ nodes.
These are only known after the deployment is finished (see the sketch below).
The IPs should remain the same during subsequent deployments, but there is a risk that they might change.
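
As a sketch of how to look up the node IPs after a deployment finishes, you can use the BOSH CLI. The deployment name below is a placeholder; find the real one with `bosh deployments`:

```
# List the instances (and their IPs) in the RabbitMQ deployment.
# "p-rabbitmq-<guid>" is a placeholder deployment name.
bosh -d p-rabbitmq-<guid> instances
```
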
### <a id="upgrade-from-recommended"></a>Upgrading to the Advanced from the Recommended Deployment
It is possible to first deploy with multiple HAProxy jobs, as per the recommended deployment, and later use your own external load balancer.
This can be achieved without downtime to your apps. Follow these steps to do so:
1. Configure your external load balancer to point to the RabbitMQ Node IPs.
2. Configure the DNS name or IP address for the external load balancer (ELB) on the <%= vars.product_short %> tile in <%= vars.ops_manager %>.
3. Deploy the changes. Any new <%= vars.product_short %> service instances **or** any re-bound connections
will use the DNS name or IP address of the ELB in their `VCAP_SERVICES`.
Any existing service instances continue to use the HAProxy IP addresses in their `VCAP_SERVICES`.
4. Phase the re-binding of existing apps to update their environment variables (see the sketch below).
5. After all apps are updated, reduce the instance count of the `HAProxy` job in <%= vars.ops_manager %> to 0.
6. Deploy the changes.
This approach works because any existing bound apps have their `VCAP_SERVICES` information cached in Cloud Controller, and it is only updated by a re-bind request.
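
As a sketch of phasing step 4, you might re-bind apps in small batches; all app and service instance names below are placeholders:

```
# Re-bind one batch of apps to the service instance "my-rabbitmq",
# then repeat with the next batch. All names are placeholders.
for app in team-a-app team-b-app; do
  cf unbind-service "$app" my-rabbitmq
  cf bind-service "$app" my-rabbitmq
  cf restage "$app"
done
```
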
### <a id="downgrade-to-recommended"></a>Downgrading from the Advanced Deployment to the Recommended Deployment
If you are currently using an external load balancer, you can move back to using HAProxy instead.
To do this, follow the above steps in reverse order and reinstate the HAProxy jobs.
## <a id="resource-requirements"></a>Resource Requirements
The following table shows the default resource and IP requirements for installing the tile:
<table border="1" class="nice">
<tr>
<th>Product</th>
<th>Resource</th>
<th>Instances</th>
<th>Cores</th>
<th>RAM (MB)</th>
<th>Ephemeral Disk (MB)</th>
<th>Persistent Disk (MB)</th>
<th>Static IPs</th>
<th>Dynamic IPs</th>
</tr>
<tr>
<td>RabbitMQ</td>
<td>RabbitMQ node</td>
<td>3</td>
<td>2</td>
<td>8192</td>
<td>16384</td>
<td>30720</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>RabbitMQ</td>
<td>HAProxy for RabbitMQ</td>
<td>1</td>
<td>1</td>
<td>2048</td>
<td>4096</td>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>RabbitMQ</td>
<td><%= vars.product_short %> service broker</td>
<td>1</td>
<td>1</td>
<td>2048</td>
<td>4096</td>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>RabbitMQ</td>
<td>Broker Registrar</td>
<td>1</td>
<td>1</td>
<td>1024</td>
<td>2048</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>RabbitMQ</td>
<td>Broker Deregistrar</td>
<td>1</td>
<td>1</td>
<td>1024</td>
<td>2048</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>RabbitMQ</td>
<td>Smoke Tests</td>
<td>1</td>
<td>1</td>
<td>1024</td>
<td>2048</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>RabbitMQ</td>
<td>RabbitMQ on-demand broker</td>
<td>1</td>
<td>1</td>
<td>1024</td>
<td>8192</td>
<td>1024</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>RabbitMQ</td>
<td>Register On-Demand Service Broker</td>
<td>1</td>
<td>1</td>
<td>1024</td>
<td>2048</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>RabbitMQ</td>
<td>Deregister On-Demand Service Broker</td>
<td>1</td>
<td>1</td>
<td>1024</td>
<td>2048</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>RabbitMQ</td>
<td>Delete All Service Instances</td>
<td>1</td>
<td>1</td>
<td>1024</td>
<td>2048</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>RabbitMQ</td>
<td>Upgrade All Service Instances</td>
<td>1</td>
<td>1</td>
<td>1024</td>
<td>2048</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>RabbitMQ</td>
<td>Recreate All Service Instances</td>
<td>1</td>
<td>1</td>
<td>1024</td>
<td>2048</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
</table>
#### Notes:
* The number of `RabbitMQ node` instances can be increased if required.