Commit: October 2021 upstream (#147)
* Sync internal repo with the external repo

* Remove beta folder

* Remove dev files

* Removed code to create application load balancer

* Updated documentation folder

* Removed a few unused files

* Revert outlook plugin change in httpd.conf.j2

* Revert test outlook-addin to work behind nginx-ingress port 32080

* Revert was_fixes_repository_url changes
nitinjagjivan authored Oct 8, 2021
1 parent 497ef1d commit ec8844f
Showing 96 changed files with 1,904 additions and 566 deletions.
53 changes: 27 additions & 26 deletions documentation/QUICKSTART.md
@@ -1,24 +1,25 @@
# Quick start for setting up HCL Connections and Component Pack using Ansible automation

This is just an example of setting up your first HCL Connections and Component Pack environment, with Customizer configured.

To set this up, you will need at least four machines (for this example, let us say we use CentOS 7):

- ansible.internal.example.com is going to run Ansible commands (i.e. it is the Ansible controller). A typical laptop-grade environment should suffice.
- web.internal.example.com is going to host, in this example, only Nginx and Haproxy. This is needed here only for the Customizer. At least 1 CPU and 2G of RAM are preferable.
- connections.internal.example.com is going to host IBM WebSphere, IHS and HCL Connections. We will also put OpenLDAP with 10 users and IBM DB2 here. NFS will be set up for shared data and message stores. HCL Connections will be deployed as a small topology (single JVM). Here you need at least two CPUs and at least 16G of RAM to have everything self-contained.
- cp.internal.example.com is going to host Kubernetes 1.18.18, be the NFS server for persistent volumes and the Docker Registry, and run Component Pack on top of it. You need at least 32G of RAM and at least 8 CPUs to install the full offering.

Once the installation is done, we will access our HCL Connections login page through https://connections.example.com/

Example inventory files for this Quick Start can be found in the environments/examples/cnx7/quick_start folder.

# Setting up your environment

For this example, we will use only the four machines described above for all the work. One of them is going to be the Ansible controller, another one is going to serve the files as described in README.md.

## Before you start

* Ensure that all four machines have DNS set up in a consistent way:

```
[web.internal.example.com]$ hostname
web
...
[cp.internal.example.com]$ hostname
cp
```

* Update all four machines using yum update or dnf update before you proceed. This is important; otherwise the deployment will fail due to missing packages.
* Update your /etc/hosts on each of those machines with the proper IP, short name and long name (see the sketch after this list)
* On all four machines, ensure that the content of the /etc/environment file is as follows:

```
...
LC_ALL=en_US.utf-8
```
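
For the /etc/hosts entries mentioned above, a minimal sketch using this example's hostnames (the IP addresses are placeholders, not values from this guide):

```
192.0.2.10  ansible.internal.example.com      ansible
192.0.2.11  web.internal.example.com          web
192.0.2.12  connections.internal.example.com  connections
192.0.2.13  cp.internal.example.com           cp
```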

## Setting up the user

Let's say we will use a user called ansible to execute Ansible commands. For this example, we will use ansible.internal.example.com for running the Ansible commands.

As a very first step, create a user called ansible on all four machines, and ensure it is in the sudoers file in such a way that it can sudo without being asked for the password.
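
A minimal sketch of that step, assuming a CentOS 7 style box and run as root on each of the four machines (the sudoers drop-in file name is just an example):

```
useradd ansible
passwd ansible
echo 'ansible ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/ansible
chmod 0440 /etc/sudoers.d/ansible
```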

Once this is done, on ansible.internal.example.com (which is going to be our Ansible controller) switch to the user ansible (sudo su - ansible) and create your key pair with:

```
[ansible@ansible ~]$ ssh-keygen -t rsa
```

Just press Enter for each question (leave everything as default).

Go to the .ssh folder of your ansible user, and create a file called config with the following content:

```
[ansible@ansible ~]$ cat ~/.ssh/config
Host *
  User ansible
  ForwardAgent yes
  ...
  IdentityFile ~/.ssh/id_rsa
```

Allow the ansible user to SSH to the other three machines without being asked for the password by copying the content of id_rsa.pub to authorized_keys on the other hosts:

```
[ansible@ansible ~]$ cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCvupayZq/4h+vrzisZAa4Yx/JqbgRFPu5WSAO5YIkws/3fdXGcFFFx2DXdIcvFT+70SSE0Cwh5520K1ypK6/M2WXhJhu7gz/7eldWFOuFvT9XF4zRq90A5DemwYJALclHz3Kecq5/uE7hrSg7ojYRGow3qPO4F5kfiFSH/mRoxRj2990tbHOfNV3R45A6qoPk/POFU61DFFt/o42jm5IsKg40mFCRUIOez477b51CgIhEnMeL6tIPjdM7jYblnpf+gMeg8Ulz4OGdBhqQhJJfeRyYMRghxkb9/2uXIlhlCxHZH+HnIru67X4CAVpHAuO3pFX/9L9NEUooaLh723$1 [email protected]
```
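
If you prefer, ssh-copy-id can do the copying for you; a minimal sketch (it will prompt once for the ansible password on each target host):

```
[ansible@ansible ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected]
[ansible@ansible ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected]
[ansible@ansible ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected]
```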

From ansible.internal.example.com you should now be able, as the ansible user, to SSH to web.internal.example.com, connections.internal.example.com and cp.internal.example.com without being prompted for the password.

As the last step, customize .bashrc for your ansible user like this:

```
...
export ANSIBLE_HOST_KEY_CHECKING=False
eval "$(ssh-agent)"
```

The environment variable that you are setting here will save you the time of typing yes every time Ansible hits a new host. The last command will ensure that you use only the key and the keychain from the ansible user itself.
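
If you also want the key loaded into that agent automatically, one optional extra line for the same .bashrc (an assumption, not part of the original snippet):

```
ssh-add ~/.ssh/id_rsa
```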

### ...but if you use the root user

Please note that you need to either disable password login for the root user in your SSH configuration, or run the Ansible playbooks using usernames/passwords, because the scripts will otherwise block when they try to go from machine to machine. For both security and practical reasons, we don't recommend using the root user directly for this.

## Installing Ansible

Ansible needs to be installed only on the controller machine; in our example that is ansible.internal.example.com.

```
[ansible@ansible ~]$ sudo yum install ansible
[ansible@ansible ~]$ ansible --version
ansible 2.9.15
  ...
  python version = 2.7.5 (default, Oct 14 2020, 14:45:30) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
```

Once this is done, you are ready to roll!

# Setting up the full HCL Connections stack

Don't forget to install git, and to clone [email protected]:HCL-TECH-SOFTWARE/connections-automation.git to be able to proceed.

## Setting up the file share

As explained in README.md, you first need to download all the needed packages to be able to install them (as you would have to even if you were doing it manually).

For this example, we will download all the packages to the /tmp folder on connections.internal.example.com and serve them from there. In general, keeping anything in /tmp is not a good idea, but since we are setting up a playground, it doesn't really matter here. You can of course use any other folder that you choose; just ensure that you have enough disk space for downloading everything.

Once you have downloaded all the needed packages to your /tmp folder on connections.internal.example.com, the easiest way to start a very simple web server just for this example is to use Ruby:
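
A minimal sketch of such a server, assuming Ruby (1.9.2 or newer) is installed on connections.internal.example.com; the exact command in your setup may differ:

```
[ansible@connections ~]$ cd /tmp
[ansible@connections tmp]$ ruby -run -e httpd . -p 8001
```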

And you are good to go - you've just started a web server on port 8001 inside your /tmp folder.
There are two things you need to adapt before you try the installation:

- Change FQDNs in inventory.ini to match your own FQDNs.
- Edit group_vars/all.yml and, in the mandatory section, change the FQDN pointing to your Haproxy (cnx_component_pack_ingress) and the URL which will be used as the entry point to your HCL Connections environment (dynamicHost on the CNX side, i.e. cnx_application_ingress), as shown in the sketch below.
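
A minimal sketch of those two values in group_vars/all.yml, using the hostnames from this example (the exact value format should match the sample file shipped with the example inventory):

```
# mandatory section - example values only
cnx_component_pack_ingress: web.internal.example.com
cnx_application_ingress: connections.example.com
```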

## Setting up HCL Connections with all the dependencies
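
A hedged sketch of the kind of command this section covers, assuming the full-stack playbook keeps the naming pattern used elsewhere in this repository (check the playbooks/hcl folder for the exact file name):

```
# hypothetical example - verify the playbook name in playbooks/hcl/ before running
ansible-playbook -i environments/examples/cnx7/quick_start/inventory.ini playbooks/hcl/setup-connections-complete.yml
```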

To set up Connections Docs 2.0.1, just run:

```
ansible-playbook -i environments/examples/cnx7/quick_start/inventory.ini playbooks/hcl/setup-connections-docs.yml
```

Note: if you are using the old format of inventory files, it is all backwards compatible. The only thing that you need to add there is cnx_was_servers to your connections inventory (to make it the same as already done for Docs).
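
A minimal sketch of that addition to inventory.ini, reusing the single Connections host from this example (list whatever WAS servers you actually have):

```
[cnx_was_servers]
connections.internal.example.com
```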

# Validating your installation

# Frequently Given Answers

* Please check the troubleshooting section if you have any issues.
* Yes, you have to have DNS (proper DNS) set up. It will not work, and it cannot work, using only local hosts files, for various reasons which are not the topic of this automation.
* We don't plan to automate any type of DNS setup.
* Postfix (or any other mail server) is intentionally not installed.
* Feel free to customize it to the best of your knowledge; it's under the Apache license after all, and that was the intention.
18 changes: 18 additions & 0 deletions documentation/howtos/other_useful_playbooks.md
@@ -0,0 +1,18 @@
# Optional but Useful Playbooks

This document describes a list of playbooks for optional tasks.

## Set applications user/group mapping to All Authenticated
This playbook is useful when removing anonymous access from Connections apps.
To set roles to "All Authenticated in Application's Realm", add the following to the inventory
```
restrict_reader_access: true
```
To set roles to "All Authenticated in Trusted Realms", add the following to the inventory
```
restrict_reader_access__trusted_realms: true
```
Then run this playbook:
```
ansible-playbook -i environments/examples/cnx7/connections playbooks/hcl/connections-restrict-access.yml
```
1 change: 1 addition & 0 deletions environments/examples/cnx7/db2/group_vars/all.yml
@@ -65,6 +65,7 @@ cnx_force_repopulation: True

cnx_enable_moderation: true
cnx_enable_invite: true
cnx_enable_full_icec: true

enable_prometheus_jmx_exporter: True

12 changes: 0 additions & 12 deletions playbooks/hcl/cleanup.yml

This file was deleted.

7 changes: 7 additions & 0 deletions playbooks/hcl/cleanup/cleanup-containerd-images.yml
@@ -0,0 +1,7 @@
# Cleanup
---
- name: Cleanup Containerd Images
  hosts: k8s_masters, k8s_workers
  become: true
  roles:
    - roles/hcl/cleanup/cleanup-containerd-images
31 changes: 31 additions & 0 deletions playbooks/hcl/cleanup/cleanup-nfs.yml
@@ -0,0 +1,31 @@
# Cleanup
---
- name: Unmount NFS mounts on nfs_servers
  hosts: nfs_servers
  become: true
  roles:
    - roles/hcl/cleanup/cleanup-nfs-data/cleanup-dmgr-data

- name: Unmount NFS mounts on component pack master
  hosts: component_pack_master
  become: true
  roles:
    - roles/hcl/cleanup/cleanup-nfs-data/cleanup-cp-data

- name: Unmount Docs and Viewer NFS shares for Conversion, Docs and Viewer and Unmount CNX data NFS shares for Viewer
  hosts: docs_servers, conversion_servers, viewer_servers
  become: true
  roles:
    - roles/hcl/cleanup/cleanup-nfs-data/cleanup-nfs-docs-data

- name: Cleanup NFS data on NFS master servers
  hosts: nfs_servers
  become: true
  roles:
    - roles/hcl/cleanup/cleanup-nfs-data

- name: Cleanup WebSphere and Connections folders
  hosts: dmgr, was_servers
  become: true
  roles:
    - roles/hcl/cleanup/cleanup-nfs-data/cleanup-websphere-folders
47 changes: 47 additions & 0 deletions playbooks/hcl/cleanup/cleanup.yml
@@ -0,0 +1,47 @@
# Cleanup
---
- name: Cleanup WAS
  vars:
    force_destroy_kubernetes: false
    force_destroy_db2: false
    force_destroy_ihs: false
    force_destroy_oracle: false
  import_playbook: cleanup-was.yml

- name: Cleanup IHS
  vars:
    force_destroy_kubernetes: false
    force_destroy_db2: false
    force_destroy_websphere: false
    force_destroy_oracle: false
  import_playbook: cleanup-ihs.yml

- name: Cleanup Kubernetes
  vars:
    force_destroy_ihs: false
    force_destroy_db2: false
    force_destroy_websphere: false
    force_destroy_oracle: false
  import_playbook: cleanup-k8s.yml

- name: Cleanup DB2 database
  vars:
    force_destroy_ihs: false
    force_destroy_kubernetes: false
    force_destroy_websphere: false
    force_destroy_oracle: false
  import_playbook: cleanup-db2.yml

- name: Cleanup Oracle database
  vars:
    force_destroy_ihs: false
    force_destroy_kubernetes: false
    force_destroy_websphere: false
    force_destroy_db2: false
  import_playbook: cleanup-oracle.yml

- name: Cleanup NFS
  import_playbook: cleanup-nfs.yml

- name: Cleanup Containerd Images
  import_playbook: cleanup-containerd-images.yml
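
A hedged usage sketch for running the combined cleanup above against the quick-start example inventory (adjust the inventory path to your own environment):

```
ansible-playbook -i environments/examples/cnx7/quick_start/inventory.ini playbooks/hcl/cleanup/cleanup.yml
```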
47 changes: 47 additions & 0 deletions playbooks/hcl/connections-setup-azure-oidc.yml
@@ -0,0 +1,47 @@
# Setup Azure AD OIDC
# Run after Component Pack if needed since it overwrites some Teams Integration config
---
- name: Gather facts
  hosts: dmgr
  tasks: []

- name: Start Dmgr
  hosts: dmgr
  become: true
  roles:
    - roles/third_party/ibm/wasnd/was-dmgr-start

- name: Start WAS Nodes
  hosts: was_servers
  serial: 1
  become: true
  roles:
    - roles/third_party/ibm/wasnd/was-nodeagent-start

- name: Start CNX Clusters
  hosts: dmgr
  become: true
  roles:
    - roles/third_party/ibm/wasnd/was-dmgr-start-cluster

- name: Setup Azure AD OIDC authentication for Connections
  hosts: dmgr
  become: true
  roles:
    - roles/hcl/connections/setup_azure_oidc

- name: Set application security roles to All Authenticated in Trusted Realm
  import_playbook: connections-restrict-access.yml

- name: Synchronize WAS nodes
  hosts: dmgr
  become: true
  roles:
    - roles/third_party/ibm/wasnd/was-dmgr-full-sync-nodes

- name: Restart CNX Clusters
  hosts: dmgr
  become: true
  roles:
    - roles/third_party/ibm/wasnd/was-dmgr-stop-cluster
    - roles/third_party/ibm/wasnd/was-dmgr-start-cluster
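
A hedged usage sketch for the playbook above, run after Component Pack as its header comment notes (the inventory path is just the quick-start example):

```
ansible-playbook -i environments/examples/cnx7/quick_start/inventory.ini playbooks/hcl/connections-setup-azure-oidc.yml
```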
19 changes: 19 additions & 0 deletions playbooks/hcl/fixup/clean-was-temp.yml
@@ -0,0 +1,19 @@
---
- name: Clean WAS temp folders
  hosts: dmgr, was_servers
  become: true
  roles:
    - roles/hcl/connections/clean_was_temp

- name: Synchronize WAS nodes
  hosts: dmgr
  become: true
  roles:
    - roles/third_party/ibm/wasnd/was-dmgr-full-sync-nodes

- name: Restart CNX Clusters
  hosts: dmgr
  become: true
  roles:
    - roles/third_party/ibm/wasnd/was-dmgr-stop-cluster
    - roles/third_party/ibm/wasnd/was-dmgr-start-cluster
20 changes: 20 additions & 0 deletions playbooks/hcl/fixup/component-pack-configure-es.yml
@@ -0,0 +1,20 @@
---
# This playbook runs enable_es_metrics.yml standalone if needed. It is already part of playbooks/hcl/setup-component-pack.yml.

- name: Gather facts
  hosts: component_pack_master
  tasks: []

- name: Configure ElasticSearch cert, setup for QuickResults and metrics
  hosts: component_pack_master
  become: true
  tasks:
    - include_vars: ../../../roles/hcl/beta/component-pack/vars/main.yml
    - include: ../../../roles/hcl/beta/component-pack/tasks/enable_es_metrics.yml
      vars:
        __config_blue_metrics_file: ../../../roles/hcl/beta/component-pack/files/config_blue_metrics.py
        __es_merge_template: ../../../roles/hcl/beta/component-pack/templates/merge-es-certificates-to-keystore.j2
        __es_metrics_enable_template: ../../../roles/hcl/beta/component-pack/templates/enable-es-for-metrics.j2
      when:
        - __enable_es_metrics |bool
        - not __skip_connections_integration
6 changes: 6 additions & 0 deletions playbooks/hcl/setup-connections-ifix.yml
@@ -28,6 +28,12 @@
  roles:
    - roles/hcl/connections/ifix

- name: Clean WAS temp folders
  hosts: dmgr, was_servers
  become: true
  roles:
    - roles/hcl/connections/clean_was_temp

- name: Start WAS Nodes
  hosts: was_servers
  become: true