New PR for CAP deployment with one example log file. #2

Open
wants to merge 23 commits into base: master
23 changes: 21 additions & 2 deletions .scf-config-values.template
@@ -2,10 +2,10 @@

secrets:
# Password for user 'admin' in the cluster
CLUSTER_ADMIN_PASSWORD: password
CLUSTER_ADMIN_PASSWORD: <password>

# Password for SCF to authenticate with UAA
UAA_ADMIN_CLIENT_SECRET: password
UAA_ADMIN_CLIENT_SECRET: <password>

env:
# Use the public IP address
@@ -38,3 +38,22 @@ kube:

services:
loadbalanced: true
kubernetes:
authEndpoint: https://<fqdn>:443
prometheus:
kubeStateMetrics:
enabled: true
nginx:
username: admin
password: <password>
firehoseExporter:
dopplerUrl: wss://doppler.<domain>:4443
uaa:
endpoint: uaa.<domain>:2793
skipSslVerification: "true"
cfIdentityZone: cf
admin:
client: admin
clientSecret: <password>


110 changes: 100 additions & 10 deletions README.md
@@ -6,8 +6,25 @@ These scripts are for internal use, as they rely on some fixed presets.
* Get preconfigured scf-config-values.yaml files
* Stop/start your clusters (VMs) by pointing to the config file
* Delete clusters by pointing to the config file


* > Additions JML 280219:
* > Use the aks-cluster-config.conf file for your deployment (allows versioning of Azure objects while testing).
* > Deploy CAP + OSBA & components & a first MySQL/Rails application through a menu-driven approach:
```bash
1) Quit 9) Create Azure SB 17) AZ List Mysql DBs to Disable
2) Review scfConfig 10) Deploy OSBA 18) AZ Disable SSL Mysql DBs
3) Deploy UAA 11) Pods OSBA 19) Deploy 1st Rails Appl
4) Pods UAA 12) CF API set 20) Deploy Stratos SCF Console
5) Deploy SCF 13) CF Add SB 21) Pods Stratos
6) Pods SCF 14) CF CreateOrgSpace 22) Deploy Metrics
7) Deploy CATALOG 15) CF 1st Service 23) Pods Metrics
8) Pods CATALOG 16) CF 1st Service Status
```
* Added the new script `deploy_cap_on_aks_automated.sh` for a fully automated, unattended install of CAP on AKS, as an alternative to the full menu:
```bash
1) Quit
2) Review scfConfig
3) Deploy CAP All Steps
```
# Prerequisites

The scripts are based on the steps from our official [Documentation](https://www.suse.com/documentation/cloud-application-platform-1/book_cap_deployment/data/cha_cap_depl-azure.html).
@@ -24,8 +41,9 @@ Tested with the following versions:

The scripts take configuration files. "example.conf" is the default one, which is used if no configuration file is given.
You need to modify "example.conf" for your needs; I recommend copying it to e.g. "myaks.conf" and modifying that copy.
You may also use aks-cluster-config.conf if you run multiple attempts/versions in parallel.
```bash
$ <script> -c myaks.conf
$ <script> -c aks-cluster-config.conf
```
This way you can save configurations and even manage various different test and demo clusters.

@@ -42,26 +60,98 @@ In addition to deploying the AKS cluster, it does
It also creates a directory (e.g. "CAP-AKS-2018-12-14_10h00_test1") for each deployment with
* a log file for that deployment
* the kubeconfig file for your AKS cluster
* a preconfigured scf-config-values.yaml for your CAP deployment
* a preconfigured `scf-config-values.yaml` for your CAP deployment

E.g. run
```bash
./deploy-cap-aks-cluster.sh -c test1.conf
./deploy-cap-aks-cluster.sh -c aks-cluster-config.conf
```
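After a run, the per-deployment directory contains the three items listed above; a quick illustrative check (the directory name depends on your date, time and cluster name) might look like:
```bash
ls CAP-AKS-2018-12-14_10h00_test1/
# deployment.log           # log file for this deployment
# kubeconfig               # kubeconfig file for the new AKS cluster
# scf-config-values.yaml   # preconfigured values for the CAP helm charts
```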


# Deploying CAP on top

deploy-cap-aks-cluster.sh leaves you with a rough guide on what to do next, in order to deploy CAP on the fresh AKS cluster.
The first thing you'll need to do is to use the kubeconfig with your current shell.
The cluster is now ready, and the procedure to deploy CAP 1.3 on it is the following:
* Copy the `init_aks_env.sh` example file, which defines the ENVVARs required by the CAP deployment script.
```bash
cp init_aks_env.sh init_aks_env_my1.sh
vim init_aks_env_my1.sh
```
* Edit the `AKSDEPLOYID` value to match your deployment above.
```bash
export AKSDEPLOYID="$PWD/CAP-AKS-2019-08-07_20h13_jmlcluster20"  # ENVVAR pointing to your config area for this cluster deployment
export REGION=westeurope                                         # The Azure region where the ServiceBroker will be deployed
export KUBECONFIG="$AKSDEPLOYID/kubeconfig"                      # Already created as part of the AKS cluster deployment
export CF_HOME="$AKSDEPLOYID/cfconfig"                           # Your Cloud Foundry config will be stored there
export PS1="\u:\w:$AKSDEPLOYID>\[$(tput sgr0)\]"
CFEP=$(awk '/Public IP:/{print "https://api." $NF ".xip.io"}' $AKSDEPLOYID/deployment.log)  # Build the CF API endpoint from the public IP in the deployment log
cf api --skip-ssl-validation $CFEP
```
* Save your file.
* Initialise your ENVVARs by running:
```bash
export KUBECONFIG=./CAP-AKS-2018-12-14_10h48_test1/kubeconfig
source init_aks_env_my1.sh
```
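* To sanity-check that the environment took effect, something like the following (not part of the original scripts, just a quick manual check) should print your deployment directory and list the AKS nodes:
```bash
echo "$AKSDEPLOYID"        # should point to your CAP-AKS-... directory
echo "$KUBECONFIG"         # should point to the kubeconfig inside it
kubectl get nodes          # should list the AKS worker nodes in Ready state
```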
* You may review/edit the generated `scf-config-values.yaml` file at `$AKSDEPLOYID/scf-config-values.yaml`.

> **The project CAPnMore contains an updated way to do this, compatible with this AKS deployment:**
> https://github.com/jmlambert78/CAPnMore
> **That version supports both AKS and any K8s cluster compatible with CAP.**

**AN UPDATE CHAPTER WILL BE INCLUDED SOON**

OPTION 1: Launch the menu-driven steps for deploying CAP on the cluster you just deployed:
```bash
./deploy_cap_on_aks_by_step.sh
```
* You will get the menu; go step by step and check that the pods are running before engaging the next step. (Automation will come soon.)
```bash
1) Quit 9) Create Azure SB 17) AZ List Mysql DBs to Disable
2) Review scfConfig 10) Deploy OSBA 18) AZ Disable SSL Mysql DBs
3) Deploy UAA 11) Pods OSBA 19) Deploy 1st Rails Appl
4) Pods UAA 12) CF API set 20) Deploy Stratos SCF Console
5) Deploy SCF 13) CF Add SB 21) Pods Stratos
6) Pods SCF 14) CF CreateOrgSpace 22) Deploy Metrics
7) Deploy CATALOG 15) CF 1st Service 23) Pods Metrics
8) Pods CATALOG 16) CF 1st Service Status
```
NOTE: If you quit and come back, the script recovers the ENVVARs required from all previous steps (useful!).

* 2 Review SCFConfig lets you edit the `scf-config-values.yaml` again.
* The Deploy steps for each element run in the right order, as there are some dependencies.
* The Pods XX steps let you watch the completion of the pod deployments.
* CF API set configures your Cloud Foundry endpoint.
* The Catalog / Azure ServiceBroker / OSBA are required for dynamic provisioning of Azure services (e.g. DBs).
* Steps 17/18 are required to modify the SSL option of the deployed DB (see the Azure CLI sketch after this list).
* 19 will deploy your first application from GitHub and run a curl to check it.
* 20 will deploy the CF dashboard (Stratos).
* 22 will deploy the metrics (monitoring), which you then connect to the Stratos GUI.
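For steps 17/18 the menu drives the change for you; a rough equivalent with plain Azure CLI commands (the resource group and server name are placeholders, and the actual resource group depends on where OSBA provisioned the database) would be:
```bash
# Step 17: list the Azure Database for MySQL servers so you can pick the one created by OSBA
az mysql server list --output table

# Step 18: disable SSL enforcement on that server so the Rails app can connect without TLS
az mysql server update --resource-group <osba-resource-group> \
                       --name <mysql-server-name> \
                       --ssl-enforcement Disabled
```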

OPTION 2: You may launch all steps unattended with the following:
```bash
./deploy_cap_on_aks_automated.sh
```
* You will get this menu:
```bash
1) Quit
2) Review scfConfig
3) Deploy APP All Steps
```
* 2 Review SCFConfig lets you edit the `scf-config-values.yaml` again.
* 3 Deploy runs all steps one after another, in the right order (unattended deployment).

ALL CASES:
When you get there, you have come a long way, and you can start to work efficiently with SCF.
* To connect the Kubernetes API & the metrics API, go to the Stratos GUI and, under Endpoints, register each one:
  * For Kubernetes:
    * Endpoint: `https://jml-cap-aks-5-rg-xxxxx-yyyyyy.hcp.eastus.azmk8s.io:443`, which you can find in `$AKSDEPLOYID/deployment.log` (see the sketch below)
    * CertAuth: provide your `kubeconfig` file (it resides in the same $AKSDEPLOYID subdirectory) at `connect` time

  * For metrics:
    * Endpoint: `https://10.240.0.5:7443`
    * Username/Password: as provided in the `scf-config-values.yaml`
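A minimal sketch for recovering the Kubernetes endpoint, assuming the "Cluster FQDN:" line printed by `deploy_cap_aks_cluster.sh` ends up in `$AKSDEPLOYID/deployment.log` (the same way the "Public IP:" line does; adjust the pattern if your log differs):
```bash
# Pull the AKS API server FQDN out of the deployment log and build the Stratos endpoint URL
K8S_EP="https://$(awk '/Cluster FQDN:/{print $NF}' "$AKSDEPLOYID/deployment.log"):443"
echo "Kubernetes endpoint for Stratos: $K8S_EP"
```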

NB: If you have issues with the OSBA, you may use the `svcat` tool to check whether the service catalog & OSBA are correctly configured on the Kubernetes side.
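For example, a few standard `svcat` queries (assuming `svcat` is installed and uses the same kubeconfig) show whether the broker registered and its classes/plans were fetched:
```bash
# The OSBA broker should show up here with a Ready status
svcat get brokers

# Service classes and plans should be populated once the broker sync succeeds
svcat get classes
svcat get plans
```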

For details see the documentation on how to [Deploy with Helm](https://www.suse.com/documentation/cloud-application-platform-1/book_cap_deployment/data/sec_cap_helm-deploy-prod.html).

42 changes: 42 additions & 0 deletions aks-cluster-config.conf
@@ -0,0 +1,42 @@
# Configure your cluster deployment
VERSION=20
# Select LoadBalancer - "azure" for having public/private IPs with AzureLB | "kube" for using e.g. <subdomain>.susecap.net DNS with KubeLB
AZ_LOAD_BALANCER=azure

# If you plan to have a susecap.net domain entry (loadbalanced=true), set your subdomain e.g. example.susecap.net
AZ_SUB_DOMAIN=jml-script$VERSION

# Set the Azure resource group name (e.g. <user>-cap-aks)
AZ_RG_NAME=jml-cap-aks-$VERSION-rg

# Set the kubernetes cluster name
AZ_AKS_NAME=jml-script-cluster-$VERSION

# Set the Kubernetes version for the cluster
AZ_AKS_KUBE_VERSION=1.12.8

# Set the cluster region (see https://docs.microsoft.com/en-us/azure/aks/container-service-quotas)
AZ_REGION=westeurope

# Set the ports needed by your CAP deployment (for LB and NSG)
# "80 443 4443 2222 2793" are mandatory, "8443" is for Stratos UI
# "$(echo 2000{0..9})" is needed to run SCF tests
CAP_PORTS="80 443 4443 2222 2793 8443 $(echo 2000{0..9})"

# Set the name of the VM pool (alphanumeric characters only)
AZ_AKS_NODE_POOL_NAME=jmlpool$VERSION

# Set the number of VMs to create
AZ_AKS_NODE_COUNT=3

# Select the Azure node flavour (see https://docs.microsoft.com/en-us/azure/cloud-services/cloud-services-sizes-specs)
AZ_AKS_NODE_VM_SIZE=Standard_DS4_v2

# Set the public SSH key name associated with your Azure account
AZ_SSH_KEY=~/.ssh/id_rsa.pub

# Set a new admin username
AZ_ADMIN_USER=scf-admin

# Set the default password for admin
AZ_ADMIN_PSW=psw.cap
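Since AZ_AKS_KUBE_VERSION pins a specific Kubernetes version, which AKS retires over time, it can help to check which versions are currently offered in your region before deploying (a standard Azure CLI query, shown here for the region used above):
```bash
# List the Kubernetes versions AKS currently supports in westeurope
az aks get-versions --location westeurope --output table
```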
16 changes: 10 additions & 6 deletions deploy_cap_aks_cluster.sh
@@ -37,6 +37,8 @@ if [ -e $conffile ]; then
export AZ_AKS_NODE_VM_SIZE
export AZ_SSH_KEY
export AZ_ADMIN_USER
export AZ_ADMIN_PSW
export AZ_AKS_KUBE_VERSION
else
echo -e "Error: Can't find config file: \"$conffile\""
exit 1
@@ -62,8 +64,10 @@ echo -e "Created resource group: $AZ_RG_NAME"
az aks create --resource-group $AZ_RG_NAME --name $AZ_AKS_NAME \
--node-count $AZ_AKS_NODE_COUNT --admin-username $AZ_ADMIN_USER \
--ssh-key-value $AZ_SSH_KEY --node-vm-size $AZ_AKS_NODE_VM_SIZE \
--kubernetes-version $AZ_AKS_KUBE_VERSION \
--node-osdisk-size=60 --nodepool-name $AZ_AKS_NODE_POOL_NAME 2>&1>> $logfile

export AZ_CLUSTER_FQDN=$(az aks list -g $AZ_RG_NAME|jq '.[].fqdn'|sed -e 's/"//g')
echo -e "Cluster FQDN: $AZ_CLUSTER_FQDN"
export AZ_MC_RG_NAME=$(az group list -o table | grep MC_"$AZ_RG_NAME"_ | awk '{print $1}')
echo -e "Created AKS cluster: $AZ_AKS_NAME in $AZ_MC_RG_NAME"

@@ -123,7 +127,7 @@ if [ "$mode" = "default" ]; then
--name probe-$i \
--protocol tcp \
--port $i 2>&1>> $logfile

az network lb rule create \
--resource-group $AZ_MC_RG_NAME \
--lb-name $AZ_AKS_NAME-lb \
@@ -159,17 +163,17 @@ kubectl create -f rbac-config.yaml 2>&1>> $logfile
helm init --service-account tiller 2>&1>> $logfile
echo -e "Initialized helm for AKS"

kubectl create -f suse-cap-psp.yaml 2>&1>> $logfile
echo -e "Applied PodSecurityPolicy: suse-cap-psp"
#kubectl create -f suse-cap-psp.yaml 2>&1>> $logfile
#echo -e "Applied PodSecurityPolicy: suse-cap-psp"

echo -e "\nKubeconfig file is stored to: \"$KUBECONFIG\"\n" | tee -a $logfile

if [ "$mode" = "default" ]; then
internal_ips=($(az network nic list --resource-group $AZ_MC_RG_NAME | jq -r '.[].ipConfigurations[].privateIpAddress'))
extip=\[\"$(echo "${internal_ips[*]}" | sed -e 's/ /", "/g')\"\]
public_ip=$(az network public-ip show --resource-group $AZ_MC_RG_NAME --name $AZ_AKS_NAME-public-ip --query ipAddress --output tsv)
domain=${public_ip}.omg.howdoi.website
cat ./.scf-config-values.template | sed -e '/^# This/d' -e 's/<domain>/'$domain'/g' -e 's/<extip>/'"$extip"'/g' -e '/^services:/d' -e '/loadbalanced/d' > $deploymentid/scf-config-values.yaml
domain=${public_ip}.xip.io
cat ./.scf-config-values.template | sed -e '/^# This/d' -e 's/<domain>/'$domain'/g' -e 's/<extip>/'"$extip"'/g' -e '/^services:/d' -e 's/<fqdn>/'"$AZ_CLUSTER_FQDN"'/g' -e 's/<password>/'"$AZ_ADMIN_PSW"'/g' -e '/loadbalanced/d' > $deploymentid/scf-config-values.yaml
echo -e " Public IP:\t\t\t\t${public_ip}\n \
Private IPs (external_ips for CAP):\t$extip\n \
Suggested DOMAIN for CAP: \t\t\"$domain\"\n\n \