From d4de11b5a6d6070ecefea6445636e4e01bc38ca3 Mon Sep 17 00:00:00 2001 From: sstringer Date: Fri, 27 May 2022 09:45:45 +0100 Subject: [PATCH 1/3] Libvirt documentation enhancements --- .gitignore | 1 + libvirt/README.md | 193 ++++++++++++++++++++++++++++++++-------------- 2 files changed, 135 insertions(+), 59 deletions(-) diff --git a/.gitignore b/.gitignore index f9f334a5b..db452b504 100644 --- a/.gitignore +++ b/.gitignore @@ -26,3 +26,4 @@ shell.nix venv **/.envrc **/.direnv +.vscode \ No newline at end of file diff --git a/libvirt/README.md b/libvirt/README.md index a3b220462..77aaa7dc2 100644 --- a/libvirt/README.md +++ b/libvirt/README.md @@ -3,100 +3,175 @@ * [Terraform cluster deployment with Libvirt](#terraform-cluster-deployment-with-libvirt) * [Requirements](#requirements) * [Quickstart](#quickstart) - * [Bastion](#bastion) -* [Highlevel description](#highlevel-description) + * [Bastion](#bastion) +* [High-level description](#high-level-description) * [Customization](#customization) - * [QA deployment](#qa-deployment) - * [Pillar files configuration](#pillar-files-configuration) - * [Use already existing network resources](#use-already-existing-network-resources) - * [Autogenerated network addresses](#autogenerated-network-addresses) + * [QA deployment](#qa-deployment) + * [Pillar files configuration](#pillar-files-configuration) + * [Use already existing network resources](#use-already-existing-network-resources) + * [Autogenerated network addresses](#autogenerated-network-addresses) * [Advanced Customization](#advanced-customization) - * [Terraform Parallelism](#terraform-parallelism) + * [Terraform Parallelism](#terraform-parallelism) * [Troubleshooting](#troubleshooting) +This sub directory contains the cloud specific part for usage of this repository with libvirt. Looking for another provider? See [Getting started](../README.md#getting-started). -This sub directory contains the cloud specific part for usage of this -repository with libvirt. 
Looking for another provider? See -[Getting started](../README.md#getting-started). +## Requirements + You will need to have a working libvirt/kvm setup for using the libvirt-provider. (refer to upstream doc of [libvirt providerđź”—](https://github.com/dmacvicar/terraform-provider-libvirt)). -# Requirements + You need the xslt processor `xsltproc` installed on the system. With it terraform is able to process xsl files. - You will need to have a working libvirt/kvm setup for using the libvirt-provider. (refer to upstream doc of [libvirt providerđź”—](https://github.com/dmacvicar/terraform-provider-libvirt)). +## Quickstart - You need the xslt processor `xsltproc` installed on the system. With it terraform is able to process xsl files. + This is a short quickstart guide. -# Quickstart + For detailed information and deployment options have a look at `terraform.tfvars.example`. -This is a very short quickstart guide. +1) **Network configuration** -For detailed information and deployment options have a look at `terraform.tfvars.example`. + The deployment requires two separate networks, an internal, isolated network and a second network for external access. Although the code can create both networks, for the most predictable results, you should create the networks in advance. + + The isolated network can use KVM's 'default' network, which is usually the easiest option. The IPs addresses in the isolated network will be set by the code and will be static. + + The network for external access is known in the code as bridge_device. The IP addresses on this network are expected to be configured by DHCP. 
-1) **Rename terraform.tfvars:**
+   The following lines show an example of how the terraform.tfvars file can be configured for using the default network
+   and a bridge network named br0:
-   ```
-   mv terraform.tfvars.example terraform.tfvars
-   ```
+   ```HCL
+   # Use already existing network
+   network_name = "default"
-   Now, the created file must be configured to define the deployment.
+   # Use bridge device on hypervisor
+   bridge_device = "br0"
+   ```
-   **Note:** Find some help in for IP addresses configuration below in [Customization](#customization).
+   Ensure that network_name and bridge_device are not using the same underlying bridge, as this will cause problems for clustered systems.
-2) **Generate private and public keys for the cluster nodes without specifying the passphrase:**
+2) **SBD**
-   Alternatively, you can set the `pre_deployment` variable to automatically create the cluster ssh keys.
+   For libvirt based configurations, the code uses SBD as the STONITH method for clustering. The SBD disk is created in the storage pool and therefore iSCSI is not required. When configuring the terraform.tfvars file, ensure that iSCSI is not enabled.
-   ```
-   mkdir -p ../salt/sshkeys
-   ssh-keygen -f ../salt/sshkeys/cluster.id_rsa -q -P ""
-   ```
+3) **Rename terraform.tfvars:**
-   The key files need to have same name as defined in [terraform.tfvars](./terraform.tfvars.example).
+   ```bash
+   mv terraform.tfvars.example terraform.tfvars
+   ```
-3) **[Adapt saltstack pillars manually](../pillar_examples/)** or set the `pre_deployment` variable to automatically copy the example pillar files.
+   Now, the created file must be configured to define the deployment.
-4) **Configure Terraform Access to Libvirt**
+   **Note:** Find some help for IP address configuration below in [Customization](#customization).
-   Set `qemu_uri = "qemu:///system"` in `terraform.tfvars` if you want to deploy on the local system
-   or according to [Libvirt Providerđź”—](https://registry.terraform.io/providers/dmacvicar/libvirt/latest/docs#the-connection-uri).
+4) **Generate private and public keys for the cluster nodes without specifying the passphrase:**
-   Also make sure the images references in `terraform.tfvars` are existing on your system.
+   Alternatively, you can set the `pre_deployment` variable to automatically create the cluster ssh keys.
-5) **Prepare a NFS share with the installation sources**
+   ```bash
+   mkdir -p ../salt/sshkeys
+   ssh-keygen -f ../salt/sshkeys/cluster.id_rsa -q -P ""
+   ```
-   Add the NFS paths to `terraform.tfvars`. The NFS server is not yet part of the deployment and must already exist.
+   The key files need to have the same name as defined in [terraform.tfvars](./terraform.tfvars.example).
-   - **Note:** Find some help in [SAP software documentation](../doc/sap_software.md)
+5) **[Adapt saltstack pillars manually](../pillar_examples/)** or set the `pre_deployment` variable to automatically copy the example pillar files.
-6) **Deploy**
+6) **Configure Terraform Access to Libvirt**
-   The deployment can now be started with:
+   Set `qemu_uri = "qemu:///system"` in `terraform.tfvars` if you want to deploy on the local system
+   or according to [Libvirt Providerđź”—](https://registry.terraform.io/providers/dmacvicar/libvirt/latest/docs#the-connection-uri).
-   ```
-   terraform init
-   terraform workspace new myexecution
-   # If you don't create a new workspace , the string `default` will be used as workspace name.
-   # This can led to conflicts to unique names in a shared server.
-   terraform workspace select myexecution
-   terraform plan
-   terraform apply
-   ```
+   Also make sure the image references in `terraform.tfvars` exist on your system and are present in the pool you have specified.
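As a sketch, the connection setting in `terraform.tfvars` might look like this (the remote form is only an illustration of libvirt's connection-URI syntax; check `terraform.tfvars.example` for the variables this repository actually uses):

```HCL
# Deploy on the local system via the libvirt daemon's system connection
qemu_uri = "qemu:///system"

# For a remote hypervisor, an SSH connection URI can be used instead, e.g.:
# qemu_uri = "qemu+ssh://user@hypervisor/system"
```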
-   To get rid of the deployment, destroy the created infrastructure with:
+7) **Prepare an NFS share with the installation sources**
-   ```
-   terraform destroy
-   ```
+   Add the NFS paths to `terraform.tfvars`. The NFS server is not yet part of the deployment and must already exist.
+
+   **Note:** Find some help in [SAP software documentation](../doc/sap_software.md)
+
+8) **Image preparation**
+
+   Image files in the qcow2 format are required for this deployment. Currently, SUSE does not supply qcow2 images for SLES for SAP Applications and therefore some preparation is required. Download the latest openstack qcow2 image from [the SUSE download page](https://www.suse.com/download/sles/).
+
+   A series of commands is necessary to resize and sanitise the image. Run the following commands in the same directory as the downloaded image, ensuring you replace `<image_name>` with the name of your image:
+
+   ```bash
+   echo 'net.ipv6.conf.all.disable_ipv6 = 1' > 99-disable-ipv6.conf
+   qemu-img resize <image_name> 20G
+   virt-sysprep --operations abrt-data,backup-files,bash-history,blkid-tab,crash-data,cron-spool,customize,dhcp-client-state,dhcp-server-state,dovecot-data,logfiles,machine-id,mail-spool,net-hostname,net-hwaddr,pacct-log,package-manager-cache,pam-data,passwd-backups,puppet-data-log,rh-subscription-manager,rhn-systemid,rpm-db,samba-db-log,script,smolt-uuid,ssh-hostkeys,ssh-userdir,sssd-db-log,tmp-files,udev-persistent-net,utmp,yum-uuid --root-password password:linux --copy-in 99-disable-ipv6.conf:/etc/sysctl.d -a <image_name>
+   rm 99-disable-ipv6.conf
+   ```
+
+   Copy the adapted image to the libvirt pool that you intend to use for the project.
+
+   Next, you must adapt the file cloud-config.tpl so that the installation will register with the required repositories. For this you'll need your SUSE for SAP registration code and the associated email address.
Edit the file so that it matches this, ensuring you replace `<email>` and `<registration code>` with your details:
+
+   ```yaml
+   #cloud-config
+
+   cloud_config_modules:
+    - runcmd
+   cloud_final_modules:
+    - scripts-user
+   runcmd:
+    - |
+      # add any command here
+      SUSEConnect -e <email> -r <registration code>
+      /usr/sbin/SUSEConnect --de-register
+      /usr/sbin/SUSEConnect --cleanup
+      rpm -e --nodeps sles-release
+      rpm -e --nodeps sles-release-DVD
+      rpm -e --nodeps sles-release-POOL
+      SUSEConnect -p SLES_SAP/15.3/x86_64 -e <email> -r <registration code>
+      SUSEConnect -p sle-module-basesystem/15.3/x86_64
+      SUSEConnect -p sle-module-desktop-applications/15.3/x86_64
+      SUSEConnect -p sle-module-server-applications/15.3/x86_64
+      SUSEConnect -p sle-ha/15.3/x86_64 -e <email> -r <registration code>
+      SUSEConnect -p sle-module-sap-applications/15.3/x86_64
+      # make sure docs are installed (needed for prometheus-hanadb_exporter)
+      sed -i 's#rpm.install.excludedocs.*#rpm.install.excludedocs = no#g' /etc/zypp/zypp.conf
+   ```
+
+9) **Deploy**
+
+   The deployment can now be started with:
+
+   ```bash
+   terraform init
+   terraform workspace new myexecution
+   # If you don't create a new workspace, the string `default` will be used as the workspace name.
+   # This can lead to conflicts with unique names on a shared server.
+   terraform workspace select myexecution
+   terraform plan
+   terraform apply
+   ```
+
+   As cloud-init can take a little time to return, it is not unusual to see an error shortly after `terraform apply`. You may see an error like this:
+
+   ```bash
+   Error: Invalid index
+   on modules/hana_node/salt_provisioner.tf line 66, in module "hana_provision":
+   66: public_ips = libvirt_domain.hana_domain.*.network_interface.0.addresses.0
+   The given key does not identify an element in this collection value: the collection has no elements.
+   ```
+
+   If this error occurs, simply re-run `terraform apply` after ~30 seconds.
+
+   To get rid of the deployment, destroy the created infrastructure with:
+
+   ```bash
+   terraform destroy
+   ```
 
 ## Bastion
 
 A bastion host makes no sense in this setup.
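The quickstart's advice to simply re-run `terraform apply` after the cloud-init race can also be scripted. A minimal sketch — the `retry` helper below is illustrative and not part of this repository:

```shell
#!/usr/bin/env bash
# retry <attempts> <delay_seconds> <command...>
# Re-runs a command until it succeeds, waiting between attempts;
# useful for the "Invalid index" error caused by cloud-init being slow.
retry() {
  attempts=$1
  delay=$2
  shift 2
  n=1
  while [ "$n" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    echo "attempt $n/$attempts failed; retrying in ${delay}s" >&2
    n=$((n + 1))
    sleep "$delay"
  done
  return 1
}

# Illustrative usage:
# retry 3 30 terraform apply -auto-approve
```

The helper returns as soon as the wrapped command succeeds, so a transient cloud-init delay only costs one extra attempt.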
-# Highlevel description +## High-level description This Terraform configuration deploys SAP HANA in a High-Availability Cluster on SUSE Linux Enterprise Server for SAP Applications in **Libvirt**. -![Highlevel description](../doc/highlevel_description_openstack.png) +![High-level description](../doc/highlevel_description_openstack.png) The infrastructure deployed includes: @@ -109,9 +184,9 @@ By default, this configuration will create 3 instances in Libvirt: one for suppo Once the infrastructure is created by Terraform, the servers are provisioned with Salt. -# Customization +## Customization -In order to deploy the environment, different configurations are available through the terraform variables. These variables can be configured using a `terraform.tfvars` file. An example is available in [terraform.tfvars.example](./terraform.tvars.example). To find all the available variables check the [variables.tf](./variables.tf) file. +In order to deploy the environment, different configurations are available through the terraform variables. These variables can be configured using a `terraform.tfvars` file. An example is available in [terraform.tfvars.example](./terraform.tfvars.example). To find all the available variables check the [variables.tf](./variables.tf) file. ## QA deployment @@ -148,9 +223,9 @@ Example based on `192.168.135.0/24` address range: | S/4HANA or NetWeaver IPs | `netweaver_ips` | `192.168.135.30`, `192.168.135.31`, `192.168.135.32`, `192.168.135.33` | Addresses for the ASCS, ERS, PAS and AAS. The sequence will continue if there are more AAS machines | | S/4HANA or NetWeaver virtual IPs | `netweaver_virtual_ips` | `192.168.135.34`, `192.168.135.35`, `192.168.135.36`, `192.168.135.37` | The first virtual address will be the next in the sequence of the regular S/4HANA or NetWeaver addresses | -# Advanced Customization +## Advanced Customization -## Terraform Parallelism +### Terraform Parallelism When deploying many scale-out nodes, e.g. 
8 or 10, you should must pass the [`-nparallelism=n`đź”—](https://www.terraform.io/docs/cli/commands/apply.html#parallelism-n) parameter to `terraform apply` operations. @@ -158,7 +233,7 @@ It "limit[s] the number of concurrent operation as Terraform walks the graph." The default value of `10` is not sufficient because not all HANA cluster nodes will get provisioned at the same. A value of e.g. `30` should not hurt for most use-cases. -# Troubleshooting +## Troubleshooting In case you have some issue, take a look at this [troubleshooting guide](../doc/troubleshooting.md). From e84503f10f7fe27c9303d70e12a88a9d0649c61e Mon Sep 17 00:00:00 2001 From: sstringer Date: Mon, 30 May 2022 11:16:32 +0100 Subject: [PATCH 2/3] Edited the location of changes --- libvirt/README.md | 225 ++++++++++++++++++++++++---------------------- 1 file changed, 120 insertions(+), 105 deletions(-) diff --git a/libvirt/README.md b/libvirt/README.md index 77aaa7dc2..cf1df6be0 100644 --- a/libvirt/README.md +++ b/libvirt/README.md @@ -3,93 +3,58 @@ * [Terraform cluster deployment with Libvirt](#terraform-cluster-deployment-with-libvirt) * [Requirements](#requirements) * [Quickstart](#quickstart) - * [Bastion](#bastion) -* [High-level description](#high-level-description) + * [Bastion](#bastion) +* [Highlevel description](#highlevel-description) * [Customization](#customization) - * [QA deployment](#qa-deployment) - * [Pillar files configuration](#pillar-files-configuration) - * [Use already existing network resources](#use-already-existing-network-resources) - * [Autogenerated network addresses](#autogenerated-network-addresses) + * [QA deployment](#qa-deployment) + * [Pillar files configuration](#pillar-files-configuration) + * [Use already existing network resources](#use-already-existing-network-resources) + * [Autogenerated network addresses](#autogenerated-network-addresses) * [Advanced Customization](#advanced-customization) - * [Terraform Parallelism](#terraform-parallelism) + * 
[Terraform Parallelism](#terraform-parallelism) * [Troubleshooting](#troubleshooting) -This sub directory contains the cloud specific part for usage of this repository with libvirt. Looking for another provider? See [Getting started](../README.md#getting-started). -## Requirements +This sub directory contains the cloud specific part for usage of this +repository with libvirt. Looking for another provider? See +[Getting started](../README.md#getting-started). - You will need to have a working libvirt/kvm setup for using the libvirt-provider. (refer to upstream doc of [libvirt providerđź”—](https://github.com/dmacvicar/terraform-provider-libvirt)). - You need the xslt processor `xsltproc` installed on the system. With it terraform is able to process xsl files. +# Requirements -## Quickstart +1) **General KVM Requirements** - This is a short quickstart guide. + You will need to have a working libvirt/kvm setup for using the libvirt-provider. (refer to upstream doc of [libvirt providerđź”—](https://github.com/dmacvicar/terraform-provider-libvirt)). - For detailed information and deployment options have a look at `terraform.tfvars.example`. + You need the xslt processor `xsltproc` installed on the system. With it terraform is able to process xsl files. -1) **Network configuration** +2) **Network Requirements** - The deployment requires two separate networks, an internal, isolated network and a second network for external access. Although the code can create both networks, for the most predictable results, you should create the networks in advance. + The deployment requires two separate networks, an internal, isolated network and a second network for external access. Although the code can create both networks, for the most predictable results, you should create the networks in advance. - The isolated network can use KVM's 'default' network, which is usually the easiest option. The IPs addresses in the isolated network will be set by the code and will be static. 
- - The network for external access is known in the code as bridge_device. The IP addresses on this network are expected to be configured by DHCP. - - The following lines show an example of how the terraform.tfvars file can be configured for using the default network - and a bridge network named br0: - - ```HCL - # Use already existing network - network_name = "default" - - # Use bridge device on hypervisor - bridge_device = "br0" - ``` + + The isolated network can use KVM's 'default' network, which is usually the easiest option. The IPs addresses in the isolated network will be set by the code and will be static. + + The network for external access is known in the code as bridge_device. The IP addresses on this network are expected to be configured by DHCP. + + The following lines show an example of how the terraform.tfvars file can be configured for using the default network + and a bridge network named br0: + + ```HCL + # Use already existing network + network_name = "default" + + # Use bridge device on hypervisor + bridge_device = "br0" + ``` Ensure that network_name and bridge_device are not using the same underlying bridge, this will cause problems for clustered systems. -2) **SBD** - - For libvirt based configurations, the code uses SBD as the STONITH method for clustering. The SBD disk is created in the storage pool and therefore iSCSI is not required. When configuring the terraform.tfvars file, ensure that iSCSI is not enabled. - -3) **Rename terraform.tfvars:** - - ```bash - mv terraform.tfvars.example terraform.tfvars - ``` - - Now, the created file must be configured to define the deployment. - - **Note:** Find some help in for IP addresses configuration below in [Customization](#customization). - -4) **Generate private and public keys for the cluster nodes without specifying the passphrase:** - - Alternatively, you can set the `pre_deployment` variable to automatically create the cluster ssh keys. 
- - ```bash - mkdir -p ../salt/sshkeys - ssh-keygen -f ../salt/sshkeys/cluster.id_rsa -q -P "" - ``` - - The key files need to have same name as defined in [terraform.tfvars](./terraform.tfvars.example). - -5) **[Adapt saltstack pillars manually](../pillar_examples/)** or set the `pre_deployment` variable to automatically copy the example pillar files. +3) **SBD** -6) **Configure Terraform Access to Libvirt** + For libvirt based configurations, the code uses SBD as the STONITH method for clustering. The SBD disk is created in the storage pool and therefore iSCSI is not required. When configuring the terraform.tfvars file, ensure that iSCSI is not enabled. - Set `qemu_uri = "qemu:///system"` in `terraform.tfvars` if you want to deploy on the local system - or according to [Libvirt Providerđź”—](https://registry.terraform.io/providers/dmacvicar/libvirt/latest/docs#the-connection-uri). - - Also make sure the image references in `terraform.tfvars` exist on your system and are present in the pool you have specified. - -7) **Prepare a NFS share with the installation sources** - - Add the NFS paths to `terraform.tfvars`. The NFS server is not yet part of the deployment and must already exist. - - **Note:** Find some help in [SAP software documentation](../doc/sap_software.md) - -8) **Image preparation** +4) **Image Preparation** Image files in the qcow2 format are required for this deployment. Currently, SUSE do not supply qcow2 images for SLES for SAP Application and therefore some preparation is required. Download the latest openstack qcow2 image from [the SUSE download page](https://www.suse.com/download/sles/). 
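The adapted image must end up in the pool the deployment reads from. A sketch of the corresponding `terraform.tfvars` entry — the pool name is illustrative, and `storage_pool` is the variable referenced later in this patch series:

```HCL
# Pool that will hold the prepared qcow2 image (and the SBD disk);
# "default" is only an example — use the pool you copied the image into
storage_pool = "default"
```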
@@ -111,9 +76,9 @@ This sub directory contains the cloud specific part for usage of this repository cloud_config_modules: - runcmd - cloud_final_modules: + cloud_final_modules: - scripts-user - runcmd: + runcmd: - | # add any command here SUSEConnect -e -r @@ -132,46 +97,77 @@ This sub directory contains the cloud specific part for usage of this repository sed -i 's#rpm.install.excludedocs.*#rpm.install.excludedocs = no#g' /etc/zypp/zypp.conf ``` -9) **Deploy** +# Quickstart - The deployment can now be started with: +This is a very short quickstart guide. - ```bash - terraform init - terraform workspace new myexecution - # If you don't create a new workspace , the string `default` will be used as workspace name. - # This can led to conflicts to unique names in a shared server. - terraform workspace select myexecution - terraform plan - terraform apply - ``` +For detailed information and deployment options have a look at `terraform.tfvars.example`. - As cloud-init can take a little time to return, it is not unusual to see an error shortly after 'terraform deploy'. You may see an error like this: +1) **Rename terraform.tfvars:** - ```bash - Error: Invalid index - on modules/hana_node/salt_provisioner.tf line 66, in module "hana_provision": - 66: public_ips = libvirt_domain.hana_domain.*.network_interface.0.addresses.0 - The given key does not identify an element in this collection value: the collection has no elements. - ``` - - If this error occurs, simple re-run `terraform apply` after ~30 seconds. - - To get rid of the deployment, destroy the created infrastructure with: - - ```bash - terraform destroy - ``` + ``` + mv terraform.tfvars.example terraform.tfvars + ``` + + Now, the created file must be configured to define the deployment. + + **Note:** Find some help in for IP addresses configuration below in [Customization](#customization). 
+
+2) **Generate private and public keys for the cluster nodes without specifying the passphrase:**
+
+   Alternatively, you can set the `pre_deployment` variable to automatically create the cluster ssh keys.
+
+   ```
+   mkdir -p ../salt/sshkeys
+   ssh-keygen -f ../salt/sshkeys/cluster.id_rsa -q -P ""
+   ```
+
+   The key files need to have the same name as defined in [terraform.tfvars](./terraform.tfvars.example).
+
+3) **[Adapt saltstack pillars manually](../pillar_examples/)** or set the `pre_deployment` variable to automatically copy the example pillar files.
+
+4) **Configure Terraform Access to Libvirt**
+
+   Set `qemu_uri = "qemu:///system"` in `terraform.tfvars` if you want to deploy on the local system
+   or according to [Libvirt Providerđź”—](https://registry.terraform.io/providers/dmacvicar/libvirt/latest/docs#the-connection-uri).
+
+   Also make sure the image references in `terraform.tfvars` exist on your system.
+
+5) **Prepare an NFS share with the installation sources**
+
+   Add the NFS paths to `terraform.tfvars`. The NFS server is not yet part of the deployment and must already exist.
+
+   **Note:** Find some help in [SAP software documentation](../doc/sap_software.md)
+
+6) **Deploy**
+
+   The deployment can now be started with:
+
+   ```
+   terraform init
+   terraform workspace new myexecution
+   # If you don't create a new workspace, the string `default` will be used as the workspace name.
+   # This can lead to conflicts with unique names on a shared server.
+   terraform workspace select myexecution
+   terraform plan
+   terraform apply
+   ```
+
+   To get rid of the deployment, destroy the created infrastructure with:
+
+   ```
+   terraform destroy
+   ```
 
 ## Bastion
 
 A bastion host makes no sense in this setup.
 
-## High-level description
+# Highlevel description
 
 This Terraform configuration deploys SAP HANA in a High-Availability Cluster on SUSE Linux Enterprise Server for SAP Applications in **Libvirt**.
-![High-level description](../doc/highlevel_description_openstack.png)
+![Highlevel description](../doc/highlevel_description_openstack.png)
 
 The infrastructure deployed includes:
 
@@ -184,9 +180,9 @@ By default, this configuration will create 3 instances in Libvirt: one for suppo
 
 Once the infrastructure is created by Terraform, the servers are provisioned with Salt.
 
-## Customization
+# Customization
 
-In order to deploy the environment, different configurations are available through the terraform variables. These variables can be configured using a `terraform.tfvars` file. An example is available in [terraform.tfvars.example](./terraform.tfvars.example). To find all the available variables check the [variables.tf](./variables.tf) file.
+In order to deploy the environment, different configurations are available through the terraform variables. These variables can be configured using a `terraform.tfvars` file. An example is available in [terraform.tfvars.example](./terraform.tfvars.example). To find all the available variables check the [variables.tf](./variables.tf) file.
 
 ## QA deployment
 
@@ -223,9 +219,9 @@ Example based on `192.168.135.0/24` address range:
 | S/4HANA or NetWeaver IPs | `netweaver_ips` | `192.168.135.30`, `192.168.135.31`, `192.168.135.32`, `192.168.135.33` | Addresses for the ASCS, ERS, PAS and AAS. The sequence will continue if there are more AAS machines |
 | S/4HANA or NetWeaver virtual IPs | `netweaver_virtual_ips` | `192.168.135.34`, `192.168.135.35`, `192.168.135.36`, `192.168.135.37` | The first virtual address will be the next in the sequence of the regular S/4HANA or NetWeaver addresses |
 
-## Advanced Customization
+# Advanced Customization
 
-### Terraform Parallelism
+## Terraform Parallelism
 
 When deploying many scale-out nodes, e.g. 8 or 10, you must pass the [`-nparallelism=n`đź”—](https://www.terraform.io/docs/cli/commands/apply.html#parallelism-n) parameter to `terraform apply` operations.
@@ -233,10 +229,29 @@ It "limit[s] the number of concurrent operation as Terraform walks the graph."
 The default value of `10` is not sufficient because not all HANA cluster nodes will get provisioned at the same time. A value of e.g. `30` should not hurt for most use-cases.
 
-## Troubleshooting
+# Troubleshooting
 
 In case you have some issue, take a look at this [troubleshooting guide](../doc/troubleshooting.md).
 
+### Terraform fails shortly after deploy
+
+The images use cloud-init, which can take a little time to return, so it is not unusual to see an error shortly after running `terraform apply`. You may see an error like this:
+
+  ```bash
+  Error: Invalid index
+  on modules/hana_node/salt_provisioner.tf line 66, in module "hana_provision":
+  66: public_ips = libvirt_domain.hana_domain.*.network_interface.0.addresses.0
+  The given key does not identify an element in this collection value: the collection has no elements.
+  ```
+
+  If this error occurs, simply re-run `terraform apply` after ~30 seconds.
+
+  To get rid of the deployment, destroy the created infrastructure with:
+
+  ```bash
+  terraform destroy
+  ```
+
 ### Resources have not been destroyed
 
 Sometimes it happens that created resources are left after running
@@ -274,4 +289,4 @@ with elevated privileges: `sudo ls -Faihl /var/lib/libvirt/images/`
 
 If some package installation fails during the salt provisioning, the most
 likely cause is that a repository is missing.
-Add the new repository with the needed package and try again.
+Add the new repository with the needed package and try again.
\ No newline at end of file
From dd9170d98362abeda692d5668c2051a72c8be750 Mon Sep 17 00:00:00 2001
From: Eike Waldt
Date: Thu, 9 Jun 2022 13:21:18 +0200
Subject: [PATCH 3/3] doc: libvirt - correct network and image requirements
---
 libvirt/README.md | 33 ++++++++++++---------------------
 1 file changed, 12 insertions(+), 21 deletions(-)

diff --git a/libvirt/README.md b/libvirt/README.md
index cf1df6be0..2119ba6d1 100644
--- a/libvirt/README.md
+++ b/libvirt/README.md
@@ -30,26 +30,15 @@
 2) **Network Requirements**
 
-   The deployment requires two separate networks, an internal, isolated network and a second network for external access. Although the code can create both networks, for the most predictable results, you should create the networks in advance.
+   The deployment requires two separate networks: one for bootstrapping the machines via DHCP and a second dedicated/isolated network for the deployment itself.
+   The bootstrap network can use libvirt's 'default' network, which is usually the easiest option.
+   A `sudo virsh net-dumpxml default | grep "bridge name"` will show you the bridge you can set as e.g. `bridge_device = virbr0` in `terraform.tfvars`.
+
+   The dedicated/isolated network can either already exist (set `network_name = "mynet"`) or be created based on the `iprange = ...` parameter.
+   Be sure to match a potentially existing network with the `iprange = ...` parameter.
+   The IP addresses in this network will be set to static DHCP entries in libvirt's network config.
 
-   The isolated network can use KVM's 'default' network, which is usually the easiest option. The IPs addresses in the isolated network will be set by the code and will be static.
-
-   The network for external access is known in the code as bridge_device. The IP addresses on this network are expected to be configured by DHCP.
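Put together, the network-related entries in `terraform.tfvars` could look like this — the names and the address range are illustrative (the range matches the Customization example below):

```HCL
# Bridge of libvirt's 'default' network, as reported by:
#   sudo virsh net-dumpxml default | grep "bridge name"
bridge_device = "virbr0"

# Reuse an existing dedicated/isolated network...
network_name = "mynet"

# ...and keep this range in sync with it (also used if the
# network is created by the deployment instead)
iprange = "192.168.135.0/24"
```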
-
-   The following lines show an example of how the terraform.tfvars file can be configured for using the default network
-   and a bridge network named br0:
-
-   ```HCL
-   # Use already existing network
-   network_name = "default"
-
-   # Use bridge device on hypervisor
-   bridge_device = "br0"
-   ```
-
-   Ensure that network_name and bridge_device are not using the same underlying bridge, this will cause problems for clustered systems.
-
 3) **SBD**
 
    For libvirt based configurations, the code uses SBD as the STONITH method for clustering. The SBD disk is created in the storage pool and therefore iSCSI is not required. When configuring the terraform.tfvars file, ensure that iSCSI is not enabled.
 
@@ -67,9 +56,11 @@ repository with libvirt. Looking for another provider? See
    rm 99-disable-ipv6.conf
    ```
 
-   Copy the adapted image to the libvirt pool that you intend to use for the project.
+   Copy the adapted image to the libvirt pool that you intend to use for the project and that you referenced via `storage_pool = ...`.
 
-   Next, you must adapt the file cloud-config.tpl so that the installation will register will the required repositories. For this you'll need your SUSE for SAP registration code and the associated email address. Edit the file so that it matches this, ensuring you replace and with your details:
+   To register the SLES image as a SLES4SAP image you can use this little hack, which is rolled out via cloud-init. **You do not have to do this if your image is already a SLES4SAP**.
+
+   Adapt the file cloud-config.tpl so that the installation will register with the required repositories for SLES4SAP. For this you'll need your SUSE for SAP registration code and the associated email address.
Edit the file so that it matches this, ensuring you replace `<email>` and `<registration code>` with your details:
 
    ```yaml
    #cloud-config
 
@@ -289,4 +280,4 @@ with elevated privileges: `sudo ls -Faihl /var/lib/libvirt/images/`
 
 If some package installation fails during the salt provisioning, the most
 likely cause is that a repository is missing.
-Add the new repository with the needed package and try again.
\ No newline at end of file
+Add the new repository with the needed package and try again.