This repository has been archived by the owner on Nov 15, 2024. It is now read-only.

How to build and run a Secure VM using the Ultravisor on an OpenPOWER machine

Ram Pai edited this page Dec 27, 2021 · 60 revisions

Overview

At a high level, the following steps must be performed in order to run Ultravisor-secured guests on an OpenPOWER machine:

  1. Build a PNOR image with Ultravisor-enabled firmware.
  2. Install the PNOR image.
  3. Install an OS on the host.
  4. Build and install a Nuvoton TPM enabled kernel.
  5. Build and install QEMU with support for Secure VMs.
  6. Install and configure a VM.
  7. Configure secure memory to enable PEF/Ultravisor.
  8. Build the svm-tools and svm-password-agent rpms.
  9. Convert the VM to a secure VM.
  10. Validate the confidentiality of the secure VM.

Prerequisites: a Protected Execution Framework (PEF) capable OpenPOWER platform with a Nuvoton TPM.
The Witherspoon and Mihawk platforms from IBM have this capability.

We expect the list to grow as more OpenPower vendors enable this technology on their platforms.

[Please contact Ram Pai ([email protected] / [email protected]) or Guerney Hunt ([email protected]) for access to OpenPOWER systems for experimentation.]

Step by step description of each task follows.

Build the PNOR

Video showing the steps to build the Ultravisor enabled PNOR.

NOTE: The steps below are verified to work on Ubuntu 18.04, Ubuntu 20.04 and on RHEL8.3.

These steps do not work on Fedora32 and above, because some packages fail to compile; the compiler seems to be enforcing stricter rules.

1. git clone https://github.com/rampai/op-build.git

2. cd op-build 


3. git submodule init && git submodule update 

#install all the dependencies to build the pnor.
4. bash dependency_install.sh 

#build the pnor for mihawk or witherspoon.
5. ./op-build mihawk_ultravisor_defconfig  && ./op-build
                OR
   ./op-build witherspoon_ultravisor_defconfig && ./op-build

   This takes about two hours to complete. The PNOR is generated in the file
   mihawk.pnor.squashfs.tar or witherspoon.pnor.squashfs.tar (depending on your config file),
   in the directory output/images.

Install the PNOR image

Video showing steps to Flash Ultravisor enabled PNOR on POWER9 Mihawk

NOTE: steps (1) and (2) must be handled by an expert. They can be skipped if the PNOR image is signed by your platform vendor.

Step 1) Shutdown the machine, change the secure-boot jumper setting:

Step 2) Turn off the firmware's field mode:

  • ssh into the BMC and run the following commands:

     $ ssh root@<BMC-address>
     root@witherspoon:~# fw_printenv fieldmode 
     fieldmode=true
     root@witherspoon:~# fw_setenv fieldmode false
     root@witherspoon:~# fw_printenv fieldmode 
     fieldmode=false
     root@witherspoon:~# systemctl unmask usr-local.mount  
     Removed /etc/systemd/system/usr-local.mount.
     root@witherspoon:~# reboot
     root@witherspoon:~# Connection to <BMC-address> closed by remote host.
     Connection to <BMC-address> closed.
    
  • Wait for the BMC to be back up.

Step 3) Flash the new PNOR that contains the Ultravisor firmware:

  • Login into the BMC web interface

  • Click on 'Server Configuration'

  • Click on 'Firmware'

  • Scroll down to 'Server images'

  • Under there, click on the button 'Choose file'

  • Choose the firmware file on your local disk (the one that was created with the PNOR build).

  • Click 'Upload firmware'

  • After about 2 min, the firmware files will be available as one of the firmware options under 'Server Image':

  • Click 'Activate'

  • Click 'ACTIVATE FIRMWARE FILE WITHOUT REBOOTING SERVER'

  • Click 'Continue'

  • The image state will turn into 'Activating' first and after about 3-4 minutes, will turn into 'Active'

  • Click on 'Server power' near the top right corner.

  • Click on 'Orderly - OS shuts down - then server reboots'

  • Click on the 'Reboot' button.

  • It will ask you to confirm. Click on the 'Reboot' button.

Step 4) Boot up the server:

  • Connect to the console of the machine to confirm that it boots up correctly:

     $ ssh root@<BMC-address> -p 2200
    

Install OS on the Host

Video showing Fedora33 Install on the POWER9 Mihawk

The steps below help install Fedora 33.

(Replace the values appropriately for your environment.)

If petitboot networking is not up, configure it either by enabling DHCP or by setting a static IP as follows:

(a) select 'System Configuration'

(b) select 'Static IP configuration'

(c) select the network interface name that you want to configure (eg: enP52p1s0f2)

(d) In IP/mask, fill in the IP address as w.x.y.z/subnet-mask

(e) In Gateway, fill in gateway_ip

(f) in DNS Server(s): Fill in DNS_server_ip

(g) Select OK.

  • Select OK.

  • Follow the on screen instructions to install the OS.

Build and install a Nuvoton TPM enabled kernel

Video showing kernel build with enablement for Nuvoton TPM driver

  • git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git

  • cd linux-2.6

  • git checkout v5.10-rc6 -b 5.10-rc6
    (any kernel version above 5.9 is fine).

  • Get a good config file for your target machine. It is generally found in a /boot/config* file on the target machine. Copy that file to .config in the local directory: cp /boot/config-5.9.12-200.fc33.ppc64le .config

  • enable NUVOTON TPM driver in the .config file, by setting CONFIG_TCG_TIS_I2C_NUVOTON=m

  • make oldconfig

  • make && make modules_install && make install

  • reboot the machine and boot up on the newly built kernel.
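The .config edit in the step above can be scripted. A minimal sketch, demonstrated on a scratch .config so it is safe to run anywhere; in a real build you would run the sed inside the kernel source tree, where .config already exists:

```shell
# Demonstrated on a scratch .config; in a real kernel tree the file already
# exists (copied from /boot) and the sed edits it in place.
cfg=.config
[ -f "$cfg" ] || printf '# CONFIG_TCG_TIS_I2C_NUVOTON is not set\n' > "$cfg"
# Flip the Nuvoton TPM driver from disabled to built-as-module ("=m").
sed -i 's/^# CONFIG_TCG_TIS_I2C_NUVOTON is not set$/CONFIG_TCG_TIS_I2C_NUVOTON=m/' "$cfg"
grep '^CONFIG_TCG_TIS_I2C_NUVOTON' "$cfg"
```

After this, `make oldconfig` will keep the setting rather than prompting for it.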

Build QEMU

Video showing QEMU build with Secure VM capability

Build a QEMU that has the Secure VM capability. This step will not be needed once upstream QEMU has this capability integrated.

  • Run the following commands on the host OS:

         $ git clone https://git.qemu.org/git/qemu.git

         $ cd qemu

         $ git checkout v5.2.0 -b 5.2.0-pef
    
         $ wget https://github.com/farosas/qemu/commit/e25370c503ecde1698bbfaff2f965c5b43e8bef6.patch \
                        -O 0001-spapr-Add-capability-for-Secure-PEF-VMs.patch
    
         $ git am 0001-spapr-Add-capability-for-Secure-PEF-VMs.patch
    
         $ ./configure --target-list=ppc64-softmmu
    
         $ make -j $(nproc)
    
         $ sudo make install
    
  • All the QEMU binaries will be installed in /usr/local/bin

  • Disable SELinux: edit /etc/selinux/config and set SELINUX=disabled, or configure SELinux to allow access to files in /usr/local/bin. Without this change, libvirt fails to launch QEMU from /usr/local/bin.
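The persistent edit can be scripted. A hedged sketch, shown on a scratch copy of the config file so it is safe to run as-is; for the real change, apply the same sed to /etc/selinux/config as root and reboot:

```shell
# Work on a scratch copy so this snippet is side-effect free; the real file
# is /etc/selinux/config.
printf 'SELINUXTYPE=targeted\nSELINUX=enforcing\n' > selinux-config
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' selinux-config
grep '^SELINUX=' selinux-config   # SELINUX=disabled
```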

Install and configure VM

Video showing steps to build and configure a normal VM, with the ability to convert to secure VM later.

Step 1) Make a virtual machine and install Fedora in it:

  • Launch virt-install:

     $ /usr/bin/virt-install --connect=qemu:///system \
                  --hvm --accelerate \
                  --name 'fedora33' \
                  --machine pseries \
                  --memory=8192 \
                  --vcpu=8,maxvcpus=8,sockets=1,cores=8,threads=1 \
                  --location https://dl.fedoraproject.org/pub/fedora-secondary/releases/33/Everything/ppc64le/os/ \
                  --nographics \
                  --serial pty \
                  --memballoon model=virtio \
                  --controller type=scsi,model=virtio-scsi \
                  --disk path=/var/lib/libvirt/images/fedora33-secure.qcow2,bus=scsi,size=30,format=qcow2 \
                  --network=bridge=virbr0,model=virtio,mac=52:54:00:0a:90:bc \
                  --mac=52:54:00:0a:90:bc \
                  --noautoconsole \
                  --boot emulator=/usr/local/bin/qemu-system-ppc64 \
                  --extra-args="console=tty0 inst.text console=hvc0" && \
                  virsh console fedora33
    
  • It will ask for a VNC password. Provide the VNC password of your choice and confirm it again. It will then provide you with an IP address and the VNC port to connect.

  • In my case I am given 'Please manually connect your vnc client to 192.168.122.133:1 to begin the install.'

  • In a separate terminal, create an ssh tunnel to that port:

     $ ssh <OPENPOWER-MACHINE> -L 2202:192.168.122.133:5901
    
  • Connect to the VNC port using vncviewer localhost:2202

  • Type in the VNC password that was created earlier and start the installation of Fedora.

  • During installation select 'encrypt my data' to encrypt the root disk and use a suitable disk password.

  • Once installation is complete, the VM is shut down. Restart it and make sure it is good and healthy:

     $ virsh start fedora33 --console
    
  • It should ask for the root disk password, then boot up, and you should be able to log in.

  • Now shutdown the VM:

     $ virsh shutdown fedora33
    

Step 2) Add a TPM device to the VM and add secure capability:

  • Using virsh edit fedora33, add the qemu namespace to the kvm domain node:

     <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
    
  • Then add the following XML excerpt in the devices section:

     <tpm model='spapr-tpm-proxy'>
     	<backend type='passthrough'>
     		<device path='/dev/tpmrm0'/>
     	</backend>
     </tpm>
    
  • And finally add the following in a separate section:

     <qemu:commandline>
     	<qemu:arg value='-M'/>
     	<qemu:arg value='pseries,cap-svm=off'/>
     	<qemu:arg value='-global'/>
     	<qemu:arg value='virtio-scsi-pci.disable-legacy=on'/>
     	<qemu:arg value='-global'/>
     	<qemu:arg value='virtio-scsi-pci.disable-modern=off'/>
     	<qemu:arg value='-global'/>
     	<qemu:arg value='virtio-scsi-pci.iommu_platform=on'/>
     	<qemu:arg value='-global'/>
     	<qemu:arg value='virtio-blk-pci.disable-legacy=on'/>
     	<qemu:arg value='-global'/>
     	<qemu:arg value='virtio-blk-pci.disable-modern=off'/>
     	<qemu:arg value='-global'/>
     	<qemu:arg value='virtio-blk-pci.iommu_platform=on'/>
     	<qemu:arg value='-global'/>
     	<qemu:arg value='virtio-net-pci.disable-legacy=on'/>
     	<qemu:arg value='-global'/>
     	<qemu:arg value='virtio-net-pci.disable-modern=off'/>
     	<qemu:arg value='-global'/>
     	<qemu:arg value='virtio-net-pci.iommu_platform=on'/>
     	<qemu:arg value='-global'/>
     	<qemu:arg value='virtio-serial-pci.disable-legacy=on'/>
     	<qemu:arg value='-global'/>
     	<qemu:arg value='virtio-serial-pci.disable-modern=off'/>
     	<qemu:arg value='-global'/>
     	<qemu:arg value='virtio-serial-pci.iommu_platform=on'/>
     	<qemu:arg value='-global'/>
     	<qemu:arg value='virtio-balloon-pci.disable-legacy=on'/>
     	<qemu:arg value='-global'/>
     	<qemu:arg value='virtio-balloon-pci.disable-modern=off'/>
     	<qemu:arg value='-global'/>
     	<qemu:arg value='virtio-balloon-pci.iommu_platform=on'/>
     </qemu:commandline>
    
     The ' pseries,cap-svm=off ' option in the qemu command line primes the SVM capability in the VM,
     but does not enable it. All the other virtio-related args must be explicitly specified as above.
     There are patches from David Gibson that will make these options unnecessary; once the patches
     are merged, the options can be deleted.
    
  • Save the XML file and exit the editor.

  • Here is a sample XML file in full:

  <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>fedora33</name>
  <uuid>49c508c1-b664-4ea0-b260-a9e85dcfd694</uuid>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://fedoraproject.org/fedora/33"/>
    </libosinfo:libosinfo>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <vcpu placement='static'>8</vcpu>
  <os>
    <type arch='ppc64le' machine='pseries-5.2'>hvm</type>
    <boot dev='hd'/>
  </os>
  <cpu mode='custom' match='exact' check='none'>
    <model fallback='forbid'>POWER9</model>
    <topology sockets='1' dies='1' cores='8' threads='1'/>
  </cpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/local/bin/qemu-system-ppc64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/fedora33-secure.qcow2'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </controller>
    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <model name='spapr-pci-host-bridge'/>
      <target index='0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:0a:90:bc'/>
      <source bridge='virbr0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
   </interface>
    <serial type='pty'>
      <target type='spapr-vio-serial' port='0'>
        <model name='spapr-vty'/>
      </target>
      <address type='spapr-vio' reg='0x30000000'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
      <address type='spapr-vio' reg='0x30000000'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <tpm model='spapr-tpm-proxy'>
      <backend type='passthrough'>
        <device path='/dev/tpmrm0'/>
      </backend>
    </tpm>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </rng>
    <panic model='pseries'/>
  </devices>
 <qemu:commandline>
    <qemu:arg value='-M'/>
    <qemu:arg value='pseries,cap-svm=on'/>
    <qemu:arg value='-global'/>
    <qemu:arg value='virtio-scsi-pci.disable-legacy=on'/>
    <qemu:arg value='-global'/>
    <qemu:arg value='virtio-scsi-pci.disable-modern=off'/>
    <qemu:arg value='-global'/>
    <qemu:arg value='virtio-scsi-pci.iommu_platform=on'/>
    <qemu:arg value='-global'/>
    <qemu:arg value='virtio-blk-pci.disable-legacy=on'/>
    <qemu:arg value='-global'/>
    <qemu:arg value='virtio-blk-pci.disable-modern=off'/>
    <qemu:arg value='-global'/>
    <qemu:arg value='virtio-blk-pci.iommu_platform=on'/>
    <qemu:arg value='-global'/>
    <qemu:arg value='virtio-net-pci.disable-legacy=on'/>
    <qemu:arg value='-global'/>
    <qemu:arg value='virtio-net-pci.disable-modern=off'/>
    <qemu:arg value='-global'/>
    <qemu:arg value='virtio-net-pci.iommu_platform=on'/>
    <qemu:arg value='-global'/>
    <qemu:arg value='virtio-serial-pci.disable-legacy=on'/>
    <qemu:arg value='-global'/>
    <qemu:arg value='virtio-serial-pci.disable-modern=off'/>
    <qemu:arg value='-global'/>
    <qemu:arg value='virtio-serial-pci.iommu_platform=on'/>
    <qemu:arg value='-global'/>
    <qemu:arg value='virtio-balloon-pci.disable-legacy=on'/>
    <qemu:arg value='-global'/>
    <qemu:arg value='virtio-balloon-pci.disable-modern=off'/>
    <qemu:arg value='-global'/>
    <qemu:arg value='virtio-balloon-pci.iommu_platform=on'/>
  </qemu:commandline>
</domain>

Configure secure memory to enable the Ultravisor.

Video showing steps to enable Ultravisor and enable secure-capability in the VM

  • The following command on the host displays the size of secure memory configured:

     $ sudo nvram -p ibm,skiboot  --print-config
     
    
  • To change the amount of secure memory configured to 64 GB, do the following:

     $ sudo nvram -p ibm,skiboot --update-config smf_mem_amt=0x1000000000
    
      (It is recommended to configure at least 16 GB of secure memory.)
    
  • To verify the change

    $ sudo nvram -p ibm,skiboot  --print-config
    "ibm,skiboot" Partition
    --------------------------
    smf_mem_amt=0x1000000000
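smf_mem_amt is a byte count expressed in hexadecimal. The value for a given number of gigabytes can be computed with shell arithmetic, which also confirms the two sizes mentioned above:

```shell
# smf_mem_amt is a byte count in hex; derive it from a GiB figure.
printf '64 GB -> 0x%x\n' $((64 * 1024 * 1024 * 1024))   # 0x1000000000
printf '16 GB -> 0x%x\n' $((16 * 1024 * 1024 * 1024))   # 0x400000000
```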
    
  • Shutdown the machine:

     $ sudo shutdown now
    
  • Power-cycle the machine: ssh into the BMC and run the following command (this can also be done through the web console):

     $ ssh root@<BMC-address>
     root@witherspoon:~# obmcutil --wait poweroff && obmcutil --wait poweron
    
  • On reboot, the presence of the file /sys/firmware/ultravisor/msglog indicates that the Ultravisor is enabled.

  • Enable the cap-svm capability of the fedora33 VM: using virsh edit fedora33, change cap-svm=off to cap-svm=on:

    <qemu:arg value='pseries,cap-svm=on'/>
    

Build svm-tools and svm-password-agent rpm

Video showing the steps to create the svm-tools and svm-password-agent rpm

  • This step can be skipped if you have already installed the rpms.

  • Install the rpm-build and poetry rpms:

          dnf install poetry rpm-build
    
  • Clone the svm-tools repository:

      git clone https://github.com/open-power/svm-tools.git
    
  • Make the rpms:

       cd svm-tools
       make rpm
    

    The svm-tools and svm-password-agent rpms are now ready under the RPMDIR directory.

Convert the VM to a secure VM.

Video showing the steps to convert a VM to secure VM

  • Boot up the VM and log in.

  • Install the svm-password-agent rpm. This agent enables the VM to boot without user interaction:

    $ sudo dnf install svm-password-agent-0.1.0-1.noarch.rpm
    
    [NOTE: the password agent depends on the availability of the nc command. Make sure it is installed; if not, install it:]
    
    $ sudo dnf install nmap-ncat
    
  • Set an environment variable KERN_VER capturing the version of the kernel:

     export KERN_VER=5.8.15-301.fc33.ppc64le
    

    [Please set KERN_VER to the correct version, depending on the version of the kernel installed in your VM.]
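If the kernel you want to secure is the one currently running in the VM, the version string can be derived rather than typed by hand:

```shell
# uname -r prints the running kernel's version string,
# e.g. 5.8.15-301.fc33.ppc64le.
export KERN_VER=$(uname -r)
echo "$KERN_VER"
```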

  • Regenerate the initramfs to absorb the SVM password agent.

     mkinitrd /boot/initramfs-${KERN_VER}.img ${KERN_VER}    
    
  • Install the svm-tool RPM. The tools help create the digital data needed to secure-boot the VM. This data is called the Enter-Secure-Mode blob, also known as the ESM blob.

     $ sudo dnf install svm-tool-0.1.0-1.noarch.rpm
    
  • Create a directory called 'key' and change into it:

        $ mkdir key; cd key
    
  • Copy the kernel image into the directory.

       $ cp /boot/vmlinuz-${KERN_VER} .
    
  • Copy the initramfs image into the directory.

       $ cp /boot/initramfs-${KERN_VER}.img .
    
  • Collect the kernel command line parameters in a file named 'cmd' and append the SVM-related options:

        $ cat /proc/cmdline > cmd
        $ sed -ie 's/$/ svm=on xive=off/g' cmd
    

    [NOTE: xive is currently not supported in SVMs, so explicitly switch it off.]
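The two commands above can be rehearsed end-to-end on a throwaway command line (the BOOT_IMAGE and root values here are made up):

```shell
# Side-effect-free rehearsal of the 'cmd' step with a fabricated cmdline.
echo 'BOOT_IMAGE=/vmlinuz root=/dev/sda2 ro console=hvc0' > cmd
sed -i 's/$/ svm=on xive=off/g' cmd
cat cmd   # the SVM options are now appended at the end of the line
```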

  • Get the host TPM's public wrapping key. Run the following command on the host:

        $ sudo tssreadpublic -ho 81800001 -opem tpmrsapubkey.pem

    Then copy the file tpmrsapubkey.pem to the 'key' directory on the guest.
    
  • Create a file key_file.txt and capture the root disk passphrase in that file:

      $ echo "root disk passphrase"  > key_file.txt
    
      eg: if your root disk passphrase is "abc123":   echo "abc123" > key_file.txt
    
  • Generate the owner's public/private key. This is a one-time step taken by the owner of the VM. This generates two files rsaprivkey and rsapubkey:

     $ svm-tool esm generate -p rsapubkey -s rsaprivkey
     
    [ NOTE: these two keys can be reused for rekeying other VMs owned by the owner]
    
    
  • Create an svm_blob.yml file and fill in the following contents:

    [ replace ${KERN_VER} with the correct value. Also replace ${cmd} with the contents of the file cmd ]
    
    
        - origin:
             pubkey:    "rsapubkey"
             seckey:    "rsaprivkey"
        - recipient:
             comment:   "HOSTNAME TPM"
             pubkey:    "tpmrsapubkey.pem"
        - file:
             name:      "rootd"
             path:      "key_file.txt"
        - digest:
             args:      "${cmd}"
             initramfs: "initramfs-${KERN_VER}.img"
             kernel:    "vmlinuz-${KERN_VER}"
    
  • Here's what the fields in the file mean:

    • origin/pubkey: the file containing the owner's public key.
    • origin/seckey: the file containing the owner's private key.
    • recipient/comment: a free-form comment; it can be anything.
    • recipient/pubkey: the file containing the public wrapping key of the host TPM.
    • digest/args: the kernel command line of the kernel running in the guest. It is generally the output of the command cat /proc/cmdline.
      • Of course, you have to make sure that svm=on xive=off is added to the cmdline.
    • digest/initramfs: the file containing the initramfs of the kernel that you want to boot securely.
    • digest/kernel: the file containing the kernel that you want to boot securely.
    • file: this section is optional. It captures any secrets to be made available to the SVM. There has to be one file section per secret.
    • file/name: the name of the secret. The name acts as a handle for the SVM to procure the value of the secret. For the root disk passphrase, "rootd" is a reserved name; use it to capture the root disk passphrase.
    • file/path: the name of the file holding the secret. The secret cannot be larger than 64 bytes.

Here is an example svm_blob.yml file:

- origin:
         pubkey:   "rsapubkey"
         seckey:   "rsaprivkey"
- recipient:
         comment:  "etna TPM"
         pubkey:   "tpmrsapubkey.pem"
- file:
         name:  "rootd"
         path:  "key_file.txt"
- digest:
         args:  "BOOT_IMAGE=/vmlinuz-5.10.15-100.fc32.ppc64le root=/dev/mapper/fedora-root ro rd.lvm.lv=fedora/root rd.luks.uuid=luks-2b39ebb5-2910-421a-a8aa-a44e9bd4457da rd.lvm.lv=fedora/swap console=hvc0 svm=on xive=off swiotlb=262144"
         initramfs: "initramfs-5.10.15-100.fc32.ppc64le.img"
         kernel:    "vmlinuz-5.10.15-100.fc32.ppc64le"

  • Generate the ESM blob (ESM stands for Enter Secure Mode).

      [Make sure the binutils package is installed on the system: dnf install binutils]
    

    $ svm-tool esm make -b test_esmb.dtb -y svm_blob.yml

      - This will generate the ESM blob in the file `test_esmb.dtb`.
    
      - NOTE: ESM blob is sometimes referred to as ESM-operand.
    
  • Add the ESM blob to the initramfs and generate a new initramfs:

      $ svm-tool svm add -i initramfs-${KERN_VER}.img -b test_esmb.dtb -f esmb-initrd.img
    
  • Edit the file /etc/default/grub to add svm=on xive=off to the GRUB_CMDLINE_LINUX variable. [NOTE: the string should exactly match the string captured in the ESM blob. Yes, it is very sensitive; any mismatch fails to boot the secure VM.]

      This is my grub file:
      ```
      GRUB_TIMEOUT=5
      GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
      GRUB_DEFAULT=saved
      GRUB_DISABLE_SUBMENU=true
      GRUB_TERMINAL_OUTPUT="ofconsole"
      GRUB_CMDLINE_LINUX="rd.luks.uuid=luks-0219595c-4d6b-43e0-93b6-323be505d6c0 console=hvc0 svm=on xive=off"
      GRUB_DISABLE_RECOVERY="true"
      GRUB_ENABLE_BLSCFG=true
      GRUB_TERMINFO="terminfo -g 80x24 console"
      GRUB_DISABLE_OS_PROBER=true
      ```
    
  • Replace the initramfs in the /boot directory. Make a copy of the original initramfs first, then replace it with the newly generated initramfs:

      $ sudo cp /boot/initramfs-${KERN_VER}.img /boot/initramfs-${KERN_VER}.img.orig

      $ sudo cp esmb-initrd.img /boot/initramfs-${KERN_VER}.img
    
  • Regenerate the GRUB config:

         $ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
    
  • Reboot the VM.

    On the next reboot, the VM should boot in secure mode.

    Note: there is a bug in the host kernel where the VM sometimes exits with error -4. This happens when the VM is in the process of switching from normal mode to secure mode.

    Don't panic. Destroy and restart the VM with virsh destroy fedora33; virsh start fedora33 --console and it should boot up.

  • Log in and verify that it is a secure VM:

      [root@localhost ~]# cat /sys/devices/system/cpu/svm

    The output must be '1'.

Validate the confidentiality of the Secure VM

Video showing steps to validate the confidentiality of the Secure VM.

[NOTE: a dump operation on an SVM larger than 4 GB can hang the system; there is a known memory-exhaustion bug in the ultravisor. To validate this step, change the size of the SVM to 4 GB and try the steps below.]

  • Take a dump of the secure VM and check for any plain-text strings.

    Note: this is not a perfect secrecy checker; we have to build better ways of validating this.

    $ virsh dump fedora33 /home/f33.secure.dump --memory-only --reset

    $ strings /home/f33.secure.dump | grep netdev
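The check can be rehearsed on a throwaway file (sample.bin is fabricated here; in a real run you would point strings at the virsh dump output instead):

```shell
# Build a fake "memory dump" with an embedded plain-text string, then scan it
# the same way as the real dump; a hit like this in a real SVM dump would
# mean plaintext leaked out of secure memory.
printf 'binary\0data netdev=eth0\0more-bytes' > sample.bin
strings sample.bin | grep netdev
```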

How to update just the Ultravisor

  • Clone the ultravisor repository:

     git clone https://github.com/open-power/ultravisor.git
     cd ultravisor

  • Check out the correct version of the ultravisor. The commit-id used in the PNOR is 52857eb; the commit-id that fixes the H_CEDE hcall bug is e9c930f.

     git checkout e9c930f -b eurosys-eval

  • Compile the ultravisor:

     make -j$(nproc)

    An ultravisor binary file ultra.lid.xz.stb is generated.

  • In a separate terminal, copy the ultravisor binary file ultra.lid.xz.stb to the BMC:

     scp ultra.lid.xz.stb root@<BMC-address>:/usr/local/share/pnor/UVISOR

  • Log into the BMC:

     ssh root@<BMC-address>

  • Restart the server:

     obmcutil chassisoff && sleep 10 && systemctl restart mboxd.service && obmcutil chassison && sleep 10 && obmcutil poweron

    You should start seeing the boot-up messages on the console.

  • Once the machine is fully booted, verify the version of the active ultravisor:

     head -1 /sys/firmware/ultravisor/msglog