# vHive full local snapshots

The default snapshots in vHive use an offloading-based technique that leaves the shim and other resources running when a microVM is shut down, so that they can be reused in the future. This has the advantage that the shim does not have to be recreated and the block and network devices of the previously stopped microVM can be reused, but it limits the number of microVMs that can be booted from a snapshot to the number of microVMs that have been offloaded. An alternative approach is to allow loading an arbitrary number of microVMs from a single snapshot by creating a new shim and new block and network devices upon loading a snapshot.

This functionality can be enabled by running vHive with the `-snapshots -fulllocal` flags. Additionally, the following flags can be used to further configure the full local snapshots (an example invocation is shown after the list):

* `-isSparseSnaps`: store the memory file as a sparse file to make its storage size closer to the actual size of the memory utilized by the microVM, rather than to the memory allocated to the microVM
* `-snapsStorageSize [capacityGiB]`: specify the storage capacity, in GiB, that can be used to store snapshots
* `-netPoolSize [capacity]`: the number of network devices kept in the network pool, which microVMs can use to keep network initialization off the cold start path
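
As an illustrative sketch, enabling full local snapshots could look as follows; the binary path and the capacity and pool-size values are placeholder assumptions, not recommendations:

```bash
# Run vHive with full local snapshots enabled (illustrative values;
# adjust the binary path, capacity, and pool size to your deployment).
sudo ./vhive -snapshots -fulllocal -isSparseSnaps -snapsStorageSize 100 -netPoolSize 10
```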

## Remote snapshots
Rather than only using the snapshots available locally on a node, snapshots can also be transferred between nodes to potentially accelerate cold starts and reduce memory utilization, given that proper mechanisms are in place to minimize the snapshot network transfer latency. This could be done by storing snapshots in a global storage solution such as S3, or by distributing snapshots directly between compute nodes. The full local snapshot functionality in vHive can be used as a building block to implement this. To restore a remote snapshot, the container image used by the snapshotted microVM must be available on the node where the snapshot will be restored. This image can be combined with the filesystem changes stored in the snapshot patch file to create a device mapper snapshot that contains the root filesystem needed by the restored microVM. After recreating the root filesystem block device, the microVM can be created from the fetched memory file and microVM state, similarly to how this is done for full local snapshots.
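
As a hypothetical sketch of this flow, assume the snapshot artifacts (memory file, microVM state file, and filesystem patch file) were previously uploaded to an S3 bucket; all bucket, file, and image names below are illustrative assumptions:

```bash
# Hypothetical remote-snapshot restore flow on the target node.

# 1. Fetch the snapshot artifacts from global storage (names are assumed).
aws s3 cp s3://snapshot-bucket/myfunc/mem_file ./mem_file      # guest memory
aws s3 cp s3://snapshot-bucket/myfunc/snap_file ./snap_file    # microVM state
aws s3 cp s3://snapshot-bucket/myfunc/patch_file ./patch_file  # filesystem changes

# 2. Ensure the container image of the snapshotted microVM is available
#    locally, e.g. by pulling it through containerd.
sudo ctr images pull docker.io/myrepo/myfunc:latest

# 3. vHive (running with -snapshots -fulllocal) can then recreate the root
#    filesystem as a device mapper snapshot from the image plus the patch
#    file, and boot the microVM from the fetched memory file and state.
```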
